8/17/2019 Summary Artificial Intelligence 1 - RuG
Summary: Artificial Intelligence 1 Ramon Meffert, 2015
Introduction
This summary is based on Artificial Intelligence: A Modern Approach (3rd edition), as taught by Arnold Meijster in the course Artificial Intelligence 1. The chapters included in this summary (as per Meijster's assignment) are:
• Chapters 1, 2 and 3 in their entirety;
• Chapter 4, excluding 4.2 and 4.5;
• Chapter 5, up to 5.4.3;
• Chapter 6, excluding pages 225 and 226;
• Chapters 7, 8, 9 and 13 in their entirety.
Contents
Contents

1 Introduction
  1 What is AI?
    1.1 Acting humanly: The Turing Test approach
    1.2 Thinking humanly: The cognitive modeling approach
    1.3 Thinking rationally: The "laws of thought" approach
    1.4 Acting rationally: The rational agent approach
  2 The Foundations of Artificial Intelligence
    2.1 Philosophy
    2.2 Mathematics
    2.3 Economics
    2.4 Neuroscience
    2.5 Psychology
    2.6 Computer engineering
    2.7 Control theory and cybernetics
    2.8 Linguistics
  3 The History of Artificial Intelligence
    3.1 The gestation of artificial intelligence (1943-1955)
    3.2 The birth of artificial intelligence (1956)
    3.3 Early enthusiasm, great expectations (1952-1969)
    3.4 A dose of reality (1966-1973)
    3.5 Knowledge-based systems: The key to power? (1969-1979)
    3.6 AI becomes an industry (1980-present)
    3.7 The return of neural networks (1986-present)
    3.8 AI adopts the scientific method (1987-present)
    3.9 The emergence of intelligent agents (1995-present)
    3.10 The availability of very large data sets (2001-present)
  4 The State of the Art
2 Intelligent Agents
  1 Agents and Environments
  2 Good Behavior: The Concept of Rationality
    2.1 Rationality
    2.2 Omniscience, learning, and autonomy
  3 The Nature of Environments
    3.1 Properties of task environments
  4 The Structure of Agents
    4.1 Agent Programs
    4.2 Simple reflex agents
    4.3 Model-based reflex agents
    4.4 Goal-based agents
    4.5 Utility-based agents
    4.6 Learning Agents
    4.7 How the components of agent programs work
3 Solving Problems by Searching
  1 Problem-Solving Agents
    1.1 Well-defined problems and solutions
    1.2 Formulating problems
  2 Example Problems
    2.1 Toy problems
  3 Searching for Solutions
    3.1 Infrastructure for search algorithms
    3.2 Measuring problem-solving performance
  4 Uninformed Search Strategies
    4.1 Breadth-first search
    4.2 Uniform-cost search
    4.3 Depth-first search
    4.4 Depth-limited search
    4.5 Iterative deepening depth-first search
    4.6 Bidirectional search
    4.7 Comparing uninformed search strategies
  5 Informed (Heuristic) Search Strategies
    5.1 Greedy best-first search
    5.2 A* search: Minimizing the total estimated solution cost
    5.3 Memory-bounded heuristic search
    5.4 Learning to search better
  6 Heuristic Functions
    6.1 The effect of heuristic accuracy on performance
    6.2 Generating admissible heuristics from relaxed problems
    6.3 Generating admissible heuristics from subproblems: Pattern databases
    6.4 Learning heuristics from experience

Overview of pseudocode for agent types
Overview of pseudocode for search algorithms
Glossary
Chapter 1: Introduction
The study of intelligence tries to understand how we, as humans, think; artificial intelligence goes further and tries to build entities that can think.
1 What is AI?
Intelligence can be classified along two dimensions: the way an agent acts and the way an agent thinks. Either can be done humanly or rationally. As the name implies, thinking or acting humanly means that an agent thinks or acts in a way similar to humans. Rationality is an ideal measure of performance, based on some objective standard.
1.1 Acting humanly: The Turing Test approach
The Turing Test was designed to provide a satisfactory operational definition of intelligence. To pass it, an intelligent machine must be able to:
• communicate successfully (natural language processing);
• store what it knows or hears (knowledge representation);
• use this information to answer questions and to draw new conclusions (automated reasoning);
• adapt to new circumstances and detect and extrapolate patterns (machine learning).
An extended version of the Turing Test, known as the total Turing Test, includes a video signal and the possibility for the interrogator to pass physical objects to the machine. Therefore, to pass the total Turing Test, a machine must also include computer vision and robotics.
1.2 Thinking humanly: The cognitive modeling approach
Another approach to creating intelligence is to study the human mind. There are basically three ways to do this: introspection (observing one's own thoughts); psychological experiments (observing a person in action); and brain imaging (observing the brain in action). These methods are part of cognitive science, an interdisciplinary field combining experimental psychological data and computer models.
1.3 Thinking rationally: The "laws of thought" approach
Syllogisms provide patterns for argument structures that are always correct, e.g. Socrates is a man; all men are mortal; therefore, Socrates is mortal. These laws of thought, nowadays referred to as logic, were refined by 19th-century logicians. Through these refinements, it became possible to describe relations between objects in the world objectively. The result of these refinements is called the logicist tradition; within artificial intelligence, it attempts to create intelligent systems based on these premises.
1.4 Acting rationally: The rational agent approach
An agent is something that acts. A rational agent acts to achieve the best outcome or, when there is uncertainty, the best expected outcome. Although perfect rationality in complex environments is infeasible, it is a good starting point for analysis. Limited rationality implements rationality while keeping computational demands in mind, i.e. making sure the computations take less time at the cost of some quality of performance.
2 The Foundations of Artificial Intelligence
2.1 Philosophy
Philosophy has supplied some valuable concepts that are used in artificial intelligence. Many influential philosophers, such as Descartes, Aristotle and Leibniz, coined terms and theories that are still used today. Rationalism, for example, is the movement that advocates reasoning as the method for understanding the world. Dualism is the theory that part of the human mind/soul/spirit is outside of (the laws of) nature. Materialism is the opposite, arguing that the brain's operation according to the laws of physics constitutes the mind. The empiricism movement argues that everything can be understood by experiencing it. This is extended in the idea of induction, i.e. general rules are acquired by exposure to repeated associations between their elements. The doctrine of logical positivism holds that all knowledge can be characterized by logical theories connected, ultimately, to observation sentences that correspond to sensory inputs. Confirmation theory attempts to analyze the acquisition of knowledge from experience by defining an explicit computational procedure for extracting knowledge from elementary experiences.
2.2 Mathematics
For artificial intelligence to become a formal science, the theories founded by the logicians required mathematical formalization in three fundamental areas: logic, computation, and probability. At the basis of this lies the concept of the algorithm: a series of operations based on logic and mathematics that produces a specific sort of output from a specific sort of input. As Gödel has shown, limits on deduction exist: his incompleteness theorem showed that in any formal theory as strong as the elementary theory of natural numbers (Peano arithmetic), there are true statements that are undecidable in the sense that they have no proof within the theory. This also means that not every function is computable, i.e. representable by an algorithm. Even though it is impossible to give a formal definition of this notion, it is generally accepted that any function computable by a Turing machine is computable in general (the Church-Turing thesis). Tractability is another important concept: a problem is intractable if the time required to solve its instances grows exponentially with the size of the input. The theory of NP-completeness provides a method for recognizing intractable problems: any problem class to which the class of NP-complete problems can be reduced is likely to be intractable. Probability also plays a big role in AI, describing the chances of a certain outcome for an event. Bayes' rule is an instance of a probability rule that plays a large role in AI.
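Bayes' rule itself can be shown with a small worked example. The sketch below is illustrative only; the test numbers (90% sensitivity, 95% specificity, 1% prior) are assumptions, not from the text.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not H) * P(not H).

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Assumed example: a test with 90% sensitivity and 95% specificity
# for a condition with a 1% base rate.
p = posterior(p_h=0.01, p_e_given_h=0.90, p_e_given_not_h=0.05)
print(round(p, 3))  # 0.154: a positive test raises the 1% prior to about 15%
```

The point of the example is that even a fairly accurate test yields a modest posterior when the prior is small, which is exactly the kind of reasoning under uncertainty AI systems need.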
2.3 Economics
Several important notions originate in the field of economics. For example, utility theory deals with the mathematics of preferred outcomes. Decision theory combines probability theory and utility theory and provides a formal and complete framework for decisions made under uncertainty. This is only applicable in "large" economies where the actions of individuals do not count significantly. If they do count, the domain of game theory applies: individual actions have great consequences, and rational agents should adopt policies that are (or at least appear to be) randomized. Operations research deals with situations in which a payoff does not arise immediately, but only after a sequence of actions. A specific class of formalized sequential decision problems is called Markov decision processes. For many applications, satisficing (making decisions that are "good enough" rather than calculating the optimal decision) gives a better description of actual human behavior.
2.4 Neuroscience
Neuroscience is the study of nervous systems, in particular the brain. Although the exact workings of the brain are still unclear, the fact that it facilitates thought has been clear for thousands of years, since damage to the head leads to great mental incapacitation. The brain consists of neurons, simple nerve cells which, in their large numbers, can lead to thought, action and consciousness.

"Brains cause minds." – John Searle

Comparing the number of nerve cells in the brain to the number of processing units in modern computers leads some people, futurists, to believe that a singularity is approaching: a point at which computers reach a superhuman level of performance.
2.5 Psychology
Psychology concerns itself with the understanding
of the workings of the human mind. As one wayof creating intelligence is through mimicking hu-man intelligence, this field of study is an importantinfluence on AI. An initiating movement withinpsychology was behaviorism: this movement re- jected any theory involving mental processes on thegrounds that introspection could not provide reli-able evidence. Instead, they focused on objectivemeasures of of percepts leading to responses. Thisworked quite well for rats and pigeons, but not somuch for humans.
Cognitive psychology, which focuses on the brain as an information-processing device. Itreestablished the legitimacy of “mental” terms suchas beliefs and goals, arguing that they are justas scientific as any other terms. Kenneth Craikspecified the three key steps of a knowledge-basedagent: (1) translating the stimulus to an internalrepresentation; (2) manipulating of the represen-tation by cognitive processes; (3) translating theseinto action.
2.6 Computer engineering
Computer engineering facilitates the very existence of artificial intelligence. The modern computer was invented three times, almost independently and almost simultaneously. In World War II, many advances were made in computing technology, resulting successively in the first operational computer, the first operational programmable computer, and the first electronic computer. (The first programmable machine, however, was a loom that used punch cards to store the patterns to be woven.) The software side of computing science has greatly facilitated AI as well, but AI has since made some significant contributions to software, such as the linked list data type.
2.7 Control theory and cybernetics
Control theory concerns the process by which an artifact (a non-living object in the real world) controls itself to maintain a stable state. Advances in mathematical and computational models of cognition produced a spike in interest in cybernetics, highlighting the possibility of artificially intelligent machines. The idea that a homeostatic device (i.e. a device maintaining an internal balance through appropriate feedback loops) could achieve stable adaptive behavior was coined by W. Ross Ashby. Modern control theory has as its goal the design of systems that maximize an objective function over time.
2.8 Linguistics
In 1957, B.F. Skinner published Verbal Behavior, a book explaining language using behaviorist science. Noam Chomsky, who had just published his own theory in Syntactic Structures, wrote a review criticizing Skinner's work for not recognizing the notion of creativity in language, which, for example, would explain how a child can understand and produce sentences it has never heard before. Because Chomsky's theories were so formal, they provided the necessary rigor for the creation of the field of computational linguistics or natural language processing. This has made it possible for computers to "understand" language.
3 The History of Artificial Intelligence
3.1 The gestation of artificial intelligence (1943-1955)
Warren McCulloch and Walter Pitts created a computational network based on neurons, drawing on the basic physiology of the brain, a formal analysis of propositional logic due to Russell and Whitehead, and Turing's theory of computation. It was a primitive neural network capable of computing simple formulas, and it introduced weighted edges. Donald Hebb later demonstrated a simple rule for updating these connection strengths, now known as Hebbian learning. The first real neural network computer, called SNARC, was built in 1950.
3.2 The birth of artificial intelligence (1956)
A meeting of several influential people in AI occurred at a two-month workshop organized in the summer of 1956. At this workshop, the Logic Theorist, created by Newell and Simon, stole the show: it was a program able to reason, and it proved many of the theorems in chapter 2 of Russell and Whitehead's Principia Mathematica. This workshop shows why AI became a separate field; contrary to many fields that pursue similar objectives, AI embraces the idea of duplicating human faculties such as creativity, self-improvement and language use.
3.3 Early enthusiasm, great expectations (1952-1969)
Compared to the capabilities of computers at the time, the early years of AI were quite fascinating. Since almost all these machines could do was simple arithmetic, anything remotely clever was received as an amazing feat of technology. Even though the programs were still very basic, Newell and Simon were able to formulate the physical symbol system hypothesis, stating that "a physical symbol system has the necessary and sufficient means for general intelligent action." A number of AI programs were created, and the high-level programming language LISP was invented; it remained the dominant programming language in AI for the next 30 years.

A certain set of programs worked on so-called microworlds: limited domains on which operations could be performed, such as solving closed-form calculus integration problems. A famous example of a microworld program is SHRDLU, which was able to reason about a spatial environment and perform some simple actions on it.

Hebb's learning methods were enhanced by Bernie Widrow, who called his networks adalines, and by Frank Rosenblatt with his perceptrons. The perceptron convergence theorem says that the learning algorithm can adjust the connection strengths of a perceptron to match any input data, provided such a match exists.
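The perceptron learning rule can be sketched in a few lines. This is a minimal illustration, not historical code; the logical-AND training set and the learning parameters are assumptions chosen so that the data are linearly separable, which is exactly the condition under which the convergence theorem applies.

```python
# Perceptron learning: on a misclassified example, nudge the
# connection strengths toward (or away from) the input vector.
def train_perceptron(data, epochs=20, lr=1.0):
    w = [0.0, 0.0]   # connection strengths (weights)
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred                     # -1, 0, or +1
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

# Assumed toy data: the logical AND function, which is linearly separable.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print(all((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
          for x, t in AND))  # True: the perceptron has converged
```

On data that is not linearly separable (e.g. XOR), no such match exists and the rule never converges, which is precisely the limitation that later fueled criticism of perceptrons.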
3.4 A dose of reality (1966-1973)
Following the enthusiasm and successes of early AI, some very hopeful predictions were made about what AI would be able to do in the near future. These predictions mostly did come true, but not within the timeframe imagined; it took, for example, 40 years instead of 10 to create a chess champion program and to have a significant mathematical theorem proved by a computer. Another telling example of the high expectations showed when a computer program meant to translate Russian documents into English retranslated "The spirit is willing but the flesh is weak" as "The vodka is good but the meat is rotten." It also became clear that many methods used before were intractable, meaning that they did not scale well.
Chapter 2: Intelligent Agents
1 Agents and Environments
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Percepts are the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything the agent has ever perceived. The agent function maps any given percept sequence to an action. The agent function is realized by the agent program: the program is a concrete implementation, whereas the function is the "mathematical" representation. It is also possible to represent the agent function as a table; however, for most agents, this table would be virtually infinite.
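For small worlds, the tabulated agent function is perfectly workable. The sketch below uses the book's two-square vacuum world; for brevity it keys the table on the single current percept rather than the full percept sequence, which is an assumed simplification.

```python
# A tabulated agent function for the two-square vacuum world:
# each percept (location, status) maps directly to an action.
VACUUM_TABLE = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def table_driven_agent(percept, table=VACUUM_TABLE):
    """Look the current percept up in the table and return an action."""
    return table[percept]

print(table_driven_agent(("A", "Dirty")))  # Suck
```

With two locations and two statuses the table has only four entries; for realistic tasks the number of distinct percept sequences explodes, which is why the table representation is only a conceptual device.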
2 Good Behavior: The Concept of Rationality
A rational agent is an agent that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly. However, "doing the right thing" is a vague notion. It is made concrete by considering the consequences of the agent's actions: the actions modify the environment states, and when the resulting sequence of states is desirable, the agent has performed well. The notion of desirability is captured by a performance measure that evaluates any given sequence of environment states. It is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
2.1 Rationality
The definition of a rational agent is as follows:
"For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has."
2.2 Omniscience, learning, and autonomy
Keep in mind that rationality does not entail omniscience. Whereas an omniscient agent could predict its environment perfectly, a rational agent can only make reasonable assumptions. Rationality maximizes expected performance, while perfection maximizes actual performance. Hence, information gathering (doing actions in order to modify future percepts) is an important part of rationality, and exploration is an important form of it. Besides gathering information, it is also important that an agent learns as much as possible from what it perceives: prior knowledge alone is generally not enough to perform well. To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous and effectively independent.
3 The Nature of Environments
The task environment is the environment in which a task is to be executed. It can be defined using the PEAS description (Performance, Environment, Actuators, Sensors). The performance measure describes the goals for the agent; the environment describes the kind of environment the agent will be acting in; the actuators describe the ways in which the agent can modify its environment; and the sensors describe the ways in which the agent can receive input.
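As a concrete illustration, the book's automated-taxi example can be written out as a PEAS description; the entries below are paraphrased from that example and collected in a plain data structure.

```python
# PEAS description of an automated taxi, as a simple dictionary.
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip",
                    "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal",
                    "horn", "display"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS",
                    "odometer", "keyboard"],
}

# The four PEAS categories, in alphabetical order:
print(sorted(taxi_peas))
```

Writing the description down first forces the designer to state what "performing well" means before choosing an agent design.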
Some software agents (softbots), in contrast with agents in limited physical settings, exist in rich, unlimited domains.
3.1 Properties of task environments
Although there is a vast number of task environments that might arise in AI, they can be categorized along a fairly small number of dimensions:
fully observable: the agent's sensors give it access to the complete (relevant) state of the environment at each point in time.
partially observable: the agent's sensors might be noisy, or part of the environment might simply not be visible to the agent's sensors.
unobservable: the agent has no sensors.

single agent: the agent is the only agent to be taken into consideration.
multi-agent: the agent is not the only agent in the environment (though this depends on the definition of agent).
competitive: the performance measure of one agent decreases the performance measure of the other.
cooperative: the agents share a (somewhat) common performance measure.

deterministic: the next state of the environment is completely determined by the current state and the action executed by the agent.
stochastic: not deterministic.
uncertain: the environment is not fully observable or not fully deterministic.
nondeterministic: actions are characterized by their possible outcomes, but no probabilities are attached to them.

episodic: the agent's experience is divided into atomic episodes, in each of which it receives a percept and then performs a single action; the action does not depend on actions taken in previous episodes.
sequential: the current decision could affect all future decisions.

static: the environment does not change while the agent is deliberating.
dynamic: the environment does change while the agent is deliberating.
semi-dynamic: the environment does not change, but the performance score does.

discrete: a finite number of distinct states and finite sets of percepts and actions.
continuous: an infinite number of distinct states, or infinite sets of percepts or actions.

known: the laws of the environment (the outcomes of all actions) are known to the agent.
unknown: they are not.
4 The Structure of Agents
The agent program requires some kind of architecture to run on; thus, the agent is the combination of the agent program and the architecture.
4.1 Agent Programs
There are multiple types of agent programs, some more effective than others. A relatively naive approach is the table-driven agent: it simply stores an action for every possible percept sequence. As one might expect, this is, for most tasks, rather infeasible; an automated taxi would require a lookup table with over 10^250,000,000,000 entries.
4.2 Simple reflex agents
Simple reflex agents select actions on the basis of the current percept, ignoring the rest of the percept history. They work using condition-action rules, written as if X then Y. Simple reflex agents might run into infinite loops; one solution to this problem is to randomize (certain aspects of) the agent's actions.
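Condition-action rules can be sketched directly as code. This is an illustrative vacuum-world-style agent; the rule set and the percept format (a dictionary with a location and a status) are assumptions for the example.

```python
# A simple reflex agent: an ordered list of condition-action
# rules, applied to the current percept only.
RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),    # if dirty then suck
    (lambda p: p["location"] == "A",   "Right"),   # if at A then go right
    (lambda p: p["location"] == "B",   "Left"),    # if at B then go left
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # Suck
```

Note that the agent has no memory at all: two identical percepts always produce the same action, which is exactly how the infinite-loop problem can arise in a partially observable world.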
4.3 Model-based reflex agents
A method of handling partial observability is to maintain some internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. For this to work, the agent needs some kind of model of the world. This model can be very simple (e.g. Boolean circuits) or very complicated (e.g. complete scientific theories). An agent that uses such
a model is called a model-based agent.
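The structure of such an agent can be sketched as: update the internal state from the percept, then choose an action from that state. The class below is an illustrative toy (the "model" just remembers which squares were last seen dirty); its names and percept format are assumptions.

```python
# A model-based reflex agent: internal state updated from the
# percept history, then a rule applied to the state.
class ModelBasedAgent:
    def __init__(self):
        self.state = {}  # best guess of the (partially observed) world

    def update_state(self, percept):
        # toy world model: remember the last observed status per square
        location, status = percept
        self.state[location] = status

    def choose_action(self):
        dirty = [loc for loc, s in self.state.items() if s == "Dirty"]
        return ("Suck", dirty[0]) if dirty else ("NoOp", None)

    def __call__(self, percept):
        self.update_state(percept)
        return self.choose_action()

agent = ModelBasedAgent()
print(agent(("A", "Dirty")))  # ('Suck', 'A')
```

The key difference from a simple reflex agent is the `self.state` attribute: the chosen action can depend on squares the agent is not currently observing.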
4.4 Goal-based agents
A goal is necessary for making decisions; even when an agent knows something about its environment, making a choice is arbitrary unless there is a goal. The subfields of search and planning focus on these kinds of problems.
4.5 Utility-based agents
While goals provide direction and often a path to the solution, they do not necessarily yield the best solution. A way to solve this is by introducing utility, which assigns values to different options; the agent then tries to optimize its utility function. This can be seen by analogy with a taxi driver and their client: the better the route, the happier the client. Because agents generally cannot be omniscient, the expected utility is used.
4.6 Learning Agents
A learning agent relies on four basic components in order to operate: a learning element, a performance element, a critic and a problem generator. The learning element is responsible for making improvements to the performance element, taking input from the critic. The problem generator tries to find new and informative experiences, so as to improve the other elements. A reward or penalty is used as a form of direct feedback on the quality of the agent's behavior.
4.7 How the components of agent programs work
There are roughly three ways of representing the environment that the agent inhabits. Ordered in terms of increasing complexity and expressive power (expressiveness), these are: atomic, factored and structured. In an atomic representation, each state of the world is indivisible: it has no internal structure. A factored representation splits up each state into a fixed set of variables or attributes, each of which can have a value. Structured representations allow for explicit descriptions of relationships between objects.
Chapter 3: Solving Problems by Searching
Problem-solving agents are a kind of goal-based agent that uses an atomic representation. Goal-based agents that use a factored or structured representation are usually called planning agents. This chapter aims to precisely define problems and their solutions. Both uninformed and informed algorithms are considered: the former are given no information about the problem other than its definition, while the latter receive some guidance.
1 Problem-Solving Agents
Goals help organize behavior by limiting the objectives that the agent is trying to achieve and hence the actions it needs to consider. Goal formulation is based on the current situation and the agent's performance measure. Problem formulation is the process of deciding what actions and states to consider, given a goal. An agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value.

For problem-solving agents, we assume that the environment is observable, discrete, known and deterministic. The process of looking for a sequence of actions that reaches the goal is called search; the intended output is the solution. Once a solution is found, the execution phase is entered. While executing the solution, the agent ignores its percepts, as they are known in advance. This is called an open-loop system.
1.1 Well-defined problems and solutions
A problem can be defined by five components: the initial state; the possible actions that are applicable in a state s; the transition model, i.e. a description of what each action does; the goal test to check whether the goal has been reached; and the path cost function that assigns a numeric cost to each path. Any state reachable from s by an applicable action is a successor. Together, the initial state, actions and transition model implicitly define the state space of the problem, which forms a directed network or graph. A path in the state space is a sequence of states connected by a sequence of actions. The step cost of taking action a in state s to reach state s' is denoted by c(s, a, s'). A solution to a problem is an action sequence that leads from the initial state to a goal state. An optimal solution has the lowest path cost among all solutions.
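The five components can be collected in a small class. The class itself follows the definition above; the "line world" used to instantiate it (three numbered squares, move left or right, reach square 2) is an assumed toy example.

```python
# A well-defined problem: initial state, actions, transition
# model, goal test, and step cost function.
class Problem:
    def __init__(self, initial, actions, result, goal_test, step_cost):
        self.initial = initial        # the initial state
        self.actions = actions        # actions(s): applicable actions in s
        self.result = result          # transition model: result(s, a) = s'
        self.goal_test = goal_test    # goal_test(s): is s a goal state?
        self.step_cost = step_cost    # c(s, a, s'): cost of one step

line_world = Problem(
    initial=0,
    actions=lambda s: [a for a in (-1, +1) if 0 <= s + a <= 2],
    result=lambda s, a: s + a,
    goal_test=lambda s: s == 2,
    step_cost=lambda s, a, s2: 1,
)

def solution_cost(problem, state, actions):
    """Path cost of an action sequence: the sum of its step costs."""
    total = 0
    for a in actions:
        nxt = problem.result(state, a)
        total += problem.step_cost(state, a, nxt)
        state = nxt
    return total

print(solution_cost(line_world, line_world.initial, [+1, +1]))  # 2
```

Here [+1, +1] is a solution (it reaches the goal state 2 from the initial state 0) and its path cost of 2 is optimal, since no shorter action sequence reaches the goal.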
1.2 Formulating problems
The process of removing detail from a representation is called abstraction. The abstraction is valid if we can expand any abstract solution into a solution in the more detailed world.
2 Example Problems
A distinction is made between toy problems and real-world problems. A toy problem is intended to illustrate or exercise various problem-solving methods. A real-world problem is one whose solutions people actually care about.1
2.1 Toy problems
Toy problems have either an incremental formulation or a complete-state formulation.
3 Searching for Solutions
To represent a solution to a problem, a search tree is used. At the root is the initial state, the branches are actions and the nodes correspond to states in the state space of the problem. The generation of new states from the current state is called expansion: the parent node generates some child nodes. Leaf nodes are nodes with no children. The set of all leaf nodes available for expansion at any given point is called the frontier (sometimes referred to as the open list). The way an algorithm moves through the tree is called the search strategy. A state is a repeated state when it occurs multiple times, e.g. because of a loopy path. Loopy paths are a special case of the more general concept of redundant paths, which exist whenever there is more than one way to get from one state to another. In games (but not exclusively in games), rectangular grids are used. To avoid visiting states twice, the tree-search algorithm is augmented with the explored set (also known as the closed list), becoming the graph-search algorithm. Graph search separates the state-space graph into the explored region and the unexplored region.
3.1 Infrastructure for search algorithms
Search algorithms require a data structure to keep track of the search tree that is being constructed. For each node n of the tree, we have a structure that contains four components:

• n.state: the state in the state space to which the node corresponds;

• n.parent: the node in the search tree that generated this node;

• n.action: the action that was applied to the parent to generate the node;

• n.path-cost: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers.
Child nodes are generated using the function child-node. The appropriate data structure for nodes is a queue. The operations on a queue are as follows:
• empty(queue): returns true only if there are no more elements in the queue;

• pop(queue): removes the first element of the queue and returns it;

• insert(element, queue): inserts an element and returns the resulting queue.
Queues are characterized by the order in which they store the inserted nodes. Three common variants are the first-in, first-out or FIFO queue, which pops the oldest element; the last-in, first-out or LIFO queue (also known as a stack), which pops the newest element of the queue; and the priority queue, which pops the element of the queue with the highest priority according to some ordering function. To allow for efficient checking of repeated states, it is possible to use a hash table (which remembers the visited states) or the more general method of storing states in canonical form, which means that logically equivalent states map to the same data structure.
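The three queue variants map directly onto Python's standard library (collections.deque for FIFO, a plain list as LIFO stack, heapq for the priority queue); a small sketch:

```python
from collections import deque
import heapq

# FIFO queue: pops the oldest element (used by breadth-first search)
fifo = deque()
fifo.append('a'); fifo.append('b')
assert fifo.popleft() == 'a'

# LIFO queue / stack: pops the newest element (used by depth-first search)
lifo = []
lifo.append('a'); lifo.append('b')
assert lifo.pop() == 'b'

# Priority queue: pops the element with the best value under the
# ordering function (used by uniform-cost search and A*)
pq = []
heapq.heappush(pq, (5, 'b')); heapq.heappush(pq, (2, 'a'))
assert heapq.heappop(pq) == (2, 'a')
```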
3.2 Measuring problem-solving performance
An algorithm's performance can be evaluated in four ways:
• completeness: does the algorithm always find a solution if one exists?

• optimality: does the algorithm find the optimal solution (lowest path cost)?

• time complexity: how long does it take to find a solution?

• space complexity: how much memory is needed to perform the search?
In AI, complexity is expressed in terms of three quantities:
• b, the branching factor or maximum number of successors of any node;

• d, the depth of the shallowest goal node;

• m, the maximum length of any path in the state space.
Time is measured in the number of nodes generated, memory in the number of nodes stored in memory.
Specifically for search algorithms, either the search cost is used, which typically depends on time complexity, or the total cost, which combines the search cost and the path cost, to evaluate effectiveness.

1 This is a direct quote. I don't think they intend to imply toy problems have useless outcomes...
4 Uninformed Search Strategies
Uninformed search or blind search refers to searches that are not provided with any information besides what is in the problem definition. This is in contrast with informed search or heuristic search, which can tell whether a non-goal state is more promising than another.
4.1 Breadth-first search
Breadth-first search is a search algorithm in which every node on a level is expanded before the nodes on the next level are expanded, starting at the root node. It uses a FIFO queue. As for its performance:

• it is complete;

• it is not necessarily optimal, unless the path cost is a nondecreasing function of the depth of the node;

• its time complexity is poor: O(b^d);

• its space complexity is similarly poor: O(b^d).
This shows that exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.
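The graph-search version of breadth-first search can be sketched in Python (the state names and the `successors` callback are illustrative; as in the pseudocode at the end of this summary, the goal test is applied when a node is generated):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS on an implicitly defined graph. `successors` maps a state to
    a list of neighbor states. Returns a path (list of states) or None."""
    if start == goal:
        return [start]
    frontier = deque([start])          # FIFO queue
    parent = {start: None}             # doubles as the explored set
    while frontier:
        state = frontier.popleft()
        for child in successors(state):
            if child not in parent:    # not explored, not on the frontier
                parent[child] = state
                if child == goal:      # goal test on generation
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None                        # frontier exhausted: failure
```

For example, on the graph `{'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}`, searching from A to D returns the shallowest path `['A', 'B', 'D']`.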
4.2 Uniform-cost search
Uniform-cost search is basically breadth-first search, but instead of expanding the shallowest node, it expands the node with the lowest path cost. Another difference is that the goal test is applied to a node when it is selected for expansion rather than when it is first generated. The last difference is that a test is added in case a better path is found to a node currently on the frontier. As for its performance:

• it is complete, as long as every step cost is at least some small positive constant ε;

• it is generally optimal;

• its time complexity is O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution; this can be greater than O(b^d) if step costs vary, but equals O(b^(d+1)) when all step costs are equal, cf. breadth-first search;

• its space complexity is the same as its time complexity.

Uniform-cost search expands nodes in order of their optimal path cost.
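A sketch of uniform-cost search using a binary heap. The goal test happens when a node is selected for expansion; instead of literally replacing a frontier node when a cheaper path is found, this version pushes a new entry and skips stale ones on pop, a common substitute for that step:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """UCS: `successors(s)` yields (child, step_cost) pairs.
    Returns (path_cost, path) or None on failure."""
    frontier = [(0, start, None)]          # (g, state, parent)
    parents, explored = {}, set()
    best_g = {start: 0}                    # cheapest known g per state
    while frontier:
        g, state, parent = heapq.heappop(frontier)
        if state in explored:
            continue                       # stale queue entry, skip it
        explored.add(state)
        parents[state] = parent
        if state == goal:                  # goal test on expansion
            path = [state]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return g, path[::-1]
        for child, cost in successors(state):
            ng = g + cost
            if child not in explored and ng < best_g.get(child, float('inf')):
                best_g[child] = ng
                heapq.heappush(frontier, (ng, child, state))
    return None
```

On `{'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}`, the search from A to C finds the cheaper two-step route via B rather than the direct cost-5 edge.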
4.3 Depth-first search
Depth-first search always expands the deepest node in the current frontier of the search tree, i.e. it goes deeper into the tree until it reaches a leaf node. It uses a LIFO queue: the most recently generated node is chosen for expansion. Its implementation is basically the same as the graph-search algorithm. Alternatively, it can be implemented recursively. The properties differ per version. For graph search:

• it is complete in finite spaces;

• it is nonoptimal;

• its time complexity is bounded by the size of the state space (which can be infinite);

• its space complexity offers no advantage over breadth-first search.
For tree search (the recursive implementation):

• it is not complete;

• it is nonoptimal;

• its time complexity is O(b^m);

• its space complexity is O(b · m).

A variant of depth-first search is backtracking search, which uses even less memory: only one successor is generated at a time instead of all successors. This means that only O(m) memory is needed, and because it modifies rather than copies the current state description, only one state description is required.
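An iterative depth-first graph search with an explicit LIFO stack of paths (an illustrative encoding; the explored set makes this the graph-search variant described above):

```python
def depth_first_search(start, goal, successors):
    """Iterative DFS: the stack holds whole paths, so the solution path
    is available as soon as the goal is popped. Returns a path or None."""
    frontier = [[start]]              # LIFO stack of paths
    explored = set()
    while frontier:
        path = frontier.pop()         # newest path first
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for child in successors(state):
            if child not in explored:
                frontier.append(path + [child])
    return None
```

Note that on `{'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}` it returns `['A', 'C', 'D']`: the last-generated child C is expanded first, illustrating that DFS does not find the shallowest path per se.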
4.4 Depth-limited search
As depth-first search fails unavoidably in infinite state spaces, it is useful to impose a depth limit l, even though this introduces another source of incompleteness. As for its performance:

• it is incomplete if l < d, as is depth-first search;

• it is nonoptimal if l > d;

• its time complexity is O(b^l);

• its space complexity is O(b · l).
Sometimes, it is possible to determine the depth limit from problem-specific knowledge.
4.5 Iterative deepening depth-first search
Iterative deepening search or iterative deepening depth-first search is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit. As for its performance:

• it is complete when the branching factor is finite;

• it is optimal when the path cost is a nondecreasing function of the depth of the node;

• its time complexity is O(b^d);

• its space complexity is O(b · d).
In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known. Iterative lengthening works on the same principles, except that it uses increasing path-cost limits instead of increasing depth limits. It turns out not to be effective.
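Iterative deepening simply reruns depth-limited search with limits 0, 1, 2, ...; a self-contained sketch (the helper repeats the depth-limited search above):

```python
import itertools

CUTOFF = "cutoff"   # sentinel: the depth limit was hit somewhere

def depth_limited(state, goal, successors, limit):
    """Depth-limited tree search: a path, CUTOFF, or None (failure)."""
    if state == goal:
        return [state]
    if limit == 0:
        return CUTOFF
    cutoff_seen = False
    for child in successors(state):
        result = depth_limited(child, goal, successors, limit - 1)
        if result == CUTOFF:
            cutoff_seen = True
        elif result is not None:
            return [state] + result
    return CUTOFF if cutoff_seen else None

def iterative_deepening_search(start, goal, successors):
    """Tries limits 0, 1, 2, ... until the search either succeeds or
    fails definitively (no cutoff anywhere, so no deeper solution)."""
    for depth in itertools.count():
        result = depth_limited(start, goal, successors, depth)
        if result != CUTOFF:
            return result     # a path, or None for failure
```

Shallow levels are regenerated on every iteration, but because the deepest level dominates the node count, the total work stays O(b^d).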
4.6 Bidirectional search
Bidirectional search is basically a search with two root nodes, motivated by the fact that b^(d/2) + b^(d/2) is much less than b^d. A solution is found when the two frontiers intersect. As for its performance:

• it is complete when the branching factor is finite;

• it is optimal when the path cost is a nondecreasing function of the depth of the node;

• its time complexity is O(b^(d/2));

• its space complexity is O(b^(d/2)) (it can be roughly halved by doing one of the two searches with iterative deepening).
Bidirectional search requires searching backward. To accomplish this, the predecessors of a state x must be computed: all those states that have x as a successor. When all actions in the state space are reversible, the predecessors of x are just its successors.
4.7 Comparing uninformed search strategies
Criterion          BFS      Uniform-cost       DFS      Depth-limited  Iterative deepening  Bidirectional
Complete?          Yes      Yes                No       No             Yes                  Yes
Time complexity    O(b^d)   O(b^(1+⌊C*/ε⌋))    O(b^m)   O(b^l)         O(b^d)               O(b^(d/2))
Space complexity   O(b^d)   O(b^(1+⌊C*/ε⌋))    O(b·m)   O(b·l)         O(b·d)               O(b^(d/2))
Optimal?           Yes      Yes                No       No             Yes                  Yes
5 Informed (Heuristic) Search Strategies
Informed search strategies use problem-specific knowledge beyond the problem's definition. They find solutions more efficiently than uninformed search strategies. The general approach for informed search is called best-first search, i.e. graph search or tree search in which a node is selected for expansion based on an evaluation function f(n), a cost estimate. A component of this function is h(n), the heuristic function: h(n) is the estimated cost of the cheapest path from the state at node n to a goal state. If n is a goal node, h(n) = 0.
5.1 Greedy best-first search
Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, f(n) = h(n). This can be used with the straight-line distance heuristic. It is called greedy because at every step, it tries to get as close to the goal as it can.
5.2 A* search: Minimizing the total estimated solution cost

A* search evaluates nodes by combining the cost to reach the node and the cost to get from the node to the goal: f(n) = g(n) + h(n), the estimated cost of the cheapest solution through n. A* search is complete and optimal.
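A sketch of A* graph search (graph and heuristic values are illustrative). Setting h to zero recovers uniform-cost search; ordering the frontier by h alone would instead give greedy best-first search:

```python
import heapq

def a_star_search(start, goal, successors, h):
    """A* graph search with f(n) = g(n) + h(n). `successors(s)` yields
    (child, step_cost) pairs; `h(s)` estimates the remaining cost.
    Returns (path_cost, path) or None on failure."""
    frontier = [(h(start), 0, start, None)]    # (f, g, state, parent)
    parents, explored = {}, set()
    best_g = {start: 0}
    while frontier:
        f, g, state, parent = heapq.heappop(frontier)
        if state in explored:
            continue                           # stale entry, skip
        explored.add(state)
        parents[state] = parent
        if state == goal:                      # goal test on expansion
            path = [state]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return g, path[::-1]
        for child, cost in successors(state):
            ng = g + cost
            if child not in explored and ng < best_g.get(child, float('inf')):
                best_g[child] = ng
                heapq.heappush(frontier, (ng + h(child), ng, child, state))
    return None
```

With an admissible (here: exact) heuristic, A* expands nodes in nondecreasing f-order and returns an optimal solution.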
Conditions for optimality: Admissibility and consistency. A heuristic is admissible if it never overestimates the cost to reach the goal. Admissible heuristics are by nature optimistic, because they think the cost of solving the problem is less than it actually is. Straight-line distance is an admissible heuristic. A heuristic is consistent (or monotonous) if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′: h(n) ≤ c(n, a, n′) + h(n′). This is a form of the general triangle inequality, which stipulates that each side of a triangle cannot be longer than the sum of
the other two sides. Here, the triangle is formed by n, n′ and the goal Gn closest to n.
Optimality of A*: The tree-search version of A* is optimal if h(n) is admissible, while the graph-search version is optimal if h(n) is consistent. If h(n) is consistent, then the values of f(n) along any path are nondecreasing. Whenever A* selects a node for expansion, the optimal path to that node has been found. Because the costs are nondecreasing, it is possible to draw contours as in a topographic map, since any neighboring node has an equal or higher f-value. Pruning occurs when, because the heuristic is admissible, subtrees can be ignored. Eliminating possibilities from consideration without having to examine them is important in AI. A* is optimally efficient for any given consistent heuristic: no other optimal algorithm is guaranteed to expand fewer nodes than A*, because any algorithm that does not expand all nodes with f(n) < C* runs the risk of missing the optimal solution. The absolute error of a heuristic is defined as Δ ≡ h* − h, where h* is the actual cost of getting from the root to the goal; the relative error is defined as ε ≡ (h* − h)/h*.
5.3 Memory-bounded heuristic search
To reduce the memory requirements of A*, the simplest way is to adapt the idea of iterative deepening to the heuristic search context, resulting in the iterative-deepening A* (IDA*) algorithm. The main difference between IDA* and standard iterative deepening is that the cutoff used is the f-cost rather than the depth. Recursive best-first search (RBFS) is a simple recursive algorithm that mimics the operation of standard best-first search, but using only linear space. It keeps track of the best alternative path available from any ancestor of the current node. RBFS replaces the f-value of each node along the path with a backed-up value: the best f-value of its children. In this way, RBFS remembers the f-value of the best leaf in the forgotten subtree and can therefore decide whether it is worth re-expanding the subtree at some later time. It is sensible to have A* use all available memory. Two algorithms that do this are MA* (memory-bounded A*) and SMA* (simplified MA*). SMA* performs A* until memory is full, and then drops the worst leaf node (i.e. the one with the highest f-value). The forgotten value is backed up in its parent. The complete algorithm is too complicated to reproduce here, but a subtlety worth mentioning is that SMA* expands the newest best leaf and deletes the oldest worst leaf. SMA* is complete if there is any reachable solution, and it is optimal if any optimal solution is reachable. A problem for SMA* is that memory limitations can make a problem intractable from the point of view of computation time.
5.4 Learning to search better
A meta-level state space allows an agent to learn how to search better. Each state in a meta-level state space captures the internal state of a program that is searching in an object-level state space. Meta-level learning captures mistakes made by algorithms and attempts to avoid exploring unpromising subtrees. The goal of learning is to minimize the total cost of problem solving.
6 Heuristic Functions
The Manhattan distance or city block distance is the sum of the horizontal and vertical distances on a rectangular grid from one point to another; for the 8-puzzle, the heuristic is the sum of these distances over all tiles.
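A sketch for the 8-puzzle: sum the Manhattan distance of every tile from its goal square (the state encoding here, tile → (row, col), is just one possible choice):

```python
def manhattan(p, q):
    """City-block distance between two grid positions (row, col)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def h_manhattan(state, goal):
    """8-puzzle heuristic: sum of tile distances from their goal
    positions. The blank (tile 0) is not counted."""
    return sum(manhattan(state[t], goal[t]) for t in state if t != 0)
```

This heuristic is admissible, since every move slides one tile one square and can therefore reduce at most one term of the sum by one.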
6.1 The effect of heuristic accuracy on performance

One way to characterize the quality of a heuristic is the effective branching factor b*. If the total number of nodes generated by A* for a particular problem is N and the solution depth is d, then b* is the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes. Thus: N + 1 = 1 + b* + (b*)² + · · · + (b*)^d.
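b* has no closed form, but the defining equation can be solved numerically; a bisection sketch (function name and tolerance are illustrative):

```python
def effective_branching_factor(n_generated, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection.
    The left-hand side is strictly increasing in b*, so bisection works."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))   # 1 + b + ... + b^d
    lo, hi = 1.0, float(n_generated + 1)               # brackets the root
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_generated + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, a search that generates 52 nodes to find a solution at depth 5 has b* ≈ 1.92.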
A heuristic h₂ dominates a heuristic h₁ if it is always at least as good, that is, for any node n: h₂(n) ≥ h₁(n).
6.2 Generating admissible heuristics from relaxed problems
A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. It is also, by its nature, consistent.
6.3 Generating admissible heuristics from subproblems: Pattern databases
Admissible heuristics can also be derived from the solution cost of a subproblem of a given problem. The cost of a solution to a subproblem is a lower bound on the cost of the complete problem. The idea behind pattern databases is to store the exact solution costs for every possible subproblem instance. Disjoint pattern databases are pattern databases that have no overlapping entries.
6.4 Learning heuristics from experience
It is possible (with luck) for a learning agent to construct an admissible heuristic using subproblems. However, inductive learning methods work best when supplied with features of a state that are relevant to predicting the state's value, rather than with just the raw state description.
Overview of pseudocode for agent types
Table-driven agent
function table-driven-agent(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequences, initially fully specified

  append percept to the end of percepts
  action ← lookup(percepts, table)
  return action
Simple reflex agent
function simple-reflex-agent(percept) returns an action
  persistent: rules, a set of condition-action rules

  state ← interpret-input(percept)
  rule ← rule-match(state, rules)
  action ← rule.action
  return action
Model-based reflex agent
function model-based-reflex-agent(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition-action rules
              action, the most recent action, initially none

  state ← update-state(state, action, percept, model)
  rule ← rule-match(state, rules)
  action ← rule.action
  return action
Simple problem-solving agent
function simple-problem-solving-agent(percept) returns an action
  persistent: seq, an action sequence, initially empty
              state, some description of the current world state
              goal, a goal, initially null
              problem, a problem formulation

  state ← update-state(state, percept)
  if seq is empty then
    goal ← formulate-goal(state)
    problem ← formulate-problem(state, goal)
    seq ← search(problem)
    if seq = failure then return a null action
  action ← first(seq)
  seq ← rest(seq)
  return action
Overview of pseudocode for search algorithms
Tree search
function tree-search(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier
Graph search or depth-limited graph search
function graph-search(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  initialize the explored set to be empty
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    add the node to the explored set
    expand the chosen node, adding the resulting nodes to the frontier
      only if they are not already in the frontier or the explored set
The child node function
function child-node(problem, parent, action) returns a node
  return a node with
    state = problem.result(parent.state, action),
    parent = parent,
    action = action,
    path-cost = parent.path-cost + problem.step-cost(parent.state, action)
Breadth-first search
function breadth-first-search(problem) returns a solution, or failure
  node ← a node with state = problem.initial-state, path-cost = 0
  if problem.goal-test(node.state) then return solution(node)
  frontier ← a FIFO queue with node as the only element
  explored ← an empty set
  loop do
    if empty(frontier) then return failure
    node ← pop(frontier)   /* chooses the shallowest node in frontier */
    add node.state to explored
    for each action in problem.actions(node.state) do
      child ← child-node(problem, node, action)
      if child.state is not in explored or frontier then
        if problem.goal-test(child.state) then return solution(child)
        frontier ← insert(child, frontier)
Uniform-cost search
function uniform-cost-search(problem) returns a solution, or failure
  node ← a node with state = problem.initial-state, path-cost = 0
  frontier ← a priority queue ordered by path-cost, with node as the only element
  explored ← an empty set
  loop do
    if empty(frontier) then return failure
    node ← pop(frontier)   /* chooses the lowest-cost node in frontier */
    if problem.goal-test(node.state) then return solution(node)
    add node.state to explored
    for each action in problem.actions(node.state) do
      child ← child-node(problem, node, action)
      if child.state is not in explored or frontier then
        frontier ← insert(child, frontier)
      else if child.state is in frontier with higher path-cost then
        replace that frontier node with child
Depth-limited tree search
function depth-limited-search(problem, limit) returns a solution, or failure/cutoff
  return recursive-dls(make-node(problem.initial-state), problem, limit)

function recursive-dls(node, problem, limit) returns a solution, or failure/cutoff
  if problem.goal-test(node.state) then return solution(node)
  else if limit = 0 then return cutoff
  else
    cutoff-occurred ← false
    for each action in problem.actions(node.state) do
      child ← child-node(problem, node, action)
      result ← recursive-dls(child, problem, limit − 1)
      if result = cutoff then cutoff-occurred ← true
      else if result ≠ failure then return result
    if cutoff-occurred then return cutoff else return failure
Iterative deepening search
function iterative-deepening-search(problem) returns a solution, or failure
  for depth = 0 to ∞ do
    result ← depth-limited-search(problem, depth)
    if result ≠ cutoff then return result
Best-first search
function best-first-search(problem) returns a solution, or failure
  node ← a node with state = problem.initial-state, path-cost = 0
  frontier ← a priority queue ordered by f(n), with node as the only element
  explored ← an empty set
  loop do
    if empty(frontier) then return failure
    node ← pop(frontier)   /* chooses the node with the lowest f(n) in frontier */
    if problem.goal-test(node.state) then return solution(node)
    add node.state to explored
    for each action in problem.actions(node.state) do
      child ← child-node(problem, node, action)
      if child.state is not in explored or frontier then
        frontier ← insert(child, frontier)
      else if child.state is in frontier with higher path-cost then
        replace that frontier node with child
A* search
function a-star-search(problem) returns a solution, or failure
  node ← a node with state = problem.initial-state, path-cost = 0
  frontier ← a priority queue ordered by f(n) = g(n) + h(n), with node as the only element
  explored ← an empty set
  loop do
    if empty(frontier) then return failure
    node ← pop(frontier)   /* chooses the node with the lowest f(n) in frontier */
    if problem.goal-test(node.state) then return solution(node)
    add node.state to explored
    for each action in problem.actions(node.state) do
      child ← child-node(problem, node, action)
      if child.state is not in explored or frontier then
        frontier ← insert(child, frontier)
      else if child.state is in frontier with higher path-cost then
        replace that frontier node with child
Glossary
1 Introduction
1.1 What is AI?
to act humanly to act in a way that resembles the way humans act
to act rationally to act in a way that maximizes some (objective) performance measure
to think humanly to think in a way that resembles the way humans think
to think rationally to think in a way that follows the laws of thought, i.e. correct reasoning
rationality an ideal measure of performance, based on some objective ideal
Turing test a test to measure intelligence; if an AI passes it, it is considered intelligent
natural language processing to understand and produce spoken language
knowledge representation storage of knowledge and knowledge acquisition
automated reasoning use the knowledge representation to answer questions and to draw new conclusions
machine learning to adapt to new circumstances and detect and extrapolate patterns
total Turing test the Turing test, but including a video signal and requiring the manipulation of physical objects
computer vision the processing and understanding of visual information from the real world
robotics a medium for an agent to modify its environment
cognitive science an interdisciplinary field combining experimental psychological data and computer models
syllogisms patterns for argument structures, e.g. if A = B and B = C, then A = C
logic the study of the laws of thought
logicist tradition within AI, the attempt to create intelligent agents based on logic
agent something that acts
rational agent an agent that acts to achieve the best outcome
limited rationality the implementation of rationality while keeping in mind the computational demands
1.2 The Foundations of Artificial Intelligence
rationalism a movement that advocates reasoning as the method for understanding the world
dualism the theory that part of the human mind/soul/spirit is outside of (the laws of) nature
materialism the theory that the brain's operation according to the laws of physics constitutes the mind
empiricism a movement that argues that everything can be understood by experiencing it
induction the idea that general rules are acquired by exposure to repeated associations between elements
logical positivism a doctrine that holds that all knowledge can be characterized by logical theories connected, ultimately, to observation sentences that correspond to sensory inputs
confirmation theory an attempt to analyze the acquisition of knowledge from experience by defining an explicit computational procedure for extracting knowledge from elementary experiences
algorithm a series of operations based on logic and mathematics to find a specific sort of output based on a specific sort of input
incompleteness theorem in any formal theory as strong as the elementary theory of natural numbers, there are true statements that are undecidable
computable something is computable if it can be represented by an algorithm
tractability a problem must not take too long to solve, otherwise it is intractable
probability the chances of a certain outcome for an event
utility the mathematics of preferred outcomes
decision theory a combination of probability theory and utility theory that provides a formal and complete framework for decisions made under uncertainty (in large domains)
game theory decision theory in small domains: individual actions have great consequences, and rational agents should adopt policies that are or appear to be randomized
operations research research that deals with situations in which a payoff is not immediately clear
satisficing to make decisions that are "good enough," rather than calculating the optimal decision
neuroscience the study of nervous systems, in particular the brain
neurons simple nerve cells which, in their large numbers, can lead to thought, action and consciousness
singularity the point at which computers reach a superhuman level of performance
behaviorism a movement that rejected any theory involving mental processes on the grounds that introspection could not provide reliable evidence
cognitive psychology the branch of psychology that views the brain as an information-processing device
control theory the study of having an artifact (a non-living object in the real world) control itself to maintain a stable internal state
homeostatic having an internal balance
computational linguistics see natural language processing
1.3 The History of Artificial Intelligence
Hebbian learning a system of artificial learning based on a computational network of neurons, drawing from the basic physiology of the brain, logic, and the theory of computation
physical symbol system hypothesis a physical symbol system has the necessary and sufficient means for general intelligent action
LISP one of the first high-level programming languages, mainly used by early AI programmers
micro-worlds limited (computational) domains on which operations could be performed
adalines an enhanced version of Hebb's learning methods
perceptron convergence theorem the learning algorithm can adjust the connection strengths of a perceptron to match any input data, provided such a match exists
machine evolution an early name for what are now called genetic algorithms
genetic algorithms using inspiration from biological meiosis and mutation to have different versions of algorithms compete, and thus, by means of "artificial selection," find a well-performing version of an algorithm
weak methods methods that do not scale well
expert systems systems drawing from a large knowledge base, specialized to solve very specific problems
certainty factors a calculus for reasoning with uncertain knowledge, used in early expert systems
frames assembling facts about particular object and event types and arranging the types into a large taxonomic hierarchy, analogous to a biological taxonomy
back-propagation learning neural networks that learn by comparing the produced output with the desired output and adjusting the edge weights accordingly
connectionist models the kind of models back-propagation learning creates
hidden Markov models probabilistic models of sequences, grounded in rigorous mathematical theory and trained on large bodies of real data
data mining the process of extracting useful patterns from large data sets
bayesian network a formalism invented to allow efficient representation of, and rigorous reasoning with, uncertain knowledge
human-level AI an AI with human-like intelligence
artificial general intelligence a subfield of AI which looks to find a universal algorithm for learning and acting in any environment
friendly AI the concern that AI should be created so that its behavior remains beneficial
2 Intelligent Agents
2.1 Agents and Environments
agent everything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
percepts the agent's perceptual inputs at any given instant
percept sequence the complete history of everything the agent has ever perceived
agent function the function that maps any given percept sequence to an action; a mathematical/symbolic representation
agent program the means to achieve the agent function, i.e. the actual implementation
2.2 Good Behavior: The Concept of Rationality
rational agent for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
omniscience having complete knowledge of every state of one's environment and internal state
information gathering doing actions in order to modify future percepts
exploration moving in one's environment to acquire knowledge about said environment
2.3 The Nature of Environments
fully observable an agent's sensors give it access to the complete (relevant) state of the environment at each point in time
partially observable an agent's sensors might be noisy, or part of the environment might simply not be visible to the agent's sensors
unobservable the agent has no sensors
single agent the agent is the only agent to be taken into consideration
multi-agent the agent might not be the only agent in the environment, but this depends on the definition of agent
competitive the performance measure of another agent decreases the performance measure of the agent itself
cooperative the agents share a (somewhat) common performance measure
deterministic the next state of the environment is completely determined by the current state and the action executed by the agent
stochastic not deterministic
uncertain an environment is uncertain if it is not fully observable or not fully deterministic
nondeterministic an environment is nondeterministic if actions are characterized by their possible outcomes, but no probabilities are attached to them
episodic in an episodic task environment, the agent's experience is divided into atomic episodes, in each of which it receives a percept and then performs a single action that does not depend on the actions taken in previous episodes
sequential the current decision could affect all future decisions
static the environment does not change while the agent is deliberating
dynamic the environment does change while the agent is deliberating
semi-dynamic the environment does not change, but the performance score does
discrete a finite number of distinct states and a finite set of percepts and actions
continuous an infinite number of distinct states or an infinite set of percepts or actions
known the agent's (or designer's) knowledge of how the environment works covers the entire environment
unknown it does not
2.4 The Structure of Agents
architecture the computing device (with sensors and actuators) on which the agent program runs
simple reflex agents agents that select actions based on the current percept, ignoring the rest of the percept history
condition-action rules if-then rules
internal state keeping track of the previous percepts to reflect some unobserved aspects of the current state
model some representation of the world
model-based agent an agent using a model
utility assigning values to different options to enable easier choosing
utility function the function an agent uses to evaluate utility; an internalization of the performance measure
expected utility the utility an agent expects to achieve on average, weighted by outcome probabilities; used because agents are not omniscient
learning element the element in a learning agent responsible for making improvements to the performance element
performance element the element in a learning agent responsible for the actions the agent performs
critic the element in a learning agent responsible for telling the learning element which actions are good and which are bad
problem generator the element in a learning agent responsible for finding new and informative experiences
reward a form of direct (positive) feedback on the quality of the agent's behavior
penalty a form of direct (negative) feedback on the quality of the agent's behavior
expressiveness expressive power
atomic representation each state of the world is indivisible; there is no internal structure
factored representation each state of the world is split up into a fixed set of variables or attributes, each of which has a value
structured representation a representation which allows for explicit descriptions of relationships between objects
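As an illustration of the agent structures above, here are minimal sketches (Python, names illustrative) of a simple reflex agent and a model-based agent for the two-cell vacuum world:

```python
def simple_reflex_vacuum_agent(percept):
    """Selects an action from the current percept only, via condition-action rules."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def make_model_based_vacuum_agent():
    """Keeps an internal state (a model of the world) built from past percepts."""
    model = {"A": None, "B": None}      # last known status of each square
    def program(percept):
        location, status = percept
        model[location] = status        # update the model with the new percept
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in model.values()):
            return "NoOp"               # the model says everything is clean
        return "Right" if location == "A" else "Left"
    return program
```

The reflex agent can never stop, because it cannot remember that the other square is clean; the model-based agent can, which is the point of keeping internal state.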
3 Solving Problems by Searching
problem-solving agents a kind of goal-based agent that uses atomic representations
planning agents a kind of goal-based agent that uses factored or structured representations
uninformed algorithms algorithms that are given no information about the problem other than its definition
informed algorithms algorithms that do receive additional information
3.1 Problem-Solving Agents
problem formulation the process of deciding what actions and states to consider, given a goal
search the process of looking for a sequence of actions that reaches the goal
execution the phase that is entered once a solution is found
open-loop system the agent ignores the percepts during the execution of the solution
problem a problem is defined by the initial state; the possible actions that are applicable in a state s; the transition model; the goal test; and the path cost function that assigns a numeric cost to each path
transition model a description of what each action does
goal test a test to check whether the goal has been reached
path cost a function that assigns a numeric cost to each path
successor any state reachable from a given state by a single action
state space the combination of the initial state, actions and transition model
graph a directed network of states (nodes) connected by actions (links)
path a sequence of states connected by a sequence of actions
step cost the cost of taking action a in state s to reach state s′: c(s, a, s′)
solution a sequence of actions that leads from the initial state to a goal state
optimal solution a solution that has the lowest path cost among all solutions
abstraction the process of removing detail from a representation
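The five components that define a problem can be sketched as a data structure (a hypothetical Python rendering, not the book's pseudocode):

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Problem:
    """The five components that define a search problem. Names are illustrative."""
    initial: str                                  # the initial state
    actions: Callable[[str], Iterable[str]]       # actions applicable in a state s
    result: Callable[[str, str], str]             # transition model: result(s, a) = s'
    goal_test: Callable[[str], bool]              # has the goal been reached?
    step_cost: Callable[[str, str, str], float]   # c(s, a, s')

def path_cost(problem, states, actions):
    """The path cost is the sum of the step costs along the path."""
    return sum(problem.step_cost(s, a, s2)
               for s, a, s2 in zip(states, actions, states[1:]))
```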
3.2 Example Problems
toy problems problems intended to illustrate or exercise various problem-solving methods
real-world problems problems with solutions that apply in the real world
incremental formulation a formulation that starts with an empty state and adds components one at a time (e.g. placing queens one by one in the 8-queens problem)
3.3 Searching for Solutions
search tree a tree with the initial state at its root that represents the possible action sequences; branches correspond to actions and nodes to states
expansion the generation of new states from the current state
parent node a node that generates child nodes
child nodes nodes generated by a parent node
leaf nodes nodes with no children
frontier the set of all leaf nodes available for expansion
open list frontier
search strategy the way an algorithm moves through the tree, i.e. how it chooses which node to expand next
repeated state a state that occurs multiple times
loopy path a kind of repeated state that is cyclical
redundant paths paths that exist whenever there is more than one way to get from one state to another
explored set the set of nodes that have been visited
closed list explored set
queue a data structure used to store the frontier
fifo queue first-in first-out queue: the oldest element is popped
lifo queue last-in first-out queue: the newest element is popped
stack lifo queue
priority queue the element with the highest priority according to some ordering function is popped
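The three queue disciplines can be illustrated with Python's standard library (deque for a fifo queue, a plain list for a lifo queue/stack, heapq for a priority queue):

```python
from collections import deque
import heapq

# FIFO queue: the oldest element is popped first (used by breadth-first search).
fifo = deque(["a", "b", "c"])
assert fifo.popleft() == "a"

# LIFO queue / stack: the newest element is popped first (used by depth-first search).
lifo = ["a", "b", "c"]
assert lifo.pop() == "c"

# Priority queue: the element with the best priority is popped first
# (used by uniform-cost search and A*, with path cost or f-cost as priority).
pq = []
heapq.heappush(pq, (5, "b"))
heapq.heappush(pq, (1, "a"))
assert heapq.heappop(pq) == (1, "a")
```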
canonical form a standard form for representing states, such that logically equivalent states map to the same data structure; used for detecting repeated states
completeness does the algorithm always find a solution if one exists?
optimality does the algorithm find the optimal solution, i.e. the solution with the lowest path cost?
time complexity how long does it take to find a solution?
space complexity how much memory is needed to perform the search?
branching factor b, the maximum number of successors of any node
depth d, the depth of the shallowest goal node
maximum length m, the maximum length of any path in the state space
search cost a cost that typically depends on time complexity
total cost a cost that combines search cost and path cost
3.4 Uninformed Search Strategies
uninformed search searches that are not provided with any information besides the information in the problem definition
blind search uninformed search
informed search searches that know whether one non-goal state is more promising than another
heuristic search informed search
breadth-first search a search algorithm in which, using a fifo queue, every node on a level is expanded before the nodes on the next level are expanded, starting at the root node
uniform-cost search basically breadth-first search, but instead of expanding the shallowest node, it expands the node with the lowest path cost
depth-first search a search algorithm that, using a lifo queue, always expands the deepest node in the current frontier of the search tree, i.e. it goes deeper into the tree until it reaches a leaf node
depth-limited search depth-first search, but limited to a depth l
backtracking search a variant of depth-first search which uses less memory; only one successor is generated at a time, and instead of copying the current state description, it modifies the current one directly
iterative deepening search iterative deepening (depth-first) search performs depth-first search, but initially explores only the root node, then also searches the first layer, then the second, etc.
iterative lengthening iterative deepening, but using increasing path-cost limits instead of increasing depth limits. Not effective.
bidirectional search a search process that performs two searches simultaneously, one forward from the initial state and one backward from the goal; when the two frontiers meet, a solution is found
predecessors all states that have a given state x as a successor
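A minimal sketch of breadth-first graph search, matching the definitions above (fifo frontier, explored set, goal test applied when a node is generated); the code is illustrative, not the book's pseudocode:

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Graph-search BFS with a fifo frontier and an explored set (closed list).
    Returns the list of states from start to a goal, or None if no solution."""
    if goal_test(start):
        return [start]
    frontier = deque([start])
    parent = {start: None}              # doubles as the explored set
    while frontier:
        state = frontier.popleft()      # pop the oldest node (fifo)
        for child in successors(state):
            if child not in parent:     # skip repeated states
                parent[child] = state
                if goal_test(child):    # goal test on generation
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search("A", lambda s: s == "D", lambda s: graph[s]))
```

Because BFS expands nodes level by level, the path found is the shallowest one, which is optimal when all step costs are equal.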
3.5 Informed (Heuristic) Search Strategies
informed search a search strategy that uses problem-specific knowledge beyond the definition of the problem
best-first search graph-search or tree-search in which a node is selected for expansion based on some evaluation function
heuristic function a component of the evaluation function that estimates the cost of the cheapest path from the state at node n to a goal state
greedy best-first search a search process that tries to expand the node that is closest to the goal
straight-line distance a heuristic that uses the distance between two points; can be used with greedy best-first search
A* search a search process that evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal: f(n) = g(n) + h(n)
admissible heuristic a heuristic is admissible if it never overestimates the cost to reach the goal
consistent for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′: h(n) ≤ c(n, a, n′) + h(n′)
monotonous consistent
triangle inequality each side of a triangle cannot be longer than the sum of the other two sides
contours in A*, nodes can be grouped into contours: inside a contour labeled f, all nodes have an f-cost of at most f
pruning eliminating possibilities from consideration without having to examine them
optimally efficient A* is optimally efficient for a given consistent heuristic: no other optimal algorithm that extends search paths from the root is guaranteed to expand fewer nodes than A*
absolute error the difference h* − h between the actual cost h* of an optimal solution and the heuristic estimate h
relative error the error of the heuristic relative to the actual cost: (h* − h)/h*
total cost a cost that combines search cost and path cost
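A minimal sketch of A* graph search, assuming the heuristic h is admissible so the returned solution is optimal; the representation (successor lists of (child, step cost) pairs) is illustrative:

```python
import heapq

def a_star(start, goal_test, successors, h):
    """A* graph search: always expands the node minimizing f(n) = g(n) + h(n).
    successors(s) yields (child, step_cost) pairs; h estimates the cost to the goal."""
    frontier = [(h(start), 0, start, [start])]   # priority queue of (f, g, state, path)
    best_g = {start: 0}                          # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for child, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):   # found a cheaper path
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
h = {"A": 0, "B": 0, "C": 0}                     # h = 0 is trivially admissible
print(a_star("A", lambda s: s == "C", lambda s: graph[s], lambda s: h[s]))
```

With h = 0, A* degenerates to uniform-cost search; a better admissible h expands fewer nodes while still finding the cheapest path.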
3.6 Heuristic Functions
manhattan distance the sum of the horizontal and vertical distances of the tiles from their goal positions; an admissible 8-puzzle heuristic
city block distance manhattan distance
effective branching factor b*, the branching factor that a uniform tree of depth d would need in order to contain N + 1 nodes, where N is the total number of nodes generated by a search
dominates h2 dominates h1 if h2(n) ≥ h1(n) for every node n; a dominating admissible heuristic is never worse for search
relaxed problem a problem with fewer restrictions on the actions; the cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem
pattern databases databases that store the exact solution cost for every possible instance of a subproblem
disjoint pattern databases pattern databases for subproblems that do not share moves, so that their costs can be added to form an admissible heuristic
features properties of a state that are relevant to predicting its heuristic value, used when learning heuristics from experience
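A sketch of the Manhattan distance heuristic for the 8-puzzle (states as 9-tuples read row by row, 0 for the blank; an illustrative rendering). It is the exact solution cost of the relaxed problem in which tiles may move to any adjacent square, so it never overestimates the true cost.

```python
def manhattan_distance(state, goal):
    """Sum over all tiles of |row difference| + |column difference|
    between the tile's position in state and its position in goal."""
    total = 0
    for tile in range(1, 9):                 # the blank (0) does not count
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
start = (1, 0, 2, 3, 4, 5, 6, 7, 8)          # tile 1 is one step from home
print(manhattan_distance(start, goal))       # 1
```

Manhattan distance dominates the misplaced-tiles heuristic, so A* with it expands no more nodes.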