Artificial Intelligence: Chapter 1 (Anurag Dixit)

ARTIFICIAL INTELLIGENCE

1. Introduction

What is artificial intelligence?

It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

What is intelligence?

Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals, and some machines.

Natural Intelligence

Definition: Intelligence – inter ligare (Latin) – the capacity of creating connections between notions.

Wikipedia: the ability to solve problems.

WordNet: the ability to comprehend; to understand and profit from experience.

Complex use of creativity, talent, imagination.

Biology: intelligence is the ability to adapt to new conditions and to successfully cope with life situations.

Psychology: a general term encompassing various mental abilities, including the ability to remember and use what one has learned in order to solve problems, adapt to new situations, and understand and manipulate one's reality.

Nonlinear, non-predictable behavior.

Dictionary: Intelligence


1. (a) The capacity to acquire and apply knowledge.
   (b) The faculty of thought and reason.
   (c) Superior powers of mind.

2. An intelligent, incorporeal being, especially an angel.

3. Information; news.

4. (a) Secret information, especially about an actual or potential enemy.
   (b) An agency, staff, or office employed in gathering such information.
   (c) Espionage agents, organizations, and activities considered as a group.

What is intelligence, then?

• Fast thinking
• Knowledge
• Ability to pass as a human
• Ability to reason logically
• Ability to learn
• Ability to perceive and act upon one's environment
• Ability to play chess at grandmaster's level

Why Study AI?

AI helps computer scientists and engineers build more useful and user-friendly computers. It also helps psychologists, linguists, and philosophers understand the principles that constitute what we call intelligence.

AI is an interdisciplinary field of study. Many ideas and techniques now standard in CS (symbolic computation, time sharing, objects, declarative programming, ...) were pioneered by AI-related research.

Dictionary: Artificial Intelligence


1. Dictionary 1:
   (a) The ability of a computer or other machine to perform those activities that are normally thought to require intelligence.
   (b) The branch of computer science concerned with the development of machines having this ability.

2. Dictionary 2:
   The subfield of computer science concerned with the concepts and methods of symbolic inference by computer and symbolic knowledge representation for use in making inferences. AI can be seen as an attempt to model aspects of human thought on computers. It is also sometimes defined as trying to solve by computer any problem that a human can solve faster.

Definition from the R & N book: a program that
– Acts like a human (Turing test)
– Thinks like a human (human-like patterns of thinking steps)
– Acts or thinks rationally (logically, correctly)

Definition: The science of developing methods to solve problems usually associated with human intelligence.

Alternate definitions:
– building intelligent entities or agents
– making computers think or behave like humans
– studying human thinking through computational models
– generating intelligent behavior: reasoning, learning

Ray Kurzweil on AI:
– "Artificial intelligence is the ability to perform a task that is normally performed by natural intelligence, particularly human natural intelligence" (or, in some cases, tasks that require greater-than-human intelligence).

John McCarthy on AI:
"It is the science and engineering of making intelligent machines, especially intelligent computer programs."
"Intelligence is the computational part of the ability to achieve goals in the world."

Operational Definition of AI

• Systems that act like humans – Turing test
• Systems that think like humans – Cognitive Science
• Systems that think rationally – Logic-based AI
• Systems that act rationally – Rational Agents

Thinking Rationally: Laws of Thought

Several Greek schools at the time of Aristotle developed various forms of logic: notation and rules of derivation for thoughts; they may or may not have proceeded to the idea of mechanization. There is a direct line through mathematics and philosophy to modern AI.

Problems:
1. Not all intelligent behavior is mediated by logical deliberation.
2. What is the purpose of thinking? What thoughts should I have?

Thinking Humanly: Cognitive Science

The 1960s "cognitive revolution": information-processing psychology replaced the prevailing orthodoxy of behaviorism.


This requires scientific theories of the internal activities of the brain. At what level of abstraction: "knowledge" or "circuits"? How to validate? It requires:

1. Predicting and testing the behavior of human subjects (top-down), or
2. Direct identification from neurological data (bottom-up).

Both approaches (Cognitive Science and Cognitive Neuroscience) share the following with AI: the available theories do not explain (or engender) anything resembling human-level general intelligence.

Rational Agents

An agent is an entity that perceives and acts. This course is about designing rational agents. Abstractly, an agent is a function from percept histories to actions. For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance.

Caveat: computational limitations make perfect rationality unachievable.
Approach: design the best program for the given machine resources.

Acting Rationally

Rational behavior: doing the right thing, that which is expected to maximize goal achievement given the available information. It does not necessarily involve thinking (e.g., a blinking reflex), but thinking should be in the service of rational action.

Aristotle: "Every art and every inquiry, and similarly every action and pursuit, is thought to aim at some good."

Artificial Intelligence vs. Conventional Programming


Artificial Intelligence:
(a) primarily symbolic
(b) heuristic search (solution steps implicit)
(c) control structure usually separate from domain knowledge
(d) usually easy to modify, update, and enlarge
(e) some incorrect answers often tolerable
(f) satisfactory answers usually acceptable

Conventional computer programming:
(a) algorithmic (solution steps explicit)
(b) primarily numeric
(c) information and control integrated together
(d) difficult to modify
(e) correct answers required
(f) best possible solution usually sought

Foundations of AI

(a) Computer Science & Engineering: computer hardware and software
(b) Philosophy: rules of reasoning
(c) Biology: human and animal brain activity
(d) Linguistics: communication
(e) Cognitive Science: high-level human/animal thinking
(f) Psychology: complex systems, games
(g) Economics: cost/benefit ratio


(h) Mathematics/Logic: algorithms, optimization

AI Prehistory

Philosophy: logic, methods of reasoning, mind as a physical system, foundations of learning, language, rationality
Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability, probability
Psychology: adaptation, phenomena of perception and motor control, experimental techniques (psychophysics, etc.)
Economics: formal theory of rational decisions
Linguistics: knowledge representation, grammar
Neuroscience: plastic physical substrate for mental activity
Control theory: homeostatic systems, stability, simple optimal agent designs

History of AI

• The birth of AI (1943–1956)
  – Pitts and McCulloch (1943): a simplified mathematical model of neurons (resting/firing states) can realize all propositional logic primitives (can compute all Turing-computable functions)
  – Alan Turing: Turing machine and Turing test (1950)
  – Claude Shannon: information theory, early game theory, the possibility of chess-playing computers
  – Tracing back to Boole, Aristotle, Euclid (logic, syllogisms, algebra of symbols)

• Early enthusiasm (1952–1969)
  – 1956 Dartmouth conference:
    John McCarthy (Lisp)


    Marvin Minsky (first neural network machine)
    Allen Newell and Herbert Simon (GPS)
  – Emphasis on intelligent general problem solving:
    heuristics of human problem solving (means-ends analysis in GPS),
    resolution by John Robinson (the basis for automatic theorem proving),
    heuristic search (A*, AO*, game tree search)

• Emphasis on knowledge (1966–1974)
  – domain-specific knowledge is the key to overcoming existing difficulties
  – knowledge representation (KR) paradigms
  – declarative vs. procedural representation

• Knowledge-based systems (1969–1979)
  – DENDRAL: the first knowledge-intensive system (determining 3D structures of complex chemical compounds)
  – MYCIN: the first rule-based expert system (containing 450 rules for diagnosing infectious blood diseases); EMYCIN: an ES shell
  – PROSPECTOR: the first knowledge-based system that made a significant profit (a geological ES for mineral deposits)

• AI became an industry (1980–1989)
  – wide applications in various domains
  – commercially available tools

• Current trends (1990–present)
  – more realistic goals
  – more practical (application oriented)


  – resurgence of neural networks and emergence of genetic algorithms
  – distributed AI, intelligent agents, and the semantic web

Possible Approaches

Planning and Problem Solving

In the AI literature, planning refers to determining a sequence of actions you know how to perform that will achieve a particular objective. Problem solving is finding a plan for a task in an abstract domain. A problem is hard if you do not know how to work out an appropriate sequence of steps, and it is solved once such a sequence has been found; actual execution is irrelevant.

Unlike many areas of AI, planning shows a clear line of researchers building on each other's work.

Figure: AI systems classified along two dimensions, whether they think or act, and whether they do so like humans or well (rationally). Examples placed in this grid include GPS, the Turing test and Eliza, heuristic systems, and rational agents. AI tends to work mostly in the rational-agents area.


I. SOME BASIC IDEAS

To avoid having to cope with the complexities of the physical world, much early work in AI was directed toward abstract activities such as proving theorems, playing games like chess and checkers, or solving puzzles. To illustrate the discussion in this section we will use the Tower of Hanoi (Hanoi), a puzzle involving three pegs on which disks can be placed, and a set of disks of varying size.

The disks can only be moved one at a time between the pegs, and a disk must never be stacked on top of a smaller one. The problem is to transport the entire stack of disks to another peg. Reasoning about such a problem obviously requires representing states of the world and having some way of specifying the objective or goal. These representations must be rich enough to embody all the aspects of the world that will be reasoned about. In particular, since planning is about changing things, every property that might be affected must be represented as dependent on time in some way. For Hanoi this requires only an ability to represent sets of disk positions: as the disks are initially, as they are required to be eventually, and as they may be in between. A planner also needs to represent what can be done, such as the moves that can be made as determined by the nature of the game or puzzle. There is a fundamental difference between an agent executing an action, and thus affecting the world, and a planning system manipulating representations to derive information about doing so, which we call applying an operator.
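
To make the state/operator distinction concrete, here is a minimal Python sketch (not from the original notes): a Hanoi state is a tuple of three pegs, legal_moves enumerates the operators allowed by the rules, apply_operator manipulates only the representation, and plan searches for a move sequence. All names are illustrative.

    # A state is three pegs; each peg is a tuple of disk sizes, largest at the bottom.
    START = ((3, 2, 1), (), ())          # all disks on peg 0
    GOAL  = ((), (), (3, 2, 1))          # all disks moved to peg 2

    def legal_moves(state):
        """Yield (source, target) pairs allowed by the rules of the puzzle."""
        for s, src in enumerate(state):
            if not src:
                continue
            disk = src[-1]                                       # only the top disk can move
            for t, dst in enumerate(state):
                if t != s and (not dst or dst[-1] > disk):       # never onto a smaller disk
                    yield (s, t)

    def apply_operator(state, move):
        """Return the new state produced by a move; the world itself is untouched."""
        s, t = move
        pegs = [list(p) for p in state]
        pegs[t].append(pegs[s].pop())
        return tuple(tuple(p) for p in pegs)

    def plan(state, goal, visited=None):
        """Depth-first search for a sequence of moves from state to goal."""
        visited = visited or {state}
        if state == goal:
            return []
        for move in legal_moves(state):
            nxt = apply_operator(state, move)
            if nxt not in visited:
                sub = plan(nxt, goal, visited | {nxt})
                if sub is not None:
                    return [move] + sub
        return None

    print(plan(START, GOAL))   # a (not necessarily optimal) list of (from_peg, to_peg) moves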


Introduction to Predicate Calculus

The Propositional Calculus

Symbols and Sentences

The propositional calculus and, in the next subsection, the predicate calculus are first of all languages. Using their words, phrases, and sentences we can represent and reason about properties and relationships in the world. The first step in describing a language is to introduce the pieces that make it up: its set of symbols.

PROPOSITIONAL CALCULUS SYMBOLS

The symbols of propositional calculus are the propositional symbols:

P, Q, R, S, ...

truth symbols:

true, false

and connectives:

∧, ∨, ¬, →, ≡

Propositional symbols denote propositions, or statements about the world, that may be either true or false, such as "the car is red" or "water is wet". Propositions are denoted by uppercase letters near the end of the English alphabet. Sentences in the propositional calculus are formed from these atomic symbols according to the following rules.

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence.
For example: true, P, Q, and R are sentences.

The negation of a sentence is a sentence.
For example: ¬P and ¬false are sentences.

The conjunction, or "and", of two sentences is a sentence.
For example: P ∧ ¬P is a sentence.

The disjunction, or "or", of two sentences is a sentence.
For example: P ∨ ¬P is a sentence.

The implication of one sentence from another is a sentence.
For example: P → Q is a sentence.

The equivalence of two sentences is a sentence.
For example: P ∨ Q ≡ R is a sentence.

Legal sentences are also called well-formed formulas, or WFFs.

In expressions of the form P ∧ Q, P and Q are called the conjuncts. In P ∨ Q, P and Q are referred to as disjuncts. In an implication P → Q, P is the premise or antecedent and Q the conclusion or consequent.

In propositional calculus sentences, the symbols ( ) and [ ] are used to group symbols into subexpressions and so to control their order of evaluation and meaning.
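
As an illustration of these formation rules (a sketch added here, not part of the original text), propositional sentences can be represented in Python as nested tuples, with a small checker that accepts exactly the well-formed formulas; the connective names 'not', 'and', 'or', 'implies', and 'equiv' are arbitrary labels.

    # A sentence is a propositional symbol, a truth symbol, or a tuple whose first
    # element names a connective, mirroring the formation rules above.
    SYMBOLS = {'P', 'Q', 'R', 'S', 'true', 'false'}

    def is_wff(s):
        if isinstance(s, str):
            return s in SYMBOLS                        # atomic sentence
        if isinstance(s, tuple):
            if s[0] == 'not' and len(s) == 2:
                return is_wff(s[1])                    # negation
            if s[0] in ('and', 'or', 'implies', 'equiv') and len(s) == 3:
                return is_wff(s[1]) and is_wff(s[2])   # binary connectives
        return False

    print(is_wff(('or', 'P', ('not', 'P'))))   # True:  P ∨ ¬P
    print(is_wff(('implies', 'P', 'Q')))       # True:  P → Q
    print(is_wff(('and', 'P')))                # False: malformed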

The Semantics of the Propositional Calculus

In this section we formally define the semantics, or meaning, of these sentences. Because AI programs must reason with their representational structures, it is important to demonstrate that the truth of their conclusions depends only on the truth of their initial knowledge, i.e., that logical errors are not introduced by the inference procedures. A precise treatment of semantics is essential to this goal.

A proposition symbol corresponds to a statement about the world. For example, P may denote the statement "it is raining" or Q the statement "I live in a brown house". A proposition may be either true or false, given some state of the world. The truth value assignment to propositional sentences is called an interpretation, an assertion about their truth in some possible world.

Formally, an interpretation is a mapping from the propositional symbols into the set {T, F}.

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value, either T or F, to each propositional symbol. The symbol true is always assigned T, and the symbol false is assigned F.
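
A hedged Python sketch of this definition: an interpretation is just a dictionary from propositional symbols to True/False, and the truth of any sentence follows recursively from it (this reuses the tuple encoding sketched above and is illustrative only).

    def evaluate(sentence, interpretation):
        """Truth value of a sentence under an interpretation mapping symbols to True/False."""
        if sentence == 'true':
            return True                       # 'true' is always assigned T
        if sentence == 'false':
            return False                      # 'false' is always assigned F
        if isinstance(sentence, str):
            return interpretation[sentence]   # look up the propositional symbol
        if sentence[0] == 'not':
            return not evaluate(sentence[1], interpretation)
        a, b = (evaluate(x, interpretation) for x in sentence[1:3])
        return {'and': a and b,
                'or': a or b,
                'implies': (not a) or b,
                'equiv': a == b}[sentence[0]]

    I = {'P': True, 'Q': False}
    print(evaluate(('implies', 'P', 'Q'), I))      # False
    print(evaluate(('or', 'P', ('not', 'P')), I))  # True (a tautology)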

Predicate Calculus

In propositional calculus each atomic symbol (P, Q, etc.) denotes a proposition of some complexity. There is no way to access the components of an individual assertion. Predicate calculus provides this ability. For example, instead of letting a single propositional symbol P denote the entire sentence "it rained on Tuesday", we can create a predicate weather that describes a relationship between a date and the weather: weather(tuesday, rain). Through inference rules we can manipulate predicate calculus expressions, accessing their individual components and inferring new sentences.

Predicate calculus also allows expressions to contain variables. Variables let us create general assertions about classes of entities. For example, we could state that for all values of X, where X is a day of the week, the statement weather(X, rain) is true, i.e., it rains every day. As with propositional calculus, we will first define the syntax of the language and then discuss its semantics.
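
A small illustrative sketch (not from the notes) of why this matters computationally: representing weather(tuesday, rain) as structured data exposes its components, which a single propositional symbol cannot.

    # A ground atomic sentence as a (predicate, arguments) pair.
    fact = ('weather', ('tuesday', 'rain'))

    predicate, args = fact
    print(predicate)     # 'weather'
    print(args[0])       # 'tuesday'  -- the date component is directly accessible
    print(args[1])       # 'rain'     -- so is the weather component

    # A small knowledge base of such facts can then be queried by component.
    kb = [('weather', ('tuesday', 'rain')), ('weather', ('wednesday', 'rain'))]
    rainy_days = [a[0] for p, a in kb if p == 'weather' and a[1] == 'rain']
    print(rainy_days)    # ['tuesday', 'wednesday']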

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus, we define an alphabet and grammar for creating the symbols of the language. This corresponds to the lexical aspect of a programming language definition. Predicate calculus symbols, like the tokens in a programming language, are irreducible syntactic elements: they cannot be broken into their component parts by the operations of the language.

In this text we represent predicate calculus symbols as strings of letters and digits beginning with a letter. Blanks and nonalphanumeric characters cannot appear within the string, although the underscore (_) may be used to improve readability.

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of:


1. The set of letters, both upper- and lowercase, of the English alphabet.
2. The set of digits, 0, 1, ..., 9.
3. The underscore, _.

Symbols in the predicate calculus begin with a letter and are followed by any sequence of these legal characters.

Legitimate characters in the alphabet of predicate calculus symbols include: a, R, 6, 9, p, _, z.

Examples of characters not in the alphabet include punctuation marks such as &.

Parentheses ( ), commas, and periods are used solely to construct well-formed expressions and do not denote objects or relations in the world. These are called improper symbols.

Predicate calculus symbols may represent either variables, constants, functions, or predicates. Constants name specific objects or properties in the world. Constant symbols must begin with a lowercase letter. Predicate calculus also allows functions on objects in the world of discourse. Function symbols (like constants) begin with a lowercase letter. Functions denote a mapping of one or more elements in a set (called the domain of the function) into a unique element of another set (the range of the function). Elements of the domain and range are objects in the world of discourse. In addition to common arithmetic functions such as addition and multiplication, functions may define mappings between nonnumeric domains.

SYMBOLS and TERMS

Predicate calculus symbols include:

1. Truth symbols true and false (these are reserved symbols).
2. Constant symbols: symbol expressions having the first character lowercase.
3. Variable symbols: symbol expressions beginning with an uppercase character.
4. Function symbols: symbol expressions having the first character lowercase. Functions have an attached arity indicating the number of elements of the domain mapped onto each element of the range.

A function expression consists of a function constant of arity n, followed by n terms, t1, t2, ..., tn, enclosed in parentheses and separated by commas.

A predicate calculus term is either a constant, a variable, or a function expression. Thus a predicate calculus term may be used to denote objects and properties in a problem domain.
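
Purely for illustration, terms might be encoded as follows; the symbols price_of and gold below are invented for the example and are not from the text.

    # Terms: a constant, a variable (uppercase first letter), or a function expression.
    constant  = 'tuesday'
    variable  = 'X'
    func_expr = ('price_of', ('gold',))   # a function symbol of arity 1 applied to one term

    def is_variable(term):
        return isinstance(term, str) and term[0].isupper()

    def arity(expr):
        symbol, args = expr
        return len(args)                  # domain elements mapped onto one range element

    print(is_variable(variable), is_variable(constant))   # True False
    print(arity(func_expr))                               # 1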

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter. Predicates have an associated positive integer referred to as the arity, or "argument number", for the predicate. Predicates with the same name but different arities are considered distinct.

An atomic sentence is a predicate constant of arity n, followed by n terms, t1, t2, ..., tn, enclosed in parentheses and separated by commas. The truth values true and false are also atomic sentences. Atomic sentences are also called atomic expressions, atoms, or propositions.

We may combine atomic sentences using logical operators to form sentences in the predicate calculus. These are the same logical connectives used in propositional calculus:

∧, ∨, ¬, →, ≡

When a variable appears as an argument in a sentence, it refers to unspecified objects in the domain. First-order (Section 2.2.2) predicate calculus includes two symbols, the variable quantifiers ∀ and ∃, that constrain the meaning of a sentence containing a variable. A quantifier is followed by a variable and a sentence, such as:

∃ Y friends(Y, peter)
∀ X likes(X, ice_cream)

The universal quantifier ∀ indicates that the sentence is true for all values of the variable. In the example, ∀ X likes(X, ice_cream) is true for all values in the domain of the definition of X. The existential quantifier ∃ indicates that the sentence is true for at least one value in the domain. ∃ Y friends(Y, peter) is true if there is at least one object, indicated by Y, that is a friend of peter.
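
Over a finite domain the two quantifiers behave like Python's any and all; the following sketch assumes friends and likes are given as finite relations, and the domain and facts are invented for illustration.

    domain  = {'peter', 'susan', 'kate'}
    friends = {('susan', 'peter')}                       # friends(susan, peter)
    likes   = {(x, 'ice_cream') for x in domain}         # everyone likes ice cream

    # ∃ Y friends(Y, peter): true if at least one object Y is a friend of peter.
    exists_friend = any((y, 'peter') in friends for y in domain)

    # ∀ X likes(X, ice_cream): true if the sentence holds for every value of X in the domain.
    all_like = all((x, 'ice_cream') in likes for x in domain)

    print(exists_friend, all_like)   # True True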

Sentences in the predicate calculus are defined inductively.

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence.
1. If s is a sentence, then so is its negation, ¬s.
2. If s1 and s2 are sentences, then so is their conjunction, s1 ∧ s2.
3. If s1 and s2 are sentences, then so is their disjunction, s1 ∨ s2.
4. If s1 and s2 are sentences, then so is their implication, s1 → s2.

LISP

Overview

• Lisp stands for "LISt Processing"
  – Invented by John McCarthy (1958)
  – Simple data structure (atoms and lists)
  – Heavy use of recursion
  – Prefix notation of expressions
  – Interpretive language

• Why Lisp?
  – It is the most widely used AI programming language.
  – It is good for writing production software.
  – It is especially good for prototyping.
  – It has lots of features other languages don't.
  – You can write new programs and extend old programs really, really quickly in Lisp.

Table of Contents

Symbols, Numbers, Conses, Lists, Functions, Printing, Forms and the Top-Level Loop, Special Forms, Binding, Dynamic Scoping, Arrays, Strings, Structures, Setf, Booleans and Conditionals, Iteration, Non-local Exits, Funcall, Apply, and Mapcar, Lambda, Sorting, Equality, Some Useful List Functions, Getting Started with Emacs, Further Information

Symbols


A symbol is just a string of characters. There are restrictions on what you can include in a symbol and what the first character can be, but as long as you stick to letters, digits, and hyphens you'll be safe. (Except that if you use only digits and possibly an initial hyphen, LISP will think you typed an integer rather than a symbol.) Some examples of symbols: a, b, c1, foo, bar, baaz-quux-garply.

Some things you can do with symbols follow. (Things after a > prompt are what you type to the LISP interpreter, while other things are what the LISP interpreter prints back to you. The ; is LISP's comment character: everything from a ; to the end of a line is ignored.)

> (setq a 5)        ; store a number as the value of a symbol
5
> a                 ; take the value of a symbol
5
> (let ((a 6)) a)   ; bind the value of a symbol temporarily to 6
6
> a                 ; the value returns to 5 once the let is finished
5
> (+ a 6)           ; use the value of a symbol as an argument to a function
11
> b                 ; try to take the value of a symbol which has no value
Error: Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols, t and nil. The value of t is defined always to be t, and the value of nil is defined always to be nil. LISP uses t and nil to represent true and false. An example of this use is in the if statement, described more fully later:

> (if t 5 6)
5
> (if nil 5 6)
6
> (if 4 5 6)
5

Keywords

The last example is odd but correct: nil means false and anything else means true. (Unless we have a reason to do otherwise, we use t to mean true, just for the sake of clarity.) Symbols like t and nil are called self-evaluating symbols, because they evaluate to themselves. There is a whole class of self-evaluating symbols called keywords; any symbol whose name starts with a colon is a keyword. (See below for some uses for keywords.) Some examples:

> :this-is-a-keyword
:THIS-IS-A-KEYWORD
> :so-is-this
:SO-IS-THIS
> :me-too
:ME-TOO

Numbers

An integer is a string of digits, optionally preceded by + or -. A real number looks like an integer, except that it has a decimal point and optionally can be written in scientific notation. A rational looks like two integers with a / between them. LISP supports complex numbers, which are written #c(r i) (where r is the real part and i is the imaginary part). A number is any of the above. Here are some numbers: 5, 17, -34, +6, 3.1415, 1.722e-15, #c(1.722e-15 0.75).

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos, tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument. +, -, *, and / return a number according to type contagion: an integer plus a rational is a rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are some examples:

> (+ 3 3/4)             ; type contagion
15/4
> (exp 1)               ; e
2.7182817
> (exp 3)               ; e^3
20.085537
> (expt 3 4.2)          ; exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))  ; the functions + - * / all accept multiple arguments
738

There is no limit to the absolute value of an integer, except the memory size of your computer. Be warned that computations with bignums (as large integers are called) can be slow. (So can computations with rationals, especially compared to the corresponding computations with small integers or floats.)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either perception, reasoning, or actuation.

• Perception
  – computer vision
  – natural language processing

• Reasoning (i.e., problem solving): mapping from percepts to actuators
  – automated reasoning
  – knowledge representation
  – search and optimization
  – decision/game theory
  – machine learning

• Actuation
  – robotics
  – softbotics

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities. A related (but not a necessary) goal is to understand intelligent entities, and perhaps even to understand and engineer human intelligence (strong AI).

Modern AI can be characterized as the engineering of rational agents. An agent is an entity that (i) perceives, (ii) reasons, and (iii) acts. In computational terms, that which is perceived is an input; to reason is to compute; to act is to output the result of computation. Typically an agent is equipped with objectives. A rational agent is one that acts optimally with respect to its objectives. Agents are often distinguished from typical computational processes by their autonomy: they operate without direct human intervention. In addition, agents are reactive (they perceive their environments and attempt to respond in a timely manner to changing conditions) and proactive (their behavior is goal directed rather than simply response-driven).

Autonomous agents may be rule-based, goal-based, or utility-based. Rule-based agents operate according to hard-coded sets of rules, like ELIZA. A goal-based agent acts so as to achieve its goals by planning a path from its current state to a goal state, like GPS or theorem provers. Utility-based agents distinguish between goals based on utilities that are associated with goal states.

Agent      Sensors       Actuators
Human      Senses        Arms, legs
Robotic    Cameras       Motors, wheels
Software   Bit strings   Bit strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows:

• Deterministic vs. nondeterministic: is the next state predictable (e.g., chess), or is there uncertainty about state transitions (e.g., backgammon)?
• Discrete vs. continuous: can the environment be described in discrete terms (e.g., chess), or is the environment continuous (e.g., driving)?
• Static vs. dynamic: is the environment static (e.g., chess), or can it change while the agent is reasoning about its plan of action (e.g., driving)?
• Sequential vs. one-shot: does the agent need to reason about the future impact of its immediate actions (e.g., chess), or can it treat each action independently (e.g., Rochambeau)?
• Single-agent vs. multiagent: can we assume the agent is operating alone in its environment, or need it explicitly reason about the actions of other agents (e.g., chess, backgammon, Rochambeau, driving)?

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors.


An agent can be seen as a mapping between percept sequences and actions:

Agent: Percept → Action

The less an agent relies on its built-in knowledge, as opposed to the current percept sequence, the more autonomous it is. A rational agent is an agent whose actions try to maximize some performance measure.

Example: Vacuum Cleaner

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
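
The same reflex vacuum agent, transcribed into plain Python as a sketch (the percept and action names mirror the pseudocode above):

    def reflex_vacuum_agent(percept):
        location, status = percept           # e.g. ('A', 'Dirty')
        if status == 'Dirty':
            return 'Suck'
        elif location == 'A':
            return 'Right'
        elif location == 'B':
            return 'Left'

    print(reflex_vacuum_agent(('A', 'Dirty')))   # Suck
    print(reflex_vacuum_agent(('B', 'Clean')))   # Left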

Intelligent Agents

• Agent: an entity in a program or environment capable of generating action.
• An agent uses perception of the environment to make decisions about actions to take.
• The perception capability is usually called a sensor.
• The actions can depend on the most recent perception or on the entire history (percept sequence).

Agent Function

• The agent function is a mathematical function that maps a sequence of perceptions into action.
• The function is implemented as the agent program.
• The part of the agent taking an action is called an actuator.
• environment → sensors → agent function → actuators → environment (see the sketch below)
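
A minimal sketch of this sense-act cycle in Python; ToyEnvironment and the one-line agent function below are stand-ins invented for the example, not part of the notes.

    class ToyEnvironment:
        """A stand-in environment: one cell that stays dirty until it is sucked clean."""
        def __init__(self):
            self.dirty = True
        def sense(self):
            return 'Dirty' if self.dirty else 'Clean'
        def act(self, action):
            if action == 'Suck':
                self.dirty = False

    def run(environment, agent_function, steps=3):
        percepts = []                            # the percept sequence (history)
        for _ in range(steps):
            percept = environment.sense()        # sensors
            percepts.append(percept)
            action = agent_function(percepts)    # agent function: percept sequence -> action
            environment.act(action)              # actuators change the environment
            print(percept, '->', action)

    run(ToyEnvironment(), lambda percepts: 'Suck' if percepts[-1] == 'Dirty' else 'NoOp')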


Agent Classification

(a) Table-driven agents
  – use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.

(b) Simple reflex agents
  – are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.

(c) Agents with memory
  – have internal state, which is used to keep track of past states of the world.

(d) Agents with goals
  – are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.

(e) Utility-based agents
  – base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

• A rational agent is one that can take the right decision in every situation.
• Performance measure: a set of criteria/test bed for the success of the agent's behavior.
• The performance measures should be based on the desired effect of the agent on the environment.

Rationality

• The agent's rational behavior depends on:
  – the performance measure that defines success
  – the agent's knowledge of the environment
  – the actions that it is capable of performing
  – the current sequence of perceptions
• Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

• An agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice.
• An environment can sometimes be completely known in advance.
• Exploration: sometimes an agent must perform an action to gather information (to increase perception).
• Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment

• Task environment: the problem that the agent is a solution to.
• Properties:
  – Observable: fully or partially. A fully observable environment needs less representation.
  – Deterministic or stochastic. Strategic: deterministic except for the actions of other agents.
  – Episodic or sequential. Sequential: future actions depend on the previous ones. Episodic: individual, unrelated tasks for the agent to solve.
  – Static or dynamic.
  – Discrete or continuous.
  – Single agent or multi-agent. Multiple agents can be competitive or cooperative.

More Definitions of Agents

• "An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher, and Spohrer 94)
• "Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." (IBM)
• "Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." (Hayes-Roth 94)

Agent vs. Program

• Size: an agent is usually smaller than a program.
• Purpose: an agent has a specific purpose, while programs are multi-functional.
• Persistence: an agent's life span is not entirely dependent on a user launching and quitting it.
• Autonomy: an agent doesn't need the user's input to function.

Simple Agents

• Table-driven agents: the agent function consists of a lookup table of actions to be taken for every possible state of the environment.
  – If the environment has n variables, each with t possible states, then the table size is t^n.
  – This only works for a small number of possible states for the environment.
• Simple reflex agents: decide on the action to take based only on the current perception, not on the history of perceptions.
  – Based on the condition-action rule: (if (condition) action).
  – This works if the environment is fully observable.

Table-driven and reflex agents, Lisp version:

(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)
    (setq action (lookup percepts table))
    action))

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

Table-driven and reflex agents, Python version:

percepts = []
table = {}

def table_agent(percept):
    percepts.append(percept)
    action = lookup(percepts, table)
    return action

def reflex_agent(percept):
    state = interpret(percept)
    rule = match(state)
    action = decision(rule)
    return action

Model-Based Reflex Agents

• If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe.
• This usually requires an internal representation of the world (an internal state). Since this representation is a model of the world, we call this a model-based agent.

Model-based reflex agent, Lisp version:

(setq state t)      ; the world model
(setq action nil)   ; latest action

(defun model_reflex_agent (percept)
  (let ((rule t))
    (setq state (update_state state action percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

Model-based reflex agent, Python version:

state = True    # the world model
action = False  # latest action

def model_reflex_agent(percept):
    global state, action
    state = update_state(state, action, percept)
    rule = match(state)
    action = decision(rule)
    return action

Goal-Driven Agents

• The agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal).
• In some cases the goal is easy to achieve. In others, it involves planning, sifting through a search space of possible solutions, and developing a strategy.
• Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.
• Choose actions so as to achieve a (given or computed) goal. A goal is a description of a desirable situation.
• Keeping track of the current state is often not enough; we need to add goals to decide which situations are good.
• Deliberative instead of reactive.
• The agent may have to consider long sequences of possible actions before deciding whether the goal is achieved; this involves consideration of the future ("what will happen if I do ...?"); see the search sketch below.
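
As the search sketch referenced above, here is a minimal goal-based planner written as breadth-first search over hypothetical future states; the toy state space (walking along a line) is invented for illustration, and this is one simple way, not the only way, to realize goal-driven behavior.

    from collections import deque

    def goal_based_plan(start, is_goal, successors):
        """Breadth-first search: consider action sequences before acting
        ('what will happen if I do ...')."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if is_goal(state):
                return actions
            for action, nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None

    # Toy example: walk on a line from position 0 to position 3.
    moves = lambda s: [('right', s + 1), ('left', s - 1)]
    print(goal_based_plan(0, lambda s: s == 3, moves))   # ['right', 'right', 'right']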


Learning Agents

Agents capable of acquiring new competence through observations and actions. Components (see the skeleton below):
  – learning element (modifies the performance element)
  – performance element (selects actions)
  – feedback element (critic)
  – exploration element (problem generator)
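
A schematic Python skeleton (illustrative only, with made-up callable parameters) showing how the four components listed above could be wired together:

    class LearningAgent:
        """Skeleton only: the four components of a learning agent wired together."""
        def __init__(self, performance_element, learning_element, critic, problem_generator):
            self.performance_element = performance_element  # selects actions
            self.learning_element = learning_element        # modifies the performance element
            self.critic = critic                            # feedback on how well the agent does
            self.problem_generator = problem_generator      # suggests exploratory actions

        def step(self, percept):
            feedback = self.critic(percept)                             # evaluate the latest percept
            self.learning_element(self.performance_element, feedback)   # learn from the feedback
            exploratory = self.problem_generator(percept)               # maybe explore
            return exploratory or self.performance_element(percept)     # otherwise act normally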

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents


• Rule-based reasoning to map from percepts to the optimal action; each rule handles a collection of perceived states.
• Problems:
  – Still usually too big to generate and to store.
  – Still no knowledge of non-perceptual parts of the state.
  – Still not adaptive to changes in the environment; the collection of rules must be updated if changes occur.
  – Still can't make actions conditional on the previous state.

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?
• A goal specifies a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes a "degree of happiness".
• Utility function U: State → Reals, indicating a measure of success or happiness when at a given state.
• Allows decisions comparing choices between conflicting goals, and choices between likelihood of success and importance of a goal (if achievement is uncertain). A minimal sketch follows below.
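
As the minimal sketch mentioned above: given a utility function U over states and a result model, a utility-based agent picks the action whose successor state has the highest utility; the numeric states and utilities here are made up.

    def utility_based_choice(state, actions, result, U):
        """Choose the action leading to the highest-utility successor state."""
        return max(actions, key=lambda a: U(result(state, a)))

    # Toy example: states are numbers, utility prefers states close to 10.
    U = lambda s: -abs(10 - s)
    result = lambda s, a: s + a
    print(utility_based_choice(7, [-1, +1, +2], result, U))   # +2, since 7 + 2 = 9 is closest to 10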


Agents with memory

• Encode the "internal state" of the world to remember the past, as contained in earlier percepts.
• This is needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept.
• Requires the ability to represent change in the world; one possibility is to represent just the latest state, but then the agent can't reason about hypothetical courses of action.


Other Types of Agents

• Temporally continuous agent: a continuously running process.
• Communicative agent: exchanges information with other agents to complete its task.
• Mobile agent: capable of moving from one machine to another (or from one environment to another).
• Flexible agent: one whose actions are not scripted.
• Character: an agent with conversation skills, personality, and even emotional state.


Page 2: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

1 (a) The capacity to acquire and apply knowledge

(b) The faculty of thought and reason

(c) Superior powers of mind

2 An intelligent incorporeal being especially an angel

3 Information news

4 (a) Secret information especially about an actual or potential enemy

(b) An agency staff or office employed in gathering such information

(c) Espionage agents organizations and activities considered as a group

What is intelligence then

Fast thinking

Knowledge

Ability to pass as a human

Ability to reason logically

Ability to learn

Ability to perceive and act upon onersquos environment

Ability to play chess at grand-masterrsquos level

Why Study AI

AI helps computer scientists and engineers build more useful and user-friendly

computers

Psychologists linguists and philosophers understand the principles that constitute what

we call intelligence

AI is an interdisciplinary field of study Many ideas and techniques now standard in CS

(symbolic computation time sharing objects declarative programming ) were

pioneered by AI-related research

Dictionary Artificial Intelligence

2

Artificial Intelligence Chapter1Anurag Dixit

1 Dictionary 1

(a) The ability of a computer or other machine to perform those activities that are

normally thought to require intelligence

(b) The branch of computer science concerned with the development of machines having

this ability

2 Dictionary 2

The subfield of computer science concerned with the concepts and methods of symbolic

inference by computer and symbolic knowledge representation for use in making

inferences AI can be seen as an attempt to model aspects of human thought

on computers It is also sometimes defined as trying to solve by computer any problem

that a human can solve faster

Definition from R amp N book a program that

ndash Acts like human (Turing test)

ndash Thinks like human (human-like patterns of thinking steps)

ndash Acts or thinks rationally (logically correctly)

Definition The science of developing methods to solve problems usually associated

with human intelligence

ndash Alternate definitions

ndash building intelligent entities or agents

ndash making computers think or behave like humans

ndash studying the human thinking through computational models

ndash generating intelligent behavior reasoning learning

Ray Kurzweil on AI 1048708

ndash ldquoArtificial intelligence is the ability to perform a task that is normally

performed by natural intelligence particularly human natural

3

Artificial Intelligence Chapter1Anurag Dixit

intelligencerdquo 1048708 (or in some cases tasks that require greater-than-human

intelligence)

John McCarthy on AI 1048708

ldquoIt is the science and engineering of making intelligent machines especially intelligent

computer programsrdquo 1048708

ldquoIntelligence is the computational part of the ability to achieve goals in the worldrdquo

Operational Definition of AI

Systems that act like humans

Turing test

Systems that think like humans

Cognitive Science

Systems that think rationally

Logic-based AI

Systems that act rationally

Rational Agents

Thinking Rationally Laws of Thought

Several Greek schools at the time of Aristotle developed various forms of logic

Notation and rules of derivation for thoughts they may or may not have proceeded to the

idea of mechanization Direct line through mathematics and philosophy to modern AI

Problems

1 Not all intelligent behavior is mediated by logical deliberation

2 What is the purpose of thinking What thoughts should I have

Thinking Humanly Cognitive Science

1960s ldquocognitive revolutionrdquo information-processing psychology replaced prevailing

orthodoxy of behaviorism

4

Artificial Intelligence Chapter1Anurag Dixit

Require scientific theories of internal activities of the brain

What level of abstraction ldquoKnowledgerdquo or ldquocircuitsrdquo

How to validate It requires

1 Predicting and testing behavior of human subjects (top-down)

2 Direct identification from neurological data (bottom-up)

Both approaches Cognitive Science and Cognitive Neuroscience share with AI on the

available theories do not explain (or engender) anything resembling human-level general

intelligence

Rational Agents

An agent is an entity that perceives and acts

This course is about designing rational agents

Abstractly an agent is a function from percept histories to actions

For any given class of environments and tasks we seek the agent (or class of agents) with

the best performance

Caveat computational limitations make perfect rationality unachievable

Approach design best program for given machine resources

Acting Rationally

Rational behavior doing the right thing that which is expected to maximize goal

achievement given the available information

Doesnrsquot necessarily involve thinkingmdasheg blinking reflexmdashbut thinking should be in

the service of rational action

Aristotle Every art and every inquiry and similarly every action and pursuit is thought

to aim at some good

Artificial Intelligence Vs Conventional Programming

5

Artificial Intelligence Chapter1Anurag Dixit

Artificial Intelligence Conventional Programming

Artificial Intelligencea primarily symbolicb heuristic search(solution steps implicit)c control structure usuallyseparate from domain knowledged usually easy to modifyupdate and enlargee some incorrect answersoften tolerablef satisfactory answers usually

acceptable

Conventional computer programminga algorithmic(solutions steps explicit)b primarily numericc information and controlintegrated togetherd difficult to modifye correct answers requiredf best possible solution

usually sought

Foundations of AI

(a) Computer Science amp Engineering

Computer hardware and Software

(b) Philosophy

Rule of Reasoning

(c) Biology

Human animals brain activity

(d) Linguistics

communication

(e) Cognitive Science

High level humananimal thinking

(f) Psychology

Complex systems game

(g) Economics

Cost benefits ratio

6

Artificial Intelligence Chapter1Anurag Dixit

(h) MathematicsLogic algorithm optimization

AI Prehistory

Philosophy logic methods of reasoning mind as physical system foundations of learning

language rationality

Mathematics formal representation and proof algorithms computation (un)decidability

(in)tractability probability

Psychology adaptation phenomena of perception and motor control experimental

techniques (psychophysics etc)

Economics formal theory of rational decisions

Linguistics knowledge representation grammar

Neuroscience plastic physical substrate for mental activity

Control theory homeostatic systems stability simple optimal agent designs

History of AI

bull The birth of AI (1943 ndash 1956)

ndash Pitts and McCulloch (1943) simplified mathematical model of neurons

(restingfiring states) can realize all propositional logic primitives (can

compute all Turing computable functions)

ndash Allen Turing Turing machine and Turing test (1950)

ndash Claude Shannon information theory early game theory possibility of

chess playing computers

ndash Tracing back to Boole Aristotle Euclid (logics syllogisms algebra of

symbols)

bull Early enthusiasm (1952 ndash 1969)

ndash 1956 Dartmouth conference

John McCarthy (Lisp)

7

Artificial Intelligence Chapter1Anurag Dixit

Marvin Minsky (first neural network machine)

Alan Newell and Herbert Simon (GPS)

ndash Emphasize on intelligent general problem solving

Heuristics of human problem solving (means-ends analysis in GPS )

Resolution by John Robinson (basis for automatic theorem proving)

heuristic search (A AO game tree search)

bull Emphasis on knowledge (1966 ndash 1974)

ndash domain specific knowledge is the key to overcome existing difficulties

ndash knowledge representation (KR) paradigms

ndash declarative vs procedural representation

bull Knowledge-based systems (1969 ndash 1979)

ndash DENDRAL the first knowledge intensive system (determining 3D

structures of complex chemical compounds)

ndash MYCIN first rule-based expert system (containing 450 rules for

diagnosing blood infectious diseases)

EMYCIN an ES shell

ndash PROSPECTOR first knowledge-based system that made significant profit

(geological ES for mineral deposits)

ndash AI became an industry (1980 ndash 1989)

ndash wide applications in various domains

ndash commercially available tools

ndash Current trends (1990 ndash present)

ndash more realistic goals

ndash more practical (application oriented)

8

Artificial Intelligence Chapter1Anurag Dixit

ndash resurgence of neural networks and emergence of genetic algorithms

ndash distributed AI intelligent agents and semantic web

Possible Approaches

Planning and Problem Solving

In the AI literature planning refers to determining a sequence of actions you know how

to perform that will achieve a particular objective Problem solving is finding a plan for a

task in an abstract domain A problem is hard if you do not know how to work out an

appropriate sequence of steps and is solved once such a sequence has been found actual

execution is irrelevant

Unlike many areas of AI planning shows a clear line of researchers building on each

others work

Think

Act

Like humans Well

GPS

Turing test Eliza

Rational

agents

Heuristic

systems

AI tends to work mostly in this area

9

Artificial Intelligence Chapter1Anurag Dixit

I SOME BASIC IDEAS

To avoid having to cope with the complexities of the physical world much early work in

AI was directed toward abstract activities such as proving theorems playing games like

chess and checkers or solving puzzles To illustrate the discussion in this section we will

use the Tower of Hanoi (Hanoi) a puzzle involving three pegs on which disks can be

placed and a set of disks of varying size

The disks only can be moved one at a time between the pegs and a disk must

never be stacked on top of a smaller one The problem is to transport the entire stack of

disks to another peg Reasoning about such a problem obviously requires representing

states of the world and having some way of specifying the objective or goal These

representations must be rich enough to embody all the aspects of the world that will be

reasoned about In particular since planning is about changing things every property that

might be affected must be represented as dependent on time in some way For Hanoi this

requires only an ability to represent sets of disks positions as the disks are initially as

they are required to be eventually and as they may be in between A planner also needs

to represent what can be donemdashsuch as the moves that can be made as determined by the

nature of the game or puzzle There is a fundamental difference between an agent

executing an action and thus affecting the world and a planning system manipulating

representations to derive information about doing so which we call applying an operator

10

Artificial Intelligence Chapter1Anurag Dixit

Introduction to Predicate calculus

The Propositional Calculus

Symbols and Sentences

The propositional calculus and in the next subsection the predicate calculus are first of

all languages Using their words phrases and sentences we can represent and reason

about properties and relationships in the world The first step in describing a language is

to introduce the pieces that make it up its set of symbols

PROPOSITIONAL CALCULUS SYMBOLS

The symbols of propositional calculus are the propositional symbols

P Q R S

truth symbolstrue false

and connectives V - ~ =

Propositional symbols denote propositions or statements about the world that may be

either true or false such as the car is red or water is wet Propositions are denoted by

uppercase letters near the end of the English alphabet Sentences in the propositional

calculus are fanned from these atomic symbols according to the following rules

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence

For example true P Q and R are sentences

The negation of a sentence is a sentence

For example -- P and - false are sentences

The conjunction or and of two sentences is a sentence

For example P - P is a sentence

The disjunction or or of two sentences is a sentence

For example P v -- P is a sentence

The implication of one sentence from another is a sentence

11

Artificial Intelligence Chapter1Anurag Dixit

For example P -7 Q is a sentence

The equivalence of two sentences is a sentence

For example P v Q = R is a sentence

Legal sentences are also called well-formed formulas or WFFs

In expressions of the form P A Q P and Q are called the conjuncts In P v Q P and Q are

referred to as disjuncts In an implication P -7 Q P is the premise or antecedent and Q

the conclusion or consequent

In propositional calculus sentences the symbols ( ) and [ ] are used to group symbols nto

sub expressions and so to control their order of evaluation and meaning

The Semantics of the Propositional Calculus

In this section we formally define the semantics or meaning of these sentences

Because AI programs must reason with their representational structures it is important to

demonstrate that the truth of their conclusions depends only on the truth of their initial

knowledge ie that logical errors are not introduced by the inference procedures A

precise treatment of semantics is essential to this goal

A proposition symbol corresponds to a statement about the world For example P may

denote the statement it is raining or Q the statement I live in a brown house A

proposition may be either true or false given some state of the world The truth value

assignment to propositional sentences is called an interpretation an assertion about their

truth in some possible world

Formally an interpretation is a mapping from the propositional symbols into the set

T F

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value either T

12

Artificial Intelligence Chapter1Anurag Dixit

or F to each propositional symbol The symbol true is always assigned T and the symbol

false is assigned F

Predicate Calculus

In propositional calculus each atomic symbol (P 0 etc) denotes a proposition of some

complexity There is no way to access the components of an individual assertion

Predicate calculus provides this ability For example instead of letting a single

propositional symbol

P denote the entire sentence it rained on Tuesday we can create a predicate weather

that describes a relationship between a date and the weather weather (tuesday rain)

Through inference rules we can manipulate predicate calculus expressions accessing

their individual components and inferring new sentences

Predicate calculus also allows expressions to contain variables Variables let us create

general assertions about classes of entities For example we could state that for all values

of X where Xis a day of the week the statement weather(X rain) is true ie it rains

every day As with propositional calculus we will first define the syntax of the language

and then discuss its semantics

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus we define an

alphabet and grammar for creating the symbols of the language This corresponds to the

lexical aspect of a programming language definition Predicate calculus symbols like the

tokens in a programming language are irreducible syntactic elements they cannot be

broken into their component parts by the operations of the language

In this text we represent predicate calculus symbols as strings of letters and digits

beginning with a letter Blanks and non alphanumeric characters cannot appear within the

string although the underscore _ may be used to improve readability

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of

13

Artificial Intelligence Chapter1Anurag Dixit

1 The set of letters both upper- and lowercase of the English alphabet

2 The set of digits 0 1 9

3 The underscore _

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include: a, R, 6, 9, p, _, z

Examples of characters not in the alphabet include special characters such as &.

Parentheses ( ) commas and periods are used solely to construct well-formed

expressions and do not denote objects or relations in the world These are called improper

symbols

Predicate calculus symbols may represent either variables constants functions or

predicates Constants name specific objects or properties in the world Constant symbols

must begin with a lowercase letter Predicate calculus also allows functions on objects in

the world of discourse Function symbols (like constants) begin with a lowercase letter

Functions denote a mapping of one or more elements in a set (called the domain of the

function) into a unique element of another set (the range of the function) Elements of the

domain and range are objects in the world of discourse. In addition to common arithmetic functions such as addition and multiplication, functions may define mappings between nonnumeric domains.

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character


4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of arity n, followed by n terms, t1, t2, ..., tn, enclosed in parentheses and separated by commas.

A predicate calculus term is either a constant, a variable, or a function expression. Thus a predicate calculus term may be used to denote objects and properties in a problem domain. Examples of terms are constants such as cat and tuesday, variables such as X, and function expressions such as times(2, 3) or mother(sarah).

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n, followed by n terms, t1, t2, ..., tn, enclosed in parentheses and separated by commas.

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the predicate calculus. These are the same logical connectives used in propositional calculus: ∧, ∨, ¬, →, and =.

When a variable appears as an argument in a sentence, it refers to unspecified objects in the domain. First-order predicate calculus (Section 2.2.2) includes two symbols, the variable quantifiers ∀ and ∃, that constrain the meaning of a sentence containing a variable. A quantifier is followed by a variable and a sentence, such as

∃ Y friends(Y, peter)

∀ X likes(X, ice_cream)

The universal quantifier ∀ indicates that the sentence is true for all values of the variable. In the example, ∀ X likes(X, ice_cream) is true for all values in the domain of the definition of X. The existential quantifier ∃ indicates that the sentence is true for at least one value in the domain: ∃ Y friends(Y, peter) is true if there is at least one object, indicated by Y, that is a friend of peter.

Sentences in the predicate calculus are defined inductively.

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence.

1. If s1 is a sentence, then so is its negation ¬s1.

2. If s1 and s2 are sentences, then so is their conjunction s1 ∧ s2.

3. If s1 and s2 are sentences, then so is their disjunction s1 ∨ s2.

4. If s1 and s2 are sentences, then so is their implication s1 → s2.
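To make the role of variables and quantifiers concrete, the following Python fragment is an illustrative sketch only (the holds() helper, the sample facts, and the constant george are assumptions that follow the weather and friends examples above); it checks universally and existentially quantified sentences over a small finite domain by enumeration.

# Illustrative sketch: ground facts stored as tuples; quantifiers checked by
# enumerating a finite domain. Names mirror the weather/friends examples above.
days = ['monday', 'tuesday', 'wednesday']
facts = {
    ('weather', 'monday', 'rain'),
    ('weather', 'tuesday', 'rain'),
    ('friends', 'george', 'peter'),      # george is an assumed constant
}

def holds(predicate, *args):
    return (predicate,) + args in facts

# "for all X, weather(X, rain)" over the finite domain of days
forall_rain = all(holds('weather', x, 'rain') for x in days)

# "there exists Y such that friends(Y, peter)"
exists_friend = any(holds('friends', y, 'peter') for y in days + ['george'])

print(forall_rain)     # False: there is no weather fact for wednesday
print(exists_friend)   # True: friends(george, peter) is a fact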

LISP

Overview

• Lisp stands for "LISt Processing"

  – Invented by John McCarthy (1958)

  – Simple data structures (atoms and lists)

  – Heavy use of recursion

  – Prefix notation of expressions

  – Interpreted language

• Why Lisp?

  – It is the most widely used AI programming language

  – It is good for writing production software

  – It is especially good for prototyping

  – It has lots of features other languages don't

  – You can write new programs and extend old programs really, really quickly in Lisp

Table of Contents

Symbols; Numbers; Conses; Lists; Functions; Printing; Forms and the Top-level Loop; Special Forms; Binding; Dynamic Scoping; Arrays; Strings; Structures; Setf; Booleans and Conditionals; Iteration; Non-local Exits; Funcall, Apply, and Mapcar; Lambda; Sorting; Equality; Some Useful List Functions; Getting Started with Emacs; Further Information

Symbols


A symbol is just a string of characters. There are restrictions on what you can include in a symbol and what the first character can be, but as long as you stick to letters, digits, and hyphens, you'll be safe. (Except that if you use only digits and possibly an initial hyphen, LISP will think you typed an integer rather than a symbol.) Some examples of symbols: a, b, c1, foo, bar, baaz-quux-garply.

Some things you can do with symbols follow. (Things shown after a > prompt are what you type to the LISP interpreter, while other things are what the LISP interpreter prints back to you. The ; is LISP's comment character: everything from a ; to the end of the line is ignored.)

> (setq a 5)       ; store a number as the value of a symbol
5
> a                ; take the value of a symbol
5
> (let ((a 6)) a)  ; bind the value of a symbol temporarily to 6
6
> a                ; the value returns to 5 once the let is finished
5
> (+ a 6)          ; use the value of a symbol as an argument to a function
11
> b                ; try to take the value of a symbol which has no value
Error: Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols, t and nil. The value of t is defined always to be t, and the value of nil is defined always to be nil. LISP uses t and nil to represent true and false. An example of this use is in the if statement, described more fully later:

> (if t 5 6)
5
> (if nil 5 6)
6
> (if 4 5 6)
5

Keywords

The last example is odd but correct: nil means false and anything else means true. (Unless we have a reason to do otherwise, we use t to mean true, just for the sake of clarity.) Symbols like t and nil are called self-evaluating symbols, because they evaluate to themselves. There is a whole class of self-evaluating symbols called keywords; any symbol whose name starts with a colon is a keyword. (See below for some uses for keywords.) Some examples:

> :this-is-a-keyword
:THIS-IS-A-KEYWORD
> :so-is-this
:SO-IS-THIS
> :me-too
:ME-TOO

Numbers

An integer is a string of digits, optionally preceded by + or -. A real number looks like an integer, except that it has a decimal point and optionally can be written in scientific notation. A rational looks like two integers with a / between them. LISP supports complex numbers, which are written #c(r i) (where r is the real part and i is the imaginary part). A number is any of the above. Here are some numbers: 5, 17, -34, +6, 3.1415, 1.722e-15, #c(1.722e-15 0.75)

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos, tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument. +, -, *, and / return a number according to type contagion: an integer plus a rational is a rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are some examples:

> (+ 3 3/4)               ; type contagion
15/4
> (exp 1)                 ; e
2.7182817
> (exp 3)                 ; e*e*e
20.085537
> (expt 3 4.2)            ; exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))    ; the fns +, -, *, / all accept multiple arguments
738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either perception, reasoning, or actuation.

• Perception

  – computer vision

  – natural language processing

• Reasoning (i.e., problem solving): mapping from percepts to actuators

  – automated reasoning

  – knowledge representation

  – search and optimization

  – decision/game theory

  – machine learning

• Actuation

  – robotics

  – softbotics

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents. An agent is an entity that (i) perceives, (ii) reasons, and (iii) acts. In computational terms, that which is perceived is an input; to reason is to compute; to act is to output the result of computation. Typically an agent is equipped with objectives. A rational agent is one that

acts optimally with respect to its objectives. Agents are often distinguished from typical computational processes by their autonomy: they operate without direct human intervention. In addition, agents are reactive (they perceive their environments and attempt to respond in a timely manner to changing conditions) and proactive (their behavior is goal directed rather than simply response-driven).

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent      Sensors        Actuators
Human      Senses         Arms, legs
Robotic    Cameras        Motors, wheels
Software   Bit strings    Bit strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows

• Deterministic vs. Nondeterministic: is the next state predictable (e.g., chess), or is there uncertainty about state transitions (e.g., backgammon)?

• Discrete vs. Continuous: can the environment be described in discrete terms (e.g., chess), or is the environment continuous (e.g., driving)?

• Static vs. Dynamic: is the environment static (e.g., chess), or can it change while the agent is reasoning about its plan of action (e.g., driving)?

• Sequential vs. One-shot: does the agent need to reason about the future impact of its immediate actions (e.g., chess), or can it treat each action independently (e.g., Rochambeau)?

• Single agent vs. Multiagent: can we assume the agent is operating alone in its environment, or need it explicitly reason about the actions of other agents (e.g., chess, backgammon, Rochambeau, driving)?

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors


An agent can be seen as a mapping between percept sequences and actions:

Agent: Percept → Action

The less an agent relies on its built-in knowledge, as opposed to the current percept sequence, the more autonomous it is. A rational agent is an agent whose acts try to maximize some performance measure.

Example Vacuum-cleaner

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

function REFLEX-VACUUM-AGENT([location, status]) returns action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
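A direct Python transcription of this pseudocode might look as follows; this is only a sketch, with the location names A and B taken from the example above.

# Sketch: the reflex vacuum agent above, transcribed into Python.
def reflex_vacuum_agent(percept):
    location, status = percept          # percept is [location, status], e.g. ['A', 'Dirty']
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'

print(reflex_vacuum_agent(['A', 'Dirty']))   # Suck
print(reflex_vacuum_agent(['A', 'Clean']))   # Right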

Intelligent Agents

Agent: an entity in a program or environment capable of generating action.
An agent uses perception of the environment to make decisions about actions to take.
The perception capability is usually called a sensor.
The actions can depend on the most recent perception or on the entire history (percept sequence).

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment
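This cycle can be sketched in Python as below; it is an illustrative sketch only, in which the Environment class is a hypothetical two-cell vacuum world and run() is an assumed name for the interaction loop.

# Illustrative sketch of the cycle: environment -> sensors -> agent function
# -> actuators -> environment. The Environment class is a hypothetical stand-in.
class Environment:
    """Toy two-cell vacuum world used only to drive the loop."""
    def __init__(self):
        self.status = {'A': 'Dirty', 'B': 'Dirty'}
        self.location = 'A'

    def percept(self):                 # what the sensors deliver
        return [self.location, self.status[self.location]]

    def execute(self, action):         # what the actuators do to the environment
        if action == 'Suck':
            self.status[self.location] = 'Clean'
        elif action == 'Right':
            self.location = 'B'
        elif action == 'Left':
            self.location = 'A'

def run(agent_program, env, steps=4):
    for _ in range(steps):
        action = agent_program(env.percept())   # agent function: percept -> action
        env.execute(action)
    return env.status

# With the reflex_vacuum_agent sketched earlier:
# run(reflex_vacuum_agent, Environment())  ->  {'A': 'Clean', 'B': 'Clean'}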


Agent Classification

(a) Table-driven agents

  – use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.

(b) Simple reflex agents

  – are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.

(c) Agents with memory

  – have internal state, which is used to keep track of past states of the world.

(d) Agents with goals

  – are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.

(e) Utility-based agents

  – base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

A rational agent is one that can take the right decision in every situation.
Performance measure: a set of criteria/test bed for the success of the agent's behavior.
The performance measures should be based on the desired effect of the agent on the environment.

Rationality

The agent's rational behavior depends on:
– the performance measure that defines success
– the agent's knowledge of the environment
– the actions that it is capable of performing
– the current sequence of perceptions
Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice.
An environment can sometimes be completely known in advance.
Exploration: sometimes an agent must perform an action to gather information (to increase perception).
Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment

Task environment: the problem that the agent is a solution to. Properties:
– Observable, fully or partially. A fully observable environment needs less representation.
– Deterministic or stochastic. Strategic: deterministic except for the actions of other agents.


– Episodic or sequential. Sequential: future actions depend on the previous ones. Episodic: individual, unrelated tasks for the agent to solve.
– Static or dynamic.
– Discrete or continuous.
– Single agent or multi-agent. Multiple agents can be competitive or cooperative.

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose. (Smith, Cypher and Spohrer 94)

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires. (IBM)

Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions. (Hayes-Roth 94)

Agent vs Program

Size: an agent is usually smaller than a program.
Purpose: an agent has a specific purpose, while programs are multi-functional.
Persistence: an agent's life span is not entirely dependent on a user launching and quitting it.
Autonomy: an agent doesn't need the user's input to function.

Simple Agents

Table-driven agents: the function consists of a lookup table of actions to be taken for every possible state of the environment.

If the environment has n variables, each with t possible states, then the table size is t^n. This only works for a small number of possible states of the environment.

Simple reflex agents: decide on the action to take based only on the current perception, and not on the history of perceptions. They are based on the condition-action rule:

(if (condition) action)

This works if the environment is fully observable.

Table-driven and simple reflex agents, Lisp version:

(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)
    (setq action (lookup percepts table))
    action))

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

Table-driven and simple reflex agents, Python version:

percepts = []
table = {}   # percept-sequence/action lookup table (assumed to be a dict)

def table_agent(percept):
    percepts.append(percept)
    action = lookup(percepts, table)
    return action

def reflex_agent(percept):
    state = interpret(percept)
    rule = match(state)
    action = decision(rule)
    return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Model-based reflex agent, Lisp version:

(setq state t)     ; the world model
(setq action nil)  ; latest action

(defun model_reflex_agent (percept)
  (let ((rule t))
    (setq state (update_state state action percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

Model-based reflex agent, Python version:

state = True    # the world model
action = False  # latest action

def model_reflex_agent(percept):
    global state, action   # update the module-level model and latest action
    state = update_state(state, action, percept)
    rule = match(state)
    action = decision(rule)
    return action

Goal-Driven Agents

The agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal).

In some cases the goal is easy to achieve. In others, it involves planning, sifting through a search space for possible solutions, and developing a strategy.

Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.

Choose actions so as to achieve a (given or computed) goal. A goal is a description of a desirable situation. Keeping track of the current state is often not enough; we need to add goals to decide which situations are good. Such agents are deliberative instead of reactive. They may have to consider long sequences of possible actions before deciding whether the goal is achieved; this involves consideration of the future ("what will happen if I do ...?"). A small planning sketch follows.
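As a hedged illustration of planning a path from the current state to a goal state, the sketch below plans with breadth-first search over an abstract state space; it is not the text's own algorithm, and plan(), the corridor example, and the successors model are illustrative assumptions.

# Illustrative goal-driven agent: plan a sequence of actions with breadth-first
# search, then execute the plan. successors() and the goal test are placeholders.
from collections import deque

def plan(start, is_goal, successors):
    """Return a list of actions leading from start to a goal state, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None

# Tiny example: move along a corridor of cells 0..4 until cell 4 is reached.
successors = lambda s: [('right', min(s + 1, 4)), ('left', max(s - 1, 0))]
print(plan(0, lambda s: s == 4, successors))   # ['right', 'right', 'right', 'right']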


Learning Agents

Agents capable of acquiring new competence through observations and actions. Components:
– learning element (modifies the performance element)
– performance element (selects actions)
– feedback element (critic)
– exploration element (problem generator)

Table-driven agents

• Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state.

• Problems:
  – Too big to generate and to store (chess has about 10^120 states, for example)
  – No knowledge of non-perceptual parts of the current state
  – Not adaptive to changes in the environment; requires the entire table to be updated if changes occur
  – Looping: can't make actions conditional on previous actions/states

Simple reflex agents


• Rule-based reasoning to map from percepts to optimal action; each rule handles a collection of perceived states.

• Problems:
  – Still usually too big to generate and to store
  – Still no knowledge of non-perceptual parts of state
  – Still not adaptive to changes in the environment; requires the collection of rules to be updated if changes occur
  – Still can't make actions conditional on previous state

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?

• A goal specifies a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes "degree of happiness".

• Utility function U: State → Reals, indicating a measure of success or happiness when at a given state (see the sketch below).

• Allows decisions comparing choice between conflicting goals, and choice between likelihood of success and importance of goal (if achievement is uncertain).
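A minimal sketch of utility-based selection, assuming a hypothetical result() model of action outcomes and a utility() that simply counts clean cells: the agent picks the action whose predicted resulting state has the highest utility.

# Minimal sketch: choose the action whose predicted resulting state has the
# highest utility. result() and utility() are hypothetical placeholders.
def utility(state):
    return sum(1 for v in state.values() if v == 'Clean')   # prefer more clean cells

def result(state, action, location='A'):
    new_state = dict(state)          # predicted outcome of the action
    if action == 'Suck':
        new_state[location] = 'Clean'
    return new_state

def utility_based_choice(state, actions):
    return max(actions, key=lambda a: utility(result(state, a)))

state = {'A': 'Dirty', 'B': 'Clean'}
print(utility_based_choice(state, ['Suck', 'Right', 'Left']))   # Suck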


Agents with memory

• Encode the "internal state" of the world to remember the past, as contained in earlier percepts.

• Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept.

• Requires the ability to represent change in the world; one possibility is to represent just the latest state, but then the agent can't reason about hypothetical courses of action.


Other Types of Agents

Temporally continuous: a continuously running process.
Communicative agent: exchanging information with other agents to complete its task.
Mobile agent: capable of moving from one machine to another (or from one environment to another).
Flexible agent: whose actions are not scripted.
Character: an agent with conversation skills, personality, and even emotional state.


referred to as disjuncts In an implication P -7 Q P is the premise or antecedent and Q

the conclusion or consequent

In propositional calculus sentences the symbols ( ) and [ ] are used to group symbols nto

sub expressions and so to control their order of evaluation and meaning

The Semantics of the Propositional Calculus

In this section we formally define the semantics or meaning of these sentences

Because AI programs must reason with their representational structures it is important to

demonstrate that the truth of their conclusions depends only on the truth of their initial

knowledge ie that logical errors are not introduced by the inference procedures A

precise treatment of semantics is essential to this goal

A proposition symbol corresponds to a statement about the world For example P may

denote the statement it is raining or Q the statement I live in a brown house A

proposition may be either true or false given some state of the world The truth value

assignment to propositional sentences is called an interpretation an assertion about their

truth in some possible world

Formally an interpretation is a mapping from the propositional symbols into the set

T F

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value either T

12

Artificial Intelligence Chapter1Anurag Dixit

or F to each propositional symbol The symbol true is always assigned T and the symbol

false is assigned F

Predicate Calculus

In propositional calculus each atomic symbol (P 0 etc) denotes a proposition of some

complexity There is no way to access the components of an individual assertion

Predicate calculus provides this ability For example instead of letting a single

propositional symbol

P denote the entire sentence it rained on Tuesday we can create a predicate weather

that describes a relationship between a date and the weather weather (tuesday rain)

Through inference rules we can manipulate predicate calculus expressions accessing

their individual components and inferring new sentences

Predicate calculus also allows expressions to contain variables Variables let us create

general assertions about classes of entities For example we could state that for all values

of X where Xis a day of the week the statement weather(X rain) is true ie it rains

every day As with propositional calculus we will first define the syntax of the language

and then discuss its semantics

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus we define an

alphabet and grammar for creating the symbols of the language This corresponds to the

lexical aspect of a programming language definition Predicate calculus symbols like the

tokens in a programming language are irreducible syntactic elements they cannot be

broken into their component parts by the operations of the language

In this text we represent predicate calculus symbols as strings of letters and digits

beginning with a letter Blanks and non alphanumeric characters cannot appear within the

string although the underscore _ may be used to improve readability

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of

13

Artificial Intelligence Chapter1Anurag Dixit

1 The set of letters both upper- and lowercase of the English alphabet

2 The set of digits 0 1 9

3 The underscore _

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include

aR69p_z

Examples of characters not in the alphabet include

amp

Parentheses ( ) commas and periods are used solely to construct well-formed

expressions and do not denote objects or relations in the world These are called improper

symbols

Predicate calculus symbols may represent either variables constants functions or

predicates Constants name specific objects or properties in the world Constant symbols

must begin with a lowercase letter Predicate calculus also allows functions on objects in

the world of discourse Function symbols (like constants) begin with a lowercase letter

Functions denote a mapping of one or more elements in a set (called the domain of the

function) into a unique element of another set (the range of the function) Elements of the

domain and range are objects in k world of discourse In addition to common arithmetic

functions such as addition and multiplication functions may define mappings between

nonnumeric domains

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character

14

Artificial Intelligence Chapter1Anurag Dixit

4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of atiry n followed by n terms

t1 t2 helliphellip tn enclosed in parentheses and separated by commas

A predicate calculus term is either a constant variable or function expression Thus a

predicate calculus term may be used to denote objects and properties in a problem

domain Examples of terms are

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n followed by n terms

t 1 t enclosed in parentheses and separated by commas

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the

predicate calculus These are the same logical connectives used in propositional calculus

A v - ---7 and =

When a variable appears as an argument in a sentence it refers to unspecified objects

in the domain First order (Section 222) predicate calculus includes two symbols the

variable quantifiers V and 3 that constrain the meaning of a sentence containing a

variable A quantifier is followed by a variable and a sentence such as

3 Y friends(Y peter)

15

Artificial Intelligence Chapter1Anurag Dixit

V X likes(X ice_cream)

The universal quantifier v indicates that the sentence is true for all values of the

variable

In the example V X likes(X ice_cream) is true for all values in the domain of the

definition of X The existential quantifier 3 indicates that the sentence is true for at least

one value in the domain 3 Y friends(Y peter) is true if there is at least one object

indicated by Y that is a friend of peter

Sentences in the predicate calculus are defined inductivefy

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence

1 If 5 is a sentence then so is its negation --- 5

2 If s and 52 are sentences then so is their conjunction 5 j 1 52

3 If 81 and 82 are sentences then so is their disjunction s v 52

4 If 51 and Sz are sentences then so is their implication 51 ~ 82

LISP

Overview

bull Lisp stands for ldquoLISt Processrdquo

ndash Invented by John McCarthy (1958)

ndash Simple data structure (atoms and lists)

ndash Heavy use of recursion

ndash Prefix notation of expressions

ndash Interpretive language

bull Why Lisp

ndash It is the most widely used AI programming language

16

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

• Rule-based reasoning maps from percepts to the optimal action; each rule handles a collection of perceived states.

• Problems:
 – Still usually too big to generate and to store
 – Still no knowledge of non-perceptual parts of state
 – Still not adaptive to changes in the environment; the collection of rules must be updated if changes occur
 – Still can't make actions conditional on previous state

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?
• A goal specifies only a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes a "degree of happiness".
• Utility function U: State → Reals, giving a measure of success or happiness when at a given state.
• Allows decisions that compare choices between conflicting goals, and choices between likelihood of success and importance of a goal (if achievement is uncertain); a small sketch of this selection rule follows below.
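A minimal sketch of the action-selection rule this implies, where result(state, action) (a one-step transition model) and utility(state) (the U: State → Reals measure above) are assumed, illustrative functions:

def utility_agent(state, actions, result, utility):
    # Choose the action whose predicted resulting state has the highest utility.
    return max(actions, key=lambda action: utility(result(state, action)))

Under uncertainty the same idea applies with expected utility: sum the utility of each possible outcome weighted by its probability, and pick the action with the highest expectation.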


Agents with memory

• Encode the "internal state" of the world in order to remember the past as contained in earlier percepts.

• Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept.

• Requires the ability to represent change in the world; one possibility is to represent just the latest state, but then the agent cannot reason about hypothetical courses of action.


Other Types of Agents

Temporally continuous agent - a continuously running process
Communicative agent - exchanges information with other agents to complete its task
Mobile agent - capable of moving from one machine to another (or from one environment to another)
Flexible agent - an agent whose actions are not scripted
Character - an agent with conversation skills, personality, and even emotional state


Page 4: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

intelligencerdquo 1048708 (or in some cases tasks that require greater-than-human

intelligence)

John McCarthy on AI 1048708

ldquoIt is the science and engineering of making intelligent machines especially intelligent

computer programsrdquo 1048708

ldquoIntelligence is the computational part of the ability to achieve goals in the worldrdquo

Operational Definition of AI

Systems that act like humans

Turing test

Systems that think like humans

Cognitive Science

Systems that think rationally

Logic-based AI

Systems that act rationally

Rational Agents

Thinking Rationally Laws of Thought

Several Greek schools at the time of Aristotle developed various forms of logic

Notation and rules of derivation for thoughts they may or may not have proceeded to the

idea of mechanization Direct line through mathematics and philosophy to modern AI

Problems

1 Not all intelligent behavior is mediated by logical deliberation

2 What is the purpose of thinking What thoughts should I have

Thinking Humanly Cognitive Science

1960s ldquocognitive revolutionrdquo information-processing psychology replaced prevailing

orthodoxy of behaviorism

4

Artificial Intelligence Chapter1Anurag Dixit

Require scientific theories of internal activities of the brain

What level of abstraction ldquoKnowledgerdquo or ldquocircuitsrdquo

How to validate It requires

1 Predicting and testing behavior of human subjects (top-down)

2 Direct identification from neurological data (bottom-up)

Both approaches Cognitive Science and Cognitive Neuroscience share with AI on the

available theories do not explain (or engender) anything resembling human-level general

intelligence

Rational Agents

An agent is an entity that perceives and acts

This course is about designing rational agents

Abstractly an agent is a function from percept histories to actions

For any given class of environments and tasks we seek the agent (or class of agents) with

the best performance

Caveat computational limitations make perfect rationality unachievable

Approach design best program for given machine resources

Acting Rationally

Rational behavior doing the right thing that which is expected to maximize goal

achievement given the available information

Doesnrsquot necessarily involve thinkingmdasheg blinking reflexmdashbut thinking should be in

the service of rational action

Aristotle Every art and every inquiry and similarly every action and pursuit is thought

to aim at some good

Artificial Intelligence Vs Conventional Programming

5

Artificial Intelligence Chapter1Anurag Dixit

Artificial Intelligence Conventional Programming

Artificial Intelligencea primarily symbolicb heuristic search(solution steps implicit)c control structure usuallyseparate from domain knowledged usually easy to modifyupdate and enlargee some incorrect answersoften tolerablef satisfactory answers usually

acceptable

Conventional computer programminga algorithmic(solutions steps explicit)b primarily numericc information and controlintegrated togetherd difficult to modifye correct answers requiredf best possible solution

usually sought

Foundations of AI

(a) Computer Science amp Engineering

Computer hardware and Software

(b) Philosophy

Rule of Reasoning

(c) Biology

Human animals brain activity

(d) Linguistics

communication

(e) Cognitive Science

High level humananimal thinking

(f) Psychology

Complex systems game

(g) Economics

Cost benefits ratio

6

Artificial Intelligence Chapter1Anurag Dixit

(h) MathematicsLogic algorithm optimization

AI Prehistory

Philosophy logic methods of reasoning mind as physical system foundations of learning

language rationality

Mathematics formal representation and proof algorithms computation (un)decidability

(in)tractability probability

Psychology adaptation phenomena of perception and motor control experimental

techniques (psychophysics etc)

Economics formal theory of rational decisions

Linguistics knowledge representation grammar

Neuroscience plastic physical substrate for mental activity

Control theory homeostatic systems stability simple optimal agent designs

History of AI

bull The birth of AI (1943 ndash 1956)

ndash Pitts and McCulloch (1943) simplified mathematical model of neurons

(restingfiring states) can realize all propositional logic primitives (can

compute all Turing computable functions)

ndash Allen Turing Turing machine and Turing test (1950)

ndash Claude Shannon information theory early game theory possibility of

chess playing computers

ndash Tracing back to Boole Aristotle Euclid (logics syllogisms algebra of

symbols)

bull Early enthusiasm (1952 ndash 1969)

ndash 1956 Dartmouth conference

John McCarthy (Lisp)

7

Artificial Intelligence Chapter1Anurag Dixit

Marvin Minsky (first neural network machine)

Alan Newell and Herbert Simon (GPS)

ndash Emphasize on intelligent general problem solving

Heuristics of human problem solving (means-ends analysis in GPS )

Resolution by John Robinson (basis for automatic theorem proving)

heuristic search (A AO game tree search)

bull Emphasis on knowledge (1966 ndash 1974)

ndash domain specific knowledge is the key to overcome existing difficulties

ndash knowledge representation (KR) paradigms

ndash declarative vs procedural representation

bull Knowledge-based systems (1969 ndash 1979)

ndash DENDRAL the first knowledge intensive system (determining 3D

structures of complex chemical compounds)

ndash MYCIN first rule-based expert system (containing 450 rules for

diagnosing blood infectious diseases)

EMYCIN an ES shell

ndash PROSPECTOR first knowledge-based system that made significant profit

(geological ES for mineral deposits)

ndash AI became an industry (1980 ndash 1989)

ndash wide applications in various domains

ndash commercially available tools

ndash Current trends (1990 ndash present)

ndash more realistic goals

ndash more practical (application oriented)

8

Artificial Intelligence Chapter1Anurag Dixit

ndash resurgence of neural networks and emergence of genetic algorithms

ndash distributed AI intelligent agents and semantic web

Possible Approaches

Planning and Problem Solving

In the AI literature planning refers to determining a sequence of actions you know how

to perform that will achieve a particular objective Problem solving is finding a plan for a

task in an abstract domain A problem is hard if you do not know how to work out an

appropriate sequence of steps and is solved once such a sequence has been found actual

execution is irrelevant

Unlike many areas of AI planning shows a clear line of researchers building on each

others work

Think

Act

Like humans Well

GPS

Turing test Eliza

Rational

agents

Heuristic

systems

AI tends to work mostly in this area

9

Artificial Intelligence Chapter1Anurag Dixit

I SOME BASIC IDEAS

To avoid having to cope with the complexities of the physical world much early work in

AI was directed toward abstract activities such as proving theorems playing games like

chess and checkers or solving puzzles To illustrate the discussion in this section we will

use the Tower of Hanoi (Hanoi) a puzzle involving three pegs on which disks can be

placed and a set of disks of varying size

The disks only can be moved one at a time between the pegs and a disk must

never be stacked on top of a smaller one The problem is to transport the entire stack of

disks to another peg Reasoning about such a problem obviously requires representing

states of the world and having some way of specifying the objective or goal These

representations must be rich enough to embody all the aspects of the world that will be

reasoned about In particular since planning is about changing things every property that

might be affected must be represented as dependent on time in some way For Hanoi this

requires only an ability to represent sets of disks positions as the disks are initially as

they are required to be eventually and as they may be in between A planner also needs

to represent what can be donemdashsuch as the moves that can be made as determined by the

nature of the game or puzzle There is a fundamental difference between an agent

executing an action and thus affecting the world and a planning system manipulating

representations to derive information about doing so which we call applying an operator

10

Artificial Intelligence Chapter1Anurag Dixit

Introduction to Predicate calculus

The Propositional Calculus

Symbols and Sentences

The propositional calculus and in the next subsection the predicate calculus are first of

all languages Using their words phrases and sentences we can represent and reason

about properties and relationships in the world The first step in describing a language is

to introduce the pieces that make it up its set of symbols

PROPOSITIONAL CALCULUS SYMBOLS

The symbols of propositional calculus are the propositional symbols

P Q R S

truth symbolstrue false

and connectives V - ~ =

Propositional symbols denote propositions or statements about the world that may be

either true or false such as the car is red or water is wet Propositions are denoted by

uppercase letters near the end of the English alphabet Sentences in the propositional

calculus are fanned from these atomic symbols according to the following rules

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence

For example true P Q and R are sentences

The negation of a sentence is a sentence

For example -- P and - false are sentences

The conjunction or and of two sentences is a sentence

For example P - P is a sentence

The disjunction or or of two sentences is a sentence

For example P v -- P is a sentence

The implication of one sentence from another is a sentence

11

Artificial Intelligence Chapter1Anurag Dixit

For example P -7 Q is a sentence

The equivalence of two sentences is a sentence

For example P v Q = R is a sentence

Legal sentences are also called well-formed formulas or WFFs

In expressions of the form P A Q P and Q are called the conjuncts In P v Q P and Q are

referred to as disjuncts In an implication P -7 Q P is the premise or antecedent and Q

the conclusion or consequent

In propositional calculus sentences the symbols ( ) and [ ] are used to group symbols nto

sub expressions and so to control their order of evaluation and meaning

The Semantics of the Propositional Calculus

In this section we formally define the semantics or meaning of these sentences

Because AI programs must reason with their representational structures it is important to

demonstrate that the truth of their conclusions depends only on the truth of their initial

knowledge ie that logical errors are not introduced by the inference procedures A

precise treatment of semantics is essential to this goal

A proposition symbol corresponds to a statement about the world For example P may

denote the statement it is raining or Q the statement I live in a brown house A

proposition may be either true or false given some state of the world The truth value

assignment to propositional sentences is called an interpretation an assertion about their

truth in some possible world

Formally an interpretation is a mapping from the propositional symbols into the set

T F

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value either T

12

Artificial Intelligence Chapter1Anurag Dixit

or F to each propositional symbol The symbol true is always assigned T and the symbol

false is assigned F

Predicate Calculus

In propositional calculus each atomic symbol (P 0 etc) denotes a proposition of some

complexity There is no way to access the components of an individual assertion

Predicate calculus provides this ability For example instead of letting a single

propositional symbol

P denote the entire sentence it rained on Tuesday we can create a predicate weather

that describes a relationship between a date and the weather weather (tuesday rain)

Through inference rules we can manipulate predicate calculus expressions accessing

their individual components and inferring new sentences

Predicate calculus also allows expressions to contain variables Variables let us create

general assertions about classes of entities For example we could state that for all values

of X where Xis a day of the week the statement weather(X rain) is true ie it rains

every day As with propositional calculus we will first define the syntax of the language

and then discuss its semantics

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus we define an

alphabet and grammar for creating the symbols of the language This corresponds to the

lexical aspect of a programming language definition Predicate calculus symbols like the

tokens in a programming language are irreducible syntactic elements they cannot be

broken into their component parts by the operations of the language

In this text we represent predicate calculus symbols as strings of letters and digits

beginning with a letter Blanks and non alphanumeric characters cannot appear within the

string although the underscore _ may be used to improve readability

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of

13

Artificial Intelligence Chapter1Anurag Dixit

1 The set of letters both upper- and lowercase of the English alphabet

2 The set of digits 0 1 9

3 The underscore _

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include

aR69p_z

Examples of characters not in the alphabet include

amp

Parentheses ( ) commas and periods are used solely to construct well-formed

expressions and do not denote objects or relations in the world These are called improper

symbols

Predicate calculus symbols may represent either variables constants functions or

predicates Constants name specific objects or properties in the world Constant symbols

must begin with a lowercase letter Predicate calculus also allows functions on objects in

the world of discourse Function symbols (like constants) begin with a lowercase letter

Functions denote a mapping of one or more elements in a set (called the domain of the

function) into a unique element of another set (the range of the function) Elements of the

domain and range are objects in k world of discourse In addition to common arithmetic

functions such as addition and multiplication functions may define mappings between

nonnumeric domains

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character

14

Artificial Intelligence Chapter1Anurag Dixit

4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of atiry n followed by n terms

t1 t2 helliphellip tn enclosed in parentheses and separated by commas

A predicate calculus term is either a constant variable or function expression Thus a

predicate calculus term may be used to denote objects and properties in a problem

domain Examples of terms are

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n followed by n terms

t 1 t enclosed in parentheses and separated by commas

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the

predicate calculus These are the same logical connectives used in propositional calculus

A v - ---7 and =

When a variable appears as an argument in a sentence it refers to unspecified objects

in the domain First order (Section 222) predicate calculus includes two symbols the

variable quantifiers V and 3 that constrain the meaning of a sentence containing a

variable A quantifier is followed by a variable and a sentence such as

3 Y friends(Y peter)

15

Artificial Intelligence Chapter1Anurag Dixit

V X likes(X ice_cream)

The universal quantifier v indicates that the sentence is true for all values of the

variable

In the example V X likes(X ice_cream) is true for all values in the domain of the

definition of X The existential quantifier 3 indicates that the sentence is true for at least

one value in the domain 3 Y friends(Y peter) is true if there is at least one object

indicated by Y that is a friend of peter

Sentences in the predicate calculus are defined inductivefy

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence

1 If 5 is a sentence then so is its negation --- 5

2 If s and 52 are sentences then so is their conjunction 5 j 1 52

3 If 81 and 82 are sentences then so is their disjunction s v 52

4 If 51 and Sz are sentences then so is their implication 51 ~ 82

LISP

Overview

bull Lisp stands for ldquoLISt Processrdquo

ndash Invented by John McCarthy (1958)

ndash Simple data structure (atoms and lists)

ndash Heavy use of recursion

ndash Prefix notation of expressions

ndash Interpretive language

bull Why Lisp

ndash It is the most widely used AI programming language

16

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 5: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

Require scientific theories of internal activities of the brain

What level of abstraction ldquoKnowledgerdquo or ldquocircuitsrdquo

How to validate It requires

1 Predicting and testing behavior of human subjects (top-down)

2 Direct identification from neurological data (bottom-up)

Both approaches Cognitive Science and Cognitive Neuroscience share with AI on the

available theories do not explain (or engender) anything resembling human-level general

intelligence

Rational Agents

An agent is an entity that perceives and acts

This course is about designing rational agents

Abstractly an agent is a function from percept histories to actions

For any given class of environments and tasks we seek the agent (or class of agents) with

the best performance

Caveat computational limitations make perfect rationality unachievable

Approach design best program for given machine resources

Acting Rationally

Rational behavior doing the right thing that which is expected to maximize goal

achievement given the available information

Doesnrsquot necessarily involve thinkingmdasheg blinking reflexmdashbut thinking should be in

the service of rational action

Aristotle Every art and every inquiry and similarly every action and pursuit is thought

to aim at some good

Artificial Intelligence Vs Conventional Programming

5

Artificial Intelligence Chapter1Anurag Dixit

Artificial Intelligence Conventional Programming

Artificial Intelligencea primarily symbolicb heuristic search(solution steps implicit)c control structure usuallyseparate from domain knowledged usually easy to modifyupdate and enlargee some incorrect answersoften tolerablef satisfactory answers usually

acceptable

Conventional computer programminga algorithmic(solutions steps explicit)b primarily numericc information and controlintegrated togetherd difficult to modifye correct answers requiredf best possible solution

usually sought

Foundations of AI

(a) Computer Science amp Engineering

Computer hardware and Software

(b) Philosophy

Rule of Reasoning

(c) Biology

Human animals brain activity

(d) Linguistics

communication

(e) Cognitive Science

High level humananimal thinking

(f) Psychology

Complex systems game

(g) Economics

Cost benefits ratio

6

Artificial Intelligence Chapter1Anurag Dixit

(h) MathematicsLogic algorithm optimization

AI Prehistory

Philosophy logic methods of reasoning mind as physical system foundations of learning

language rationality

Mathematics formal representation and proof algorithms computation (un)decidability

(in)tractability probability

Psychology adaptation phenomena of perception and motor control experimental

techniques (psychophysics etc)

Economics formal theory of rational decisions

Linguistics knowledge representation grammar

Neuroscience plastic physical substrate for mental activity

Control theory homeostatic systems stability simple optimal agent designs

History of AI

bull The birth of AI (1943 ndash 1956)

ndash Pitts and McCulloch (1943) simplified mathematical model of neurons

(restingfiring states) can realize all propositional logic primitives (can

compute all Turing computable functions)

ndash Allen Turing Turing machine and Turing test (1950)

ndash Claude Shannon information theory early game theory possibility of

chess playing computers

ndash Tracing back to Boole Aristotle Euclid (logics syllogisms algebra of

symbols)

bull Early enthusiasm (1952 ndash 1969)

ndash 1956 Dartmouth conference

John McCarthy (Lisp)

7

Artificial Intelligence Chapter1Anurag Dixit

Marvin Minsky (first neural network machine)

Alan Newell and Herbert Simon (GPS)

ndash Emphasize on intelligent general problem solving

Heuristics of human problem solving (means-ends analysis in GPS )

Resolution by John Robinson (basis for automatic theorem proving)

heuristic search (A AO game tree search)

bull Emphasis on knowledge (1966 ndash 1974)

ndash domain specific knowledge is the key to overcome existing difficulties

ndash knowledge representation (KR) paradigms

ndash declarative vs procedural representation

bull Knowledge-based systems (1969 ndash 1979)

ndash DENDRAL the first knowledge intensive system (determining 3D

structures of complex chemical compounds)

ndash MYCIN first rule-based expert system (containing 450 rules for

diagnosing blood infectious diseases)

EMYCIN an ES shell

ndash PROSPECTOR first knowledge-based system that made significant profit

(geological ES for mineral deposits)

ndash AI became an industry (1980 ndash 1989)

ndash wide applications in various domains

ndash commercially available tools

ndash Current trends (1990 ndash present)

ndash more realistic goals

ndash more practical (application oriented)

8

Artificial Intelligence Chapter1Anurag Dixit

ndash resurgence of neural networks and emergence of genetic algorithms

ndash distributed AI intelligent agents and semantic web

Possible Approaches

Planning and Problem Solving

In the AI literature planning refers to determining a sequence of actions you know how

to perform that will achieve a particular objective Problem solving is finding a plan for a

task in an abstract domain A problem is hard if you do not know how to work out an

appropriate sequence of steps and is solved once such a sequence has been found actual

execution is irrelevant

Unlike many areas of AI planning shows a clear line of researchers building on each

others work

Think

Act

Like humans Well

GPS

Turing test Eliza

Rational

agents

Heuristic

systems

AI tends to work mostly in this area

9

Artificial Intelligence Chapter1Anurag Dixit

I SOME BASIC IDEAS

To avoid having to cope with the complexities of the physical world much early work in

AI was directed toward abstract activities such as proving theorems playing games like

chess and checkers or solving puzzles To illustrate the discussion in this section we will

use the Tower of Hanoi (Hanoi) a puzzle involving three pegs on which disks can be

placed and a set of disks of varying size

The disks only can be moved one at a time between the pegs and a disk must

never be stacked on top of a smaller one The problem is to transport the entire stack of

disks to another peg Reasoning about such a problem obviously requires representing

states of the world and having some way of specifying the objective or goal These

representations must be rich enough to embody all the aspects of the world that will be

reasoned about In particular since planning is about changing things every property that

might be affected must be represented as dependent on time in some way For Hanoi this

requires only an ability to represent sets of disks positions as the disks are initially as

they are required to be eventually and as they may be in between A planner also needs

to represent what can be donemdashsuch as the moves that can be made as determined by the

nature of the game or puzzle There is a fundamental difference between an agent

executing an action and thus affecting the world and a planning system manipulating

representations to derive information about doing so which we call applying an operator

10

Artificial Intelligence Chapter1Anurag Dixit

Introduction to Predicate calculus

The Propositional Calculus

Symbols and Sentences

The propositional calculus and in the next subsection the predicate calculus are first of

all languages Using their words phrases and sentences we can represent and reason

about properties and relationships in the world The first step in describing a language is

to introduce the pieces that make it up its set of symbols

PROPOSITIONAL CALCULUS SYMBOLS

The symbols of propositional calculus are the propositional symbols

P Q R S

truth symbolstrue false

and connectives V - ~ =

Propositional symbols denote propositions or statements about the world that may be

either true or false such as the car is red or water is wet Propositions are denoted by

uppercase letters near the end of the English alphabet Sentences in the propositional

calculus are fanned from these atomic symbols according to the following rules

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence

For example true P Q and R are sentences

The negation of a sentence is a sentence

For example -- P and - false are sentences

The conjunction or and of two sentences is a sentence

For example P - P is a sentence

The disjunction or or of two sentences is a sentence

For example P v -- P is a sentence

The implication of one sentence from another is a sentence

11

Artificial Intelligence Chapter1Anurag Dixit

For example P -7 Q is a sentence

The equivalence of two sentences is a sentence

For example P v Q = R is a sentence

Legal sentences are also called well-formed formulas or WFFs

In expressions of the form P A Q P and Q are called the conjuncts In P v Q P and Q are

referred to as disjuncts In an implication P -7 Q P is the premise or antecedent and Q

the conclusion or consequent

In propositional calculus sentences the symbols ( ) and [ ] are used to group symbols nto

sub expressions and so to control their order of evaluation and meaning

The Semantics of the Propositional Calculus

In this section we formally define the semantics or meaning of these sentences

Because AI programs must reason with their representational structures it is important to

demonstrate that the truth of their conclusions depends only on the truth of their initial

knowledge ie that logical errors are not introduced by the inference procedures A

precise treatment of semantics is essential to this goal

A proposition symbol corresponds to a statement about the world For example P may

denote the statement it is raining or Q the statement I live in a brown house A

proposition may be either true or false given some state of the world The truth value

assignment to propositional sentences is called an interpretation an assertion about their

truth in some possible world

Formally an interpretation is a mapping from the propositional symbols into the set

T F

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value either T

12

Artificial Intelligence Chapter1Anurag Dixit

or F to each propositional symbol The symbol true is always assigned T and the symbol

false is assigned F

Predicate Calculus

In propositional calculus each atomic symbol (P 0 etc) denotes a proposition of some

complexity There is no way to access the components of an individual assertion

Predicate calculus provides this ability For example instead of letting a single

propositional symbol

P denote the entire sentence it rained on Tuesday we can create a predicate weather

that describes a relationship between a date and the weather weather (tuesday rain)

Through inference rules we can manipulate predicate calculus expressions accessing

their individual components and inferring new sentences

Predicate calculus also allows expressions to contain variables Variables let us create

general assertions about classes of entities For example we could state that for all values

of X where Xis a day of the week the statement weather(X rain) is true ie it rains

every day As with propositional calculus we will first define the syntax of the language

and then discuss its semantics

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus we define an

alphabet and grammar for creating the symbols of the language This corresponds to the

lexical aspect of a programming language definition Predicate calculus symbols like the

tokens in a programming language are irreducible syntactic elements they cannot be

broken into their component parts by the operations of the language

In this text we represent predicate calculus symbols as strings of letters and digits

beginning with a letter Blanks and non alphanumeric characters cannot appear within the

string although the underscore _ may be used to improve readability

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of

13

Artificial Intelligence Chapter1Anurag Dixit

1 The set of letters both upper- and lowercase of the English alphabet

2 The set of digits 0 1 9

3 The underscore _

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include

aR69p_z

Examples of characters not in the alphabet include

amp

Parentheses ( ) commas and periods are used solely to construct well-formed

expressions and do not denote objects or relations in the world These are called improper

symbols

Predicate calculus symbols may represent either variables constants functions or

predicates Constants name specific objects or properties in the world Constant symbols

must begin with a lowercase letter Predicate calculus also allows functions on objects in

the world of discourse Function symbols (like constants) begin with a lowercase letter

Functions denote a mapping of one or more elements in a set (called the domain of the

function) into a unique element of another set (the range of the function) Elements of the

domain and range are objects in k world of discourse In addition to common arithmetic

functions such as addition and multiplication functions may define mappings between

nonnumeric domains

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character

14

Artificial Intelligence Chapter1Anurag Dixit

4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of atiry n followed by n terms

t1 t2 helliphellip tn enclosed in parentheses and separated by commas

A predicate calculus term is either a constant variable or function expression Thus a

predicate calculus term may be used to denote objects and properties in a problem

domain Examples of terms are

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n followed by n terms

t 1 t enclosed in parentheses and separated by commas

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the

predicate calculus These are the same logical connectives used in propositional calculus

A v - ---7 and =

When a variable appears as an argument in a sentence it refers to unspecified objects

in the domain First order (Section 222) predicate calculus includes two symbols the

variable quantifiers V and 3 that constrain the meaning of a sentence containing a

variable A quantifier is followed by a variable and a sentence such as

3 Y friends(Y peter)

15

Artificial Intelligence Chapter1Anurag Dixit

V X likes(X ice_cream)

The universal quantifier v indicates that the sentence is true for all values of the

variable

In the example V X likes(X ice_cream) is true for all values in the domain of the

definition of X The existential quantifier 3 indicates that the sentence is true for at least

one value in the domain 3 Y friends(Y peter) is true if there is at least one object

indicated by Y that is a friend of peter

Sentences in the predicate calculus are defined inductivefy

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence

1 If 5 is a sentence then so is its negation --- 5

2 If s and 52 are sentences then so is their conjunction 5 j 1 52

3 If 81 and 82 are sentences then so is their disjunction s v 52

4 If 51 and Sz are sentences then so is their implication 51 ~ 82

LISP

Overview

bull Lisp stands for ldquoLISt Processrdquo

ndash Invented by John McCarthy (1958)

ndash Simple data structure (atoms and lists)

ndash Heavy use of recursion

ndash Prefix notation of expressions

ndash Interpretive language

bull Why Lisp

ndash It is the most widely used AI programming language

16

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos, tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument. +, -, *, and / return a number according to type contagion: an integer plus a rational is a rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are some examples:

> (+ 3 3/4)              ; type contagion
15/4
> (exp 1)                ; e
2.7182817
> (exp 3)                ; e*e*e
20.085537
> (expt 3 4.2)           ; exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))   ; the fns +, -, *, / all accept multiple arguments
738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either perception, reasoning, or actuation.

• Perception

– computer vision

– natural language processing

• Reasoning (i.e., problem solving): mapping from percepts to actuators

– automated reasoning

– knowledge representation

– search and optimization

– decision/game theory

– machine learning

• Actuation

– robotics

– softbotics

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical


computational processes by their autonomy: they operate without direct human intervention. In addition, agents are reactive (they perceive their environments and attempt to respond in a timely manner to changing conditions) and proactive (their behavior is goal-directed rather than simply response-driven).

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent      Sensors        Actuators
Human      Senses         Arms, Legs
Robotic    Cameras        Motors, Wheels
Software   Bit Strings    Bit Strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows

• Deterministic vs. Nondeterministic: is the next state predictable (e.g., chess), or is there uncertainty about state transitions (e.g., backgammon)?

• Discrete vs. Continuous: can the environment be described in discrete terms (e.g., chess), or is the environment continuous (e.g., driving)?

• Static vs. Dynamic: is the environment static (e.g., chess), or can it change while the agent is reasoning about its plan of action (e.g., driving)?

• Sequential vs. One-shot: does the agent need to reason about the future impact of its immediate actions (e.g., chess), or can it treat each action independently (e.g., Rochambeau)?

• Single agent vs. Multiagent: can we assume the agent operates alone in its environment, or need it explicitly reason about the actions of other agents (e.g., chess, backgammon, Rochambeau, driving)?
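
As a compact summary of the example environments mentioned above, the same dimensions can be written down as a small Python dictionary. This is only an illustrative restatement of the chess, backgammon, and driving examples; the exact field names are invented for the sketch.

# Environment properties for the worked examples above (True/False per dimension).
environments = {
    "chess":      {"deterministic": True,  "discrete": True,  "static": True,  "sequential": True, "multiagent": True},
    "backgammon": {"deterministic": False, "discrete": True,  "static": True,  "sequential": True, "multiagent": True},
    "driving":    {"deterministic": False, "discrete": False, "static": False, "sequential": True, "multiagent": True},
}

print(environments["chess"]["deterministic"])   # -> True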

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors


An agent can be seen as a mapping between percept sequences and actions:

Agent: Percept → Action

The less an agent relies on its built-in knowledge, as opposed to the current percept sequence, the more autonomous it is. A rational agent is an agent whose acts try to maximize some performance measure.

Example Vacuum-cleaner

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
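
A direct, minimal Python rendering of this pseudocode (a sketch; the string names for locations, statuses, and actions simply mirror the pseudocode above):

def reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ("A", "Dirty")."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left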

Intelligent Agents

Agent: an entity in a program or environment capable of generating action. An agent uses perception of the environment to make decisions about actions to take. The perception capability is usually called a sensor. The actions can depend on the most recent perception or on the entire history (percept sequence).

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program. The part of the agent taking an action is called an actuator.

environment -> sensors -> agent function -> actuators -> environment
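
To make this pipeline concrete, here is a small, purely illustrative Python sketch of the sense-decide-act loop. The RoomEnvironment class and the inline agent program are inventions for this example, not something defined in the text.

class RoomEnvironment:
    """Toy environment invented for this sketch: a room whose temperature drifts."""
    def __init__(self):
        self.temperature = 17.0
        self.heater_on = False

    def sense(self):                          # sensors
        return self.temperature

    def act(self, action):                    # actuators
        self.heater_on = (action == "heat")
        self.temperature += 0.5 if self.heater_on else -0.5

def run_agent(agent_program, environment, steps=6):
    """environment -> sensors -> agent function -> actuators -> environment"""
    percepts = []
    for _ in range(steps):
        percept = environment.sense()
        percepts.append(percept)              # accumulate the percept sequence
        action = agent_program(percepts)      # the agent function chooses an action
        environment.act(action)
        print(percept, "->", action)

# A simple agent program that looks only at the most recent percept.
run_agent(lambda percepts: "heat" if percepts[-1] < 20.0 else "off",
          RoomEnvironment())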


Agent Classification

(a) Table-driven agents

– use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.

(b) Simple reflex agents

– are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.

(c) Agents with memory

– have internal state, which is used to keep track of past states of the world.

(d) Agents with goals

– are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.

(e) Utility-based agents

– base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

A rational agent is one that can take the right decision in every situation.

Performance measure: a set of criteria/test bed for the success of the agent's behavior. The performance measures should be based on the desired effect of the agent on the environment.

Rationality

The agent's rational behavior depends on:

– the performance measure that defines success

– the agent's knowledge of the environment

– the actions that it is capable of performing

– the current sequence of perceptions

Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice.

An environment can sometimes be completely known in advance.

Exploration: sometimes an agent must perform an action to gather information (to increase perception).

Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment

Task environment: the problem that the agent is a solution to.

Properties:

– Observable: fully or partially. A fully observable environment needs less representation.

– Deterministic or stochastic. Strategic: deterministic except for the actions of other agents.


Environment

– Episodic or sequential. Sequential: future actions depend on the previous ones. Episodic: individual, unrelated tasks for the agent to solve.

– Static or dynamic.

– Discrete or continuous.

– Single agent or multi-agent. Multiple agents can be competitive or cooperative.

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose. (Smith, Cypher and Spohrer 94)

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy, and in so doing employ some knowledge or representation of the user's goals or desires. (IBM)

Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions. (Hayes-Roth 94)

Agent vs Program

Size: an agent is usually smaller than a program.

Purpose: an agent has a specific purpose, while programs are multi-functional.

Persistence: an agent's life span is not entirely dependent on a user launching and quitting it.

Autonomy: an agent doesn't need the user's input to function.

Simple Agents

Table-driven agents: the function consists in a lookup table of actions to be taken for every possible state of the environment.

If the environment has n variables, each with t possible states, then the table size is t^n. This only works for a small number of possible states for the environment.

Simple reflex agents: deciding on the action to take based only on the current perception, and not on the history of perceptions. Based on the condition-action rule:

(if (condition) action)

This works if the environment is fully observable.

Table-driven and reflex agents, Lisp version:

(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)
    (setq action (lookup percepts table))
    action))

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

Table-driven and reflex agents, Python version:

percepts = []
table = {}

def table_agent(percept):
    percepts.append(percept)
    action = lookup(percepts, table)
    return action

def reflex_agent(percept):
    state = interpret(percept)
    rule = match(state)
    action = decision(rule)
    return action
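
The helpers lookup, interpret, match, and decision above are assumed to be defined elsewhere. One minimal, purely illustrative way to stub out the table-driven Python agent and run it is sketched below; the percept names and table entries are invented for the example.

# Illustrative table mapping whole percept sequences to actions.
table = {
    (("A", "Dirty"),): "Suck",                  # one-percept history
    (("A", "Dirty"), ("A", "Clean")): "Right",  # two-percept history
}

def lookup(percepts, table):
    # Look up the entire percept sequence; default to NoOp if unknown.
    return table.get(tuple(percepts), "NoOp")

percepts = []                         # reset the global percept history
print(table_agent(("A", "Dirty")))    # -> Suck
print(table_agent(("A", "Clean")))    # -> Right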

Model-Based Reflex Agents

If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe.

This usually requires an internal representation of the world (or internal state). Since this representation is a model of the world, we call this a model-based agent.

Model-based reflex agent, Lisp version:

(setq state t)     ; the world model
(setq action nil)  ; latest action

(defun model_reflex_agent (percept)
  (let ((rule t))
    (setq state (update_state state action percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))


Model-based reflex agent, Python version:

state = True    # the world model
action = False  # latest action

def model_reflex_agent(percept):
    global state, action                           # update the stored world model
    state = update_state(state, action, percept)
    rule = match(state)
    action = decision(rule)
    return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve; in others it involves planning, sifting through a search space for possible solutions, and developing a strategy.

Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.

Choose actions so as to achieve a (given or computed) goal. A goal is a description of a desirable situation. Keeping track of the current state is often not enough; we need to add goals to decide which situations are good.

Deliberative instead of reactive: the agent may have to consider long sequences of possible actions before deciding whether the goal is achieved. This involves consideration of the future: "what will happen if I do ...?"
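
A hedged sketch of this idea in Python: a goal-based agent that plans by breadth-first search over a small state graph. The graph, states, and goal below are invented purely for illustration, not taken from the text.

from collections import deque

def plan(start, goal, successors):
    """Breadth-first search: return a list of actions from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None

# Illustrative state graph: locations connected by moves.
graph = {"A": [("right", "B")], "B": [("left", "A"), ("down", "C")], "C": []}
print(plan("A", "C", lambda s: graph[s]))   # -> ['right', 'down']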


Learning Agents

Agents capable of acquiring new competence through observations and actions. Components:

– learning element (modifies the performance element)

– performance element (selects actions)

– feedback element (critic)

– exploration element (problem generator)

Table-driven agents

• Table lookup of percept-action pairs, mapping from every possible perceived state to the optimal action for that state.

• Problems:

– Too big to generate and to store (chess has about 10^120 states, for example)

– No knowledge of non-perceptual parts of the current state

– Not adaptive to changes in the environment; requires the entire table to be updated if changes occur

– Looping: can't make actions conditional on previous actions/states

Simple reflex agents


• Rule-based reasoning to map from percepts to optimal action; each rule handles a collection of perceived states.

• Problems:

– Still usually too big to generate and to store

– Still no knowledge of non-perceptual parts of state

– Still not adaptive to changes in the environment; requires the collection of rules to be updated if changes occur

– Still can't make actions conditional on previous state

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?

• A goal specifies a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes "degree of happiness".

• Utility function U: State → Reals, indicating a measure of success or happiness when at a given state.

• Allows decisions comparing choice between conflicting goals, and choice between likelihood of success and importance of goal (if achievement is uncertain).
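
A minimal sketch in Python of the utility-based idea: score the state each action would lead to with a utility function and pick the best. The utility values, actions, and transition model below are invented for illustration only.

def utility(state):
    # Toy utility: prefer states that are cleaner and closer to "home".
    cleanliness, distance_from_home = state
    return cleanliness - 0.1 * distance_from_home

def result(state, action):
    # Toy transition model: what state would this action lead to?
    cleanliness, distance = state
    if action == "clean":
        return (cleanliness + 1, distance)
    elif action == "go_home":
        return (cleanliness, max(0, distance - 1))
    return state

def utility_based_agent(state, actions=("clean", "go_home", "wait")):
    # Choose the action whose predicted resulting state has the highest utility.
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_agent((0, 3)))   # -> 'clean'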


Agents with memory

• Encode the "internal state" of the world to remember the past, as contained in earlier percepts.

• Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept.

• Requires the ability to represent change in the world; one possibility is to represent just the latest state, but then the agent can't reason about hypothetical courses of action.


Other Types of Agents

Temporally continuous: a continuously running process.

Communicative agent: exchanging information with other agents to complete its task.

Mobile agent: capable of moving from one machine to another (or from one environment to another).

Flexible agent: an agent whose actions are not scripted.

Character: an agent with conversation skills, personality, and even emotional state.



Every atomic sentence is a sentence

1 If 5 is a sentence then so is its negation --- 5

2 If s and 52 are sentences then so is their conjunction 5 j 1 52

3 If 81 and 82 are sentences then so is their disjunction s v 52

4 If 51 and Sz are sentences then so is their implication 51 ~ 82

LISP

Overview

bull Lisp stands for ldquoLISt Processrdquo

ndash Invented by John McCarthy (1958)

ndash Simple data structure (atoms and lists)

ndash Heavy use of recursion

ndash Prefix notation of expressions

ndash Interpretive language

bull Why Lisp

ndash It is the most widely used AI programming language

16

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 7: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

(h) MathematicsLogic algorithm optimization

AI Prehistory

Philosophy logic methods of reasoning mind as physical system foundations of learning

language rationality

Mathematics formal representation and proof algorithms computation (un)decidability

(in)tractability probability

Psychology adaptation phenomena of perception and motor control experimental

techniques (psychophysics etc)

Economics formal theory of rational decisions

Linguistics knowledge representation grammar

Neuroscience plastic physical substrate for mental activity

Control theory homeostatic systems stability simple optimal agent designs

History of AI

bull The birth of AI (1943 ndash 1956)

ndash Pitts and McCulloch (1943) simplified mathematical model of neurons

(restingfiring states) can realize all propositional logic primitives (can

compute all Turing computable functions)

ndash Allen Turing Turing machine and Turing test (1950)

ndash Claude Shannon information theory early game theory possibility of

chess playing computers

ndash Tracing back to Boole Aristotle Euclid (logics syllogisms algebra of

symbols)

bull Early enthusiasm (1952 ndash 1969)

ndash 1956 Dartmouth conference

John McCarthy (Lisp)

7

Artificial Intelligence Chapter1Anurag Dixit

Marvin Minsky (first neural network machine)

Alan Newell and Herbert Simon (GPS)

ndash Emphasize on intelligent general problem solving

Heuristics of human problem solving (means-ends analysis in GPS )

Resolution by John Robinson (basis for automatic theorem proving)

heuristic search (A AO game tree search)

bull Emphasis on knowledge (1966 ndash 1974)

ndash domain specific knowledge is the key to overcome existing difficulties

ndash knowledge representation (KR) paradigms

ndash declarative vs procedural representation

bull Knowledge-based systems (1969 ndash 1979)

ndash DENDRAL the first knowledge intensive system (determining 3D

structures of complex chemical compounds)

ndash MYCIN first rule-based expert system (containing 450 rules for

diagnosing blood infectious diseases)

EMYCIN an ES shell

ndash PROSPECTOR first knowledge-based system that made significant profit

(geological ES for mineral deposits)

ndash AI became an industry (1980 ndash 1989)

ndash wide applications in various domains

ndash commercially available tools

ndash Current trends (1990 ndash present)

ndash more realistic goals

ndash more practical (application oriented)

8

Artificial Intelligence Chapter1Anurag Dixit

ndash resurgence of neural networks and emergence of genetic algorithms

ndash distributed AI intelligent agents and semantic web

Possible Approaches

Planning and Problem Solving

In the AI literature planning refers to determining a sequence of actions you know how

to perform that will achieve a particular objective Problem solving is finding a plan for a

task in an abstract domain A problem is hard if you do not know how to work out an

appropriate sequence of steps and is solved once such a sequence has been found actual

execution is irrelevant

Unlike many areas of AI planning shows a clear line of researchers building on each

others work

Think

Act

Like humans Well

GPS

Turing test Eliza

Rational

agents

Heuristic

systems

AI tends to work mostly in this area

9

Artificial Intelligence Chapter1Anurag Dixit

I SOME BASIC IDEAS

To avoid having to cope with the complexities of the physical world much early work in

AI was directed toward abstract activities such as proving theorems playing games like

chess and checkers or solving puzzles To illustrate the discussion in this section we will

use the Tower of Hanoi (Hanoi) a puzzle involving three pegs on which disks can be

placed and a set of disks of varying size

The disks only can be moved one at a time between the pegs and a disk must

never be stacked on top of a smaller one The problem is to transport the entire stack of

disks to another peg Reasoning about such a problem obviously requires representing

states of the world and having some way of specifying the objective or goal These

representations must be rich enough to embody all the aspects of the world that will be

reasoned about In particular since planning is about changing things every property that

might be affected must be represented as dependent on time in some way For Hanoi this

requires only an ability to represent sets of disks positions as the disks are initially as

they are required to be eventually and as they may be in between A planner also needs

to represent what can be donemdashsuch as the moves that can be made as determined by the

nature of the game or puzzle There is a fundamental difference between an agent

executing an action and thus affecting the world and a planning system manipulating

representations to derive information about doing so which we call applying an operator

10

Artificial Intelligence Chapter1Anurag Dixit

Introduction to Predicate calculus

The Propositional Calculus

Symbols and Sentences

The propositional calculus and in the next subsection the predicate calculus are first of

all languages Using their words phrases and sentences we can represent and reason

about properties and relationships in the world The first step in describing a language is

to introduce the pieces that make it up its set of symbols

PROPOSITIONAL CALCULUS SYMBOLS

The symbols of propositional calculus are the propositional symbols

P Q R S

truth symbolstrue false

and connectives V - ~ =

Propositional symbols denote propositions or statements about the world that may be

either true or false such as the car is red or water is wet Propositions are denoted by

uppercase letters near the end of the English alphabet Sentences in the propositional

calculus are fanned from these atomic symbols according to the following rules

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence

For example true P Q and R are sentences

The negation of a sentence is a sentence

For example -- P and - false are sentences

The conjunction or and of two sentences is a sentence

For example P - P is a sentence

The disjunction or or of two sentences is a sentence

For example P v -- P is a sentence

The implication of one sentence from another is a sentence

11

Artificial Intelligence Chapter1Anurag Dixit

For example P -7 Q is a sentence

The equivalence of two sentences is a sentence

For example P v Q = R is a sentence

Legal sentences are also called well-formed formulas or WFFs

In expressions of the form P A Q P and Q are called the conjuncts In P v Q P and Q are

referred to as disjuncts In an implication P -7 Q P is the premise or antecedent and Q

the conclusion or consequent

In propositional calculus sentences the symbols ( ) and [ ] are used to group symbols nto

sub expressions and so to control their order of evaluation and meaning

The Semantics of the Propositional Calculus

In this section we formally define the semantics or meaning of these sentences

Because AI programs must reason with their representational structures it is important to

demonstrate that the truth of their conclusions depends only on the truth of their initial

knowledge ie that logical errors are not introduced by the inference procedures A

precise treatment of semantics is essential to this goal

A proposition symbol corresponds to a statement about the world For example P may

denote the statement it is raining or Q the statement I live in a brown house A

proposition may be either true or false given some state of the world The truth value

assignment to propositional sentences is called an interpretation an assertion about their

truth in some possible world

Formally an interpretation is a mapping from the propositional symbols into the set

T F

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value either T

12

Artificial Intelligence Chapter1Anurag Dixit

or F to each propositional symbol The symbol true is always assigned T and the symbol

false is assigned F

Predicate Calculus

In propositional calculus each atomic symbol (P 0 etc) denotes a proposition of some

complexity There is no way to access the components of an individual assertion

Predicate calculus provides this ability For example instead of letting a single

propositional symbol

P denote the entire sentence it rained on Tuesday we can create a predicate weather

that describes a relationship between a date and the weather weather (tuesday rain)

Through inference rules we can manipulate predicate calculus expressions accessing

their individual components and inferring new sentences

Predicate calculus also allows expressions to contain variables Variables let us create

general assertions about classes of entities For example we could state that for all values

of X where Xis a day of the week the statement weather(X rain) is true ie it rains

every day As with propositional calculus we will first define the syntax of the language

and then discuss its semantics

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus, we define an
alphabet and grammar for creating the symbols of the language. This corresponds to the
lexical aspect of a programming language definition. Predicate calculus symbols, like the
tokens in a programming language, are irreducible syntactic elements: they cannot be
broken into their component parts by the operations of the language.

In this text we represent predicate calculus symbols as strings of letters and digits
beginning with a letter. Blanks and non-alphanumeric characters cannot appear within the
string, although the underscore (_) may be used to improve readability.

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of


1. The set of letters, both upper- and lowercase, of the English alphabet.

2. The set of digits, 0, 1, ..., 9.

3. The underscore, _.

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include

a   R   6   9   p   _   z

Examples of characters not in the alphabet include & and any other character that is not a
letter, a digit, or the underscore (for example, # and %).

Parentheses ( ), commas, and periods are used solely to construct well-formed
expressions and do not denote objects or relations in the world; these are called improper
symbols.

Predicate calculus symbols may represent either variables, constants, functions, or
predicates. Constants name specific objects or properties in the world. Constant symbols
must begin with a lowercase letter. Predicate calculus also allows functions on objects in
the world of discourse. Function symbols (like constants) begin with a lowercase letter.
Functions denote a mapping of one or more elements in a set (called the domain of the
function) into a unique element of another set (the range of the function). Elements of the
domain and range are objects in the world of discourse. In addition to common arithmetic
functions such as addition and multiplication, functions may define mappings between
nonnumeric domains.

SYMBOLS and TERMS

Predicate calculus symbols include

1. Truth symbols true and false (these are reserved symbols).

2. Constant symbols are symbol expressions having the first character lowercase.

3. Variable symbols are symbol expressions beginning with an uppercase character.


4. Function symbols are symbol expressions having the first character lowercase.
Functions have an attached arity indicating the number of elements of the domain
mapped onto each element of the range.

A function expression consists of a function constant of arity n, followed by n terms,
t1, t2, ..., tn, enclosed in parentheses and separated by commas.

A predicate calculus term is either a constant, a variable, or a function expression. Thus a
predicate calculus term may be used to denote objects and properties in a problem
domain. Examples of terms are:

cat     times(2, 3)     X     blue     mother(jane)     kate

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter.

Predicates have an associated positive integer referred to as the arity, or argument
number, for the predicate. Predicates with the same name but different arities are
considered distinct.

An atomic sentence is a predicate constant of arity n, followed by n terms,
t1, t2, ..., tn, enclosed in parentheses and separated by commas.

The truth values true and false are also atomic sentences.

Atomic sentences are also called atomic expressions, atoms, or propositions.

We may combine atomic sentences using logical operators to form sentences in the
predicate calculus. These are the same logical connectives used in propositional calculus:

∧, ∨, ¬, →, and ≡

When a variable appears as an argument in a sentence, it refers to unspecified objects
in the domain. First-order predicate calculus includes two symbols, the variable
quantifiers ∀ and ∃, that constrain the meaning of a sentence containing a variable. A
quantifier is followed by a variable and a sentence, such as

∃ Y friends(Y, peter)


∀ X likes(X, ice_cream)

The universal quantifier ∀ indicates that the sentence is true for all values of the
variable. In the example, ∀ X likes(X, ice_cream) is true for all values in the domain of
the definition of X. The existential quantifier ∃ indicates that the sentence is true for at
least one value in the domain. ∃ Y friends(Y, peter) is true if there is at least one object,
indicated by Y, that is a friend of peter.

Sentences in the predicate calculus are defined inductively.

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence.

1. If s1 is a sentence, then so is its negation, ¬s1.

2. If s1 and s2 are sentences, then so is their conjunction, s1 ∧ s2.

3. If s1 and s2 are sentences, then so is their disjunction, s1 ∨ s2.

4. If s1 and s2 are sentences, then so is their implication, s1 → s2.
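As a worked illustration (not in the original notes), these rules and the quantifiers above
let the earlier example predicates be combined into larger sentences; the constant snow is
invented here purely for the example:

¬weather(tuesday, rain)                            (rule 1: negation of an atomic sentence)
weather(tuesday, rain) ∨ weather(tuesday, snow)    (rule 3: disjunction)
∃ Y (friends(Y, peter) ∧ likes(Y, ice_cream))      (rule 2 plus the existential quantifier)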

LISP

Overview

• Lisp stands for "LISt Processing"
  – Invented by John McCarthy (1958)
  – Simple data structure (atoms and lists)
  – Heavy use of recursion
  – Prefix notation of expressions
  – Interpretive language

• Why Lisp?
  – It is the most widely used AI programming language


  – It is good for writing production software
  – It is especially good for prototyping
  – It has got lots of features other languages don't
  – You can write new programs and extend old programs really, really quickly in Lisp

Table of Contents

Symbols; Numbers; Conses; Lists; Functions; Printing; Forms and the Top-level Loop;
Special Forms; Binding; Dynamic Scoping; Arrays; Strings; Structures; Setf; Booleans and
Conditionals; Iteration; Non-local Exits; Funcall, Apply and Mapcar; Lambda; Sorting;
Equality; Some Useful List Functions; Getting Started with Emacs; Further Information

Symbols


A symbol is just a string of characters. There are restrictions on what you can include in a
symbol and what the first character can be, but as long as you stick to letters, digits, and
hyphens, you'll be safe. (Except that if you use only digits and possibly an initial hyphen,
LISP will think you typed an integer rather than a symbol.) Some examples of symbols:
a, b, c1, foo, bar, baaz-quux-garply.

Some things you can do with symbols follow. (Things after a > prompt are what you type
to the LISP interpreter, while other things are what the LISP interpreter prints back to
you. The ; is LISP's comment character: everything from a ; to the end of the line is
ignored.)

> (setq a 5)        ; store a number as the value of a symbol
5
> a                 ; take the value of a symbol
5
> (let ((a 6)) a)   ; bind the value of a symbol temporarily to 6
6
> a                 ; the value returns to 5 once the let is finished
5
> (+ a 6)           ; use the value of a symbol as an argument to a function
11
> b                 ; try to take the value of a symbol which has no value
Error: Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols, t and nil. The value of t is defined always to be t, and the
value of nil is defined always to be nil. LISP uses t and nil to represent true and false. An
example of this use is in the if statement, described more fully later:

> (if t 5 6)
5
> (if nil 5 6)
6
> (if 4 5 6)
5

Keywords

The last example is odd but correct: nil means false and anything else means true. (Unless
we have a reason to do otherwise, we use t to mean true, just for the sake of clarity.)
Symbols like t and nil are called self-evaluating symbols, because they evaluate to
themselves. There is a whole class of self-evaluating symbols called keywords; any symbol
whose name starts with a colon is a keyword. (See below for some uses for keywords.)
Some examples:

> :this-is-a-keyword
:THIS-IS-A-KEYWORD
> :so-is-this
:SO-IS-THIS
> :me-too
:ME-TOO

Numbers

An integer is a string of digits, optionally preceded by + or -. A real number looks like an
integer, except that it has a decimal point and optionally can be written in scientific
notation. A rational looks like two integers with a / between them. LISP supports complex
numbers, which are written #c(r i) (where r is the real part and i is the imaginary part). A
number is any of the above. Here are some numbers:

5   17   -34   +6   3.1415   1.722e-15   #c(1.722e-15 0.75)

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos,
tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument.
+, -, *, and / return a number according to type contagion: an integer plus a rational is a
rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are
some examples:

> (+ 3 3/4)              ; type contagion
15/4
> (exp 1)                ; e
2.7182817
> (exp 3)                ; e*e*e
20.085537
> (expt 3 4.2)           ; exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))   ; the fns +, -, *, / all accept multiple arguments
738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either
perception, reasoning, or actuation.

• Perception
  – computer vision
  – natural language processing

• Reasoning (i.e., problem solving): mapping from percepts to actuators
  – automated reasoning
  – knowledge representation
  – search and optimization
  – decision/game theory
  – machine learning

• Actuation
  – robotics
  – softbotics

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities. A related
(but not a necessary) goal is to understand intelligent entities, and perhaps even to
understand and engineer human intelligence (strong AI).

Modern AI can be characterized as the engineering of rational agents. An agent is an
entity that (i) perceives, (ii) reasons, and (iii) acts. In computational terms, that which is
perceived is an input; to reason is to compute; to act is to output the result of
computation. Typically an agent is equipped with objectives. A rational agent is one that
acts optimally with respect to its objectives. Agents are often distinguished from typical


computational processes by their autonomy: they operate without direct human
intervention. In addition, agents are reactive (they perceive their environments and
attempt to respond in a timely manner to changing conditions) and proactive (their
behavior is goal-directed rather than simply response-driven).

Autonomous agents may be rule-based, goal-based, or utility-based. Rule-based agents
operate according to hard-coded sets of rules, like ELIZA. A goal-based agent acts so as
to achieve its goals by planning a path from its current state to a goal state, like GPS or
theorem provers. Utility-based agents distinguish between goals based on utilities that
are associated with goal states.

Agent       Sensors        Actuators
Human       Senses         Arms, Legs
Robotic     Cameras        Motors, Wheels
Software    Bit Strings    Bit Strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows:

• Deterministic vs. Nondeterministic: is the next state predictable (e.g., chess), or is there
uncertainty about state transitions (e.g., backgammon)?

• Discrete vs. Continuous: can the environment be described in discrete terms (e.g., chess),
or is the environment continuous (e.g., driving)?

• Static vs. Dynamic: is the environment static (e.g., chess), or can it change while the
agent is reasoning about its plan of action (e.g., driving)?

• Sequential vs. One-shot: does the agent need to reason about the future impact of its
immediate actions (e.g., chess), or can it treat each action independently (e.g.,
Rochambeau)?

• Single agent vs. Multiagent: can we assume the agent is operating alone in its
environment, or need it explicitly reason about the actions of other agents (e.g., chess,
backgammon, Rochambeau, driving)?

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors


An agent can be seen as a mapping between percept sequences and actions:

    Agent: Percept → Action

The less an agent relies on its built-in knowledge, as opposed to the current percept
sequence, the more autonomous it is. A rational agent is an agent whose acts try to
maximize some performance measure.

Example: Vacuum-cleaner

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
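The pseudocode above can be transcribed directly into Python; the sketch below is only an
illustration (the small driver loop and the list of percepts are invented here to exercise it):

def reflex_vacuum_agent(percept):
    """Reflex vacuum agent: percept is a (location, status) pair."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'

for percept in [('A', 'Dirty'), ('A', 'Clean'), ('B', 'Dirty'), ('B', 'Clean')]:
    print(percept, '->', reflex_vacuum_agent(percept))
# ('A', 'Dirty') -> Suck, ('A', 'Clean') -> Right, ('B', 'Dirty') -> Suck, ('B', 'Clean') -> Left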

Intelligent Agents

• Agent: an entity in a program or environment capable of generating action.
• An agent uses perception of the environment to make decisions about the actions to take.
• The perception capability is usually called a sensor.
• The actions can depend on the most recent perception or on the entire history
  (the percept sequence).

Agent Function

• The agent function is a mathematical function that maps a sequence of perceptions into
  an action.
• The function is implemented as the agent program.
• The part of the agent taking an action is called an actuator.

  environment -> sensors -> agent function -> actuators -> environment
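As a rough sketch of this pipeline (not from the notes; every name below is a placeholder),
the loop can be written as:

def run(environment, sense, agent_function, actuate, steps=10):
    """Drive the environment -> sensors -> agent function -> actuators loop."""
    percepts = []                             # the percept sequence (history)
    for _ in range(steps):
        percepts.append(sense(environment))   # sensors read the environment
        action = agent_function(percepts)     # agent function: percept sequence -> action
        actuate(environment, action)          # actuators change the environment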


Agent Classification

(a) Table-driven agents

  – use a percept sequence/action table in memory to find the next action. They are
    implemented by a (large) lookup table.

(b) Simple reflex agents

  – are based on condition-action rules, implemented with an appropriate production
    system. They are stateless devices which do not have memory of past world states.

(c) Agents with memory

  – have internal state, which is used to keep track of past states of the world.

(d) Agents with goals

  – are agents that, in addition to state information, have goal information that describes
    desirable situations. Agents of this kind take future events into consideration.

(e) Utility-based agents


  – base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

• A rational agent is one that can take the right decision in every situation.
• Performance measure: a set of criteria / a test bed for the success of the agent's behavior.
• The performance measures should be based on the desired effect of the agent on the
  environment.

Rationality

The agent's rational behavior depends on:
• the performance measure that defines success
• the agent's knowledge of the environment
• the actions that it is capable of performing
• the current sequence of perceptions

Definition: for every possible percept sequence, the agent is expected to take an action
that will maximize its performance measure.

Agent Autonomy

• An agent is omniscient if it knows the actual outcome of its actions. This is not possible
  in practice.
• An environment can sometimes be completely known in advance.
• Exploration: sometimes an agent must perform an action to gather information (to
  increase perception).
• Autonomy: the capacity to compensate for partial or incorrect prior knowledge
  (usually by learning).

Environment

• Task environment: the problem that the agent is a solution to.
• Properties:
  – Observable: fully or partially. A fully observable environment needs less representation.
  – Deterministic or stochastic. Strategic: deterministic except for the actions of other
    agents.
  – Episodic or sequential. Sequential: future actions depend on the previous ones.
    Episodic: individual, unrelated tasks for the agent to solve.
  – Static or dynamic.
  – Discrete or continuous.
  – Single agent or multi-agent. Multiple agents can be competitive or cooperative.

More Definitions of Agents

• "An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher
  and Spohrer, 94)

• "Intelligent agents are software entities that carry out some set of operations on behalf of
  a user or another program, with some degree of independence or autonomy, and in so
  doing, employ some knowledge or representation of the user's goals or desires." (IBM)

• "Intelligent agents continuously perform three functions: perception of dynamic
  conditions in the environment; action to affect conditions in the environment; and
  reasoning to interpret perceptions, solve problems, draw inferences, and determine
  actions." (Hayes-Roth, 94)

Agent vs Program

• Size: an agent is usually smaller than a program.
• Purpose: an agent has a specific purpose, while programs are multi-functional.
• Persistence: an agent's life span is not entirely dependent on a user launching and
  quitting it.
• Autonomy: an agent doesn't need the user's input to function.

Simple Agents

• Table-driven agents: the function consists of a lookup table of actions to be taken for
  every possible state of the environment.
• If the environment has n variables, each with t possible states, then the table size is t^n
  (see the quick calculation after this list).
• This only works for a small number of possible states of the environment.
• Simple reflex agents: decide on the action to take based only on the current perception,
  and not on the history of perceptions. They are based on the condition-action rule:
  (if (condition) action)
• This works if the environment is fully observable.
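For a quick sense of scale (the numbers below are chosen here purely for illustration, not
taken from the notes):

t = 2 states,  n = 20 variables  ->  t^n = 2^20 = 1,048,576 table entries
t = 10 states, n = 20 variables  ->  t^n = 10^20 table entries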

Table-driven and simple reflex agents, Lisp version:

(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)                 ; add the percept to the history
    (setq action (lookup percepts table))   ; look the percept sequence up in the table
    action))

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))        ; interpret the current percept as a state
    (setq rule (match state))               ; match the state against the condition-action rules
    (setq action (decision rule))           ; the matched rule decides the action
    action))

Table-driven and simple reflex agents, Python version:

percepts = []
table = {}

def table_agent(percept):
    percepts.append(percept)           # add the percept to the history
    action = lookup(percepts, table)   # look the percept sequence up in the table
    return action

def reflex_agent(percept):
    state = interpret(percept)         # interpret the current percept as a state
    rule = match(state)                # match the state against the condition-action rules
    action = decision(rule)            # the matched rule decides the action
    return action

Model-Based Reflex Agents

• If the world is not fully observable, the agent must remember observations about the
  parts of the environment it cannot currently observe.
• This usually requires an internal representation of the world (an internal state). Since
  this representation is a model of the world, we call this a model-based agent.

Model-based reflex agent, Lisp version:

(setq state t)      ; the world model
(setq action nil)   ; latest action

(defun model_reflex_agent (percept)
  (let ((rule t))
    (setq state (update_state state action percept))   ; update the world model
    (setq rule (match state))                          ; match the new state against the rules
    (setq action (decision rule))                      ; the matched rule decides the action
    action))


Model-based reflex agent, Python version:

state = True     # the world model
action = False   # latest action

def model_reflex_agent(percept):
    global state, action                            # keep the model between calls
    state = update_state(state, action, percept)    # update the world model
    rule = match(state)                             # match the new state against the rules
    action = decision(rule)                         # the matched rule decides the action
    return action

Goal-Driven Agents

• The agent has a purpose, and the action to be taken depends on the current state and on
  what it tries to accomplish (the goal).
• In some cases the goal is easy to achieve; in others it involves planning, sifting through a
  search space of possible solutions, and developing a strategy.
• Utility-based agents: the agent is aware of a utility function that estimates how close the
  current state is to the agent's goal.
• Choose actions so as to achieve a (given or computed) goal. A goal is a description of a
  desirable situation.
• Keeping track of the current state is often not enough - we need to add goals to decide
  which situations are good.
• Deliberative instead of reactive.
• May have to consider long sequences of possible actions before deciding whether the goal
  is achieved - this involves consideration of the future ("what will happen if I do ...?").


Learning Agents

• Agents capable of acquiring new competence through observations and actions.
• Components:
  – learning element (modifies the performance element)
  – performance element (selects actions)
  – feedback element (critic)
  – exploration element (problem generator)
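A minimal Python skeleton (an assumption of this rewrite, not code from the notes) showing
one way the four components could be wired together; all names are placeholders:

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects actions
        self.learning_element = learning_element         # modifies the performance element
        self.critic = critic                             # feedback on the agent's behavior
        self.problem_generator = problem_generator       # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                  # critic evaluates behavior so far
        self.performance_element = self.learning_element(
            self.performance_element, feedback)          # learning improves action selection
        exploratory = self.problem_generator(percept)    # possibly propose an experiment
        if exploratory is not None:
            return exploratory
        return self.performance_element(percept)         # otherwise act as usual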

Table-driven agents

• Table lookup of percept-action pairs, mapping from every possible perceived state to the
  optimal action for that state.

• Problems:
  – Too big to generate and to store (chess has about 10^120 states, for example).
  – No knowledge of non-perceptual parts of the current state.
  – Not adaptive to changes in the environment; requires the entire table to be updated if
    changes occur.
  – Looping: can't make actions conditional on previous actions/states.

Simple reflex agents


• Rule-based reasoning to map from percepts to the optimal action; each rule handles a
  collection of perceived states.

• Problems:
  – Still usually too big to generate and to store.
  – Still no knowledge of non-perceptual parts of state.
  – Still not adaptive to changes in the environment; requires the collection of rules to be
    updated if changes occur.
  – Still can't make actions conditional on previous state.

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?

• A goal specifies a crude distinction between a happy and an unhappy state, but we often
  need a more general performance measure that describes a "degree of happiness".

• Utility function U: State → Reals, indicating a measure of success or happiness when at a
  given state.

• Allows decisions comparing the choice between conflicting goals, and the choice between
  likelihood of success and importance of a goal (if achievement is uncertain).
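As a rough illustration (not from the notes), utility-based selection can be sketched as
picking the action whose predicted successor state has the highest utility; all names below
are placeholders:

def choose_action(state, actions, predict_successor, utility):
    """Pick the action a that maximizes U(predict_successor(state, a))."""
    return max(actions(state), key=lambda a: utility(predict_successor(state, a)))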


Agents with memory

• Encode the "internal state" of the world to remember the past, as contained in earlier
  percepts.

• Needed because sensors do not usually give the entire state of the world at each input, so
  perception of the environment is captured over time. "State" is used to encode different
  world states that generate the same immediate percept.

• Requires the ability to represent change in the world; one possibility is to represent just
  the latest state, but then the agent can't reason about hypothetical courses of action.


Other Types of Agents

• Temporally continuous: a continuously running process.
• Communicative agent: exchanges information with other agents to complete its task.
• Mobile agent: capable of moving from one machine to another (or from one environment
  to another).
• Flexible agent: one whose actions are not scripted.
• Character: an agent with conversation skills, personality, and even emotional state.


either true or false such as the car is red or water is wet Propositions are denoted by

uppercase letters near the end of the English alphabet Sentences in the propositional

calculus are fanned from these atomic symbols according to the following rules

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence

For example true P Q and R are sentences

The negation of a sentence is a sentence

For example -- P and - false are sentences

The conjunction or and of two sentences is a sentence

For example P - P is a sentence

The disjunction or or of two sentences is a sentence

For example P v -- P is a sentence

The implication of one sentence from another is a sentence

11

Artificial Intelligence Chapter1Anurag Dixit

For example P -7 Q is a sentence

The equivalence of two sentences is a sentence

For example P v Q = R is a sentence

Legal sentences are also called well-formed formulas or WFFs

In expressions of the form P A Q P and Q are called the conjuncts In P v Q P and Q are

referred to as disjuncts In an implication P -7 Q P is the premise or antecedent and Q

the conclusion or consequent

In propositional calculus sentences the symbols ( ) and [ ] are used to group symbols nto

sub expressions and so to control their order of evaluation and meaning

The Semantics of the Propositional Calculus

In this section we formally define the semantics or meaning of these sentences

Because AI programs must reason with their representational structures it is important to

demonstrate that the truth of their conclusions depends only on the truth of their initial

knowledge ie that logical errors are not introduced by the inference procedures A

precise treatment of semantics is essential to this goal

A proposition symbol corresponds to a statement about the world For example P may

denote the statement it is raining or Q the statement I live in a brown house A

proposition may be either true or false given some state of the world The truth value

assignment to propositional sentences is called an interpretation an assertion about their

truth in some possible world

Formally an interpretation is a mapping from the propositional symbols into the set

T F

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value either T

12

Artificial Intelligence Chapter1Anurag Dixit

or F to each propositional symbol The symbol true is always assigned T and the symbol

false is assigned F

Predicate Calculus

In propositional calculus each atomic symbol (P 0 etc) denotes a proposition of some

complexity There is no way to access the components of an individual assertion

Predicate calculus provides this ability For example instead of letting a single

propositional symbol

P denote the entire sentence it rained on Tuesday we can create a predicate weather

that describes a relationship between a date and the weather weather (tuesday rain)

Through inference rules we can manipulate predicate calculus expressions accessing

their individual components and inferring new sentences

Predicate calculus also allows expressions to contain variables Variables let us create

general assertions about classes of entities For example we could state that for all values

of X where Xis a day of the week the statement weather(X rain) is true ie it rains

every day As with propositional calculus we will first define the syntax of the language

and then discuss its semantics

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus we define an

alphabet and grammar for creating the symbols of the language This corresponds to the

lexical aspect of a programming language definition Predicate calculus symbols like the

tokens in a programming language are irreducible syntactic elements they cannot be

broken into their component parts by the operations of the language

In this text we represent predicate calculus symbols as strings of letters and digits

beginning with a letter Blanks and non alphanumeric characters cannot appear within the

string although the underscore _ may be used to improve readability

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of

13

Artificial Intelligence Chapter1Anurag Dixit

1 The set of letters both upper- and lowercase of the English alphabet

2 The set of digits 0 1 9

3 The underscore _

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include

aR69p_z

Examples of characters not in the alphabet include

amp

Parentheses ( ) commas and periods are used solely to construct well-formed

expressions and do not denote objects or relations in the world These are called improper

symbols

Predicate calculus symbols may represent either variables constants functions or

predicates Constants name specific objects or properties in the world Constant symbols

must begin with a lowercase letter Predicate calculus also allows functions on objects in

the world of discourse Function symbols (like constants) begin with a lowercase letter

Functions denote a mapping of one or more elements in a set (called the domain of the

function) into a unique element of another set (the range of the function) Elements of the

domain and range are objects in k world of discourse In addition to common arithmetic

functions such as addition and multiplication functions may define mappings between

nonnumeric domains

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character

14

Artificial Intelligence Chapter1Anurag Dixit

4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of atiry n followed by n terms

t1 t2 helliphellip tn enclosed in parentheses and separated by commas

A predicate calculus term is either a constant variable or function expression Thus a

predicate calculus term may be used to denote objects and properties in a problem

domain Examples of terms are

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n followed by n terms

t 1 t enclosed in parentheses and separated by commas

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the

predicate calculus These are the same logical connectives used in propositional calculus

A v - ---7 and =

When a variable appears as an argument in a sentence it refers to unspecified objects

in the domain First order (Section 222) predicate calculus includes two symbols the

variable quantifiers V and 3 that constrain the meaning of a sentence containing a

variable A quantifier is followed by a variable and a sentence such as

3 Y friends(Y peter)

15

Artificial Intelligence Chapter1Anurag Dixit

V X likes(X ice_cream)

The universal quantifier v indicates that the sentence is true for all values of the

variable

In the example V X likes(X ice_cream) is true for all values in the domain of the

definition of X The existential quantifier 3 indicates that the sentence is true for at least

one value in the domain 3 Y friends(Y peter) is true if there is at least one object

indicated by Y that is a friend of peter

Sentences in the predicate calculus are defined inductivefy

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence

1 If 5 is a sentence then so is its negation --- 5

2 If s and 52 are sentences then so is their conjunction 5 j 1 52

3 If 81 and 82 are sentences then so is their disjunction s v 52

4 If 51 and Sz are sentences then so is their implication 51 ~ 82

LISP

Overview

bull Lisp stands for ldquoLISt Processrdquo

ndash Invented by John McCarthy (1958)

ndash Simple data structure (atoms and lists)

ndash Heavy use of recursion

ndash Prefix notation of expressions

ndash Interpretive language

bull Why Lisp

ndash It is the most widely used AI programming language

16

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 9: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

ndash resurgence of neural networks and emergence of genetic algorithms

ndash distributed AI intelligent agents and semantic web

Possible Approaches

Planning and Problem Solving

In the AI literature planning refers to determining a sequence of actions you know how

to perform that will achieve a particular objective Problem solving is finding a plan for a

task in an abstract domain A problem is hard if you do not know how to work out an

appropriate sequence of steps and is solved once such a sequence has been found actual

execution is irrelevant

Unlike many areas of AI planning shows a clear line of researchers building on each

others work

Think

Act

Like humans Well

GPS

Turing test Eliza

Rational

agents

Heuristic

systems

AI tends to work mostly in this area

9

Artificial Intelligence Chapter1Anurag Dixit

I SOME BASIC IDEAS

To avoid having to cope with the complexities of the physical world much early work in

AI was directed toward abstract activities such as proving theorems playing games like

chess and checkers or solving puzzles To illustrate the discussion in this section we will

use the Tower of Hanoi (Hanoi) a puzzle involving three pegs on which disks can be

placed and a set of disks of varying size

The disks only can be moved one at a time between the pegs and a disk must

never be stacked on top of a smaller one The problem is to transport the entire stack of

disks to another peg Reasoning about such a problem obviously requires representing

states of the world and having some way of specifying the objective or goal These

representations must be rich enough to embody all the aspects of the world that will be

reasoned about In particular since planning is about changing things every property that

might be affected must be represented as dependent on time in some way For Hanoi this

requires only an ability to represent sets of disks positions as the disks are initially as

they are required to be eventually and as they may be in between A planner also needs

to represent what can be donemdashsuch as the moves that can be made as determined by the

nature of the game or puzzle There is a fundamental difference between an agent

executing an action and thus affecting the world and a planning system manipulating

representations to derive information about doing so which we call applying an operator

10

Artificial Intelligence Chapter1Anurag Dixit

Introduction to Predicate calculus

The Propositional Calculus

Symbols and Sentences

The propositional calculus and in the next subsection the predicate calculus are first of

all languages Using their words phrases and sentences we can represent and reason

about properties and relationships in the world The first step in describing a language is

to introduce the pieces that make it up its set of symbols

PROPOSITIONAL CALCULUS SYMBOLS

The symbols of propositional calculus are the propositional symbols

P Q R S

truth symbolstrue false

and connectives V - ~ =

Propositional symbols denote propositions, or statements about the world that may be either true or false, such as "the car is red" or "water is wet." Propositions are denoted by uppercase letters near the end of the English alphabet. Sentences in the propositional calculus are formed from these atomic symbols according to the following rules.

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence.

For example: true, P, Q, and R are sentences.

The negation of a sentence is a sentence.

For example: ¬P and ¬false are sentences.

The conjunction, or and, of two sentences is a sentence.

For example: P ∧ ¬P is a sentence.

The disjunction, or or, of two sentences is a sentence.

For example: P ∨ ¬P is a sentence.

The implication of one sentence from another is a sentence.


For example: P → Q is a sentence.

The equivalence of two sentences is a sentence.

For example: P ∨ Q = R is a sentence.

Legal sentences are also called well-formed formulas or WFFs

In expressions of the form P ∧ Q, P and Q are called the conjuncts. In P ∨ Q, P and Q are referred to as disjuncts. In an implication P → Q, P is the premise or antecedent and Q the conclusion or consequent.

In propositional calculus sentences the symbols ( ) and [ ] are used to group symbols into subexpressions and so to control their order of evaluation and meaning.

The Semantics of the Propositional Calculus

In this section we formally define the semantics, or meaning, of these sentences. Because AI programs must reason with their representational structures, it is important to demonstrate that the truth of their conclusions depends only on the truth of their initial knowledge, i.e., that logical errors are not introduced by the inference procedures. A precise treatment of semantics is essential to this goal.

A proposition symbol corresponds to a statement about the world. For example, P may denote the statement "it is raining" or Q the statement "I live in a brown house." A proposition may be either true or false, given some state of the world. The truth value assignment to propositional sentences is called an interpretation, an assertion about their truth in some possible world.

Formally, an interpretation is a mapping from the propositional symbols into the set {T, F}.

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value, either T or F, to each propositional symbol. The symbol true is always assigned T, and the symbol false is assigned F.
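As an illustration of these definitions (a sketch added here, not from the original text), an interpretation can be stored as a mapping from propositional symbols to truth values, and the truth of a compound sentence computed recursively from the truth of its parts:

# Sentences are nested tuples such as ("implies", "P", ("or", "P", "Q")).
def truth_value(sentence, interpretation):
    if sentence is True or sentence is False:       # the truth symbols true and false
        return sentence
    if isinstance(sentence, str):                   # a propositional symbol: P, Q, R, ...
        return interpretation[sentence]
    op, *parts = sentence
    vals = [truth_value(p, interpretation) for p in parts]
    if op == "not":
        return not vals[0]
    if op == "and":
        return vals[0] and vals[1]
    if op == "or":
        return vals[0] or vals[1]
    if op == "implies":
        return (not vals[0]) or vals[1]
    if op == "equiv":
        return vals[0] == vals[1]
    raise ValueError("unknown connective: " + str(op))

interpretation = {"P": True, "Q": False}            # one possible world
print(truth_value(("implies", "P", ("or", "P", "Q")), interpretation))   # True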

Predicate Calculus

In propositional calculus each atomic symbol (P, Q, etc.) denotes a proposition of some complexity. There is no way to access the components of an individual assertion. Predicate calculus provides this ability. For example, instead of letting a single propositional symbol P denote the entire sentence "it rained on Tuesday," we can create a predicate weather that describes a relationship between a date and the weather: weather(tuesday, rain). Through inference rules we can manipulate predicate calculus expressions, accessing their individual components and inferring new sentences.

Predicate calculus also allows expressions to contain variables. Variables let us create general assertions about classes of entities. For example, we could state that for all values of X, where X is a day of the week, the statement weather(X, rain) is true, i.e., it rains every day. As with propositional calculus, we will first define the syntax of the language and then discuss its semantics.

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus, we define an alphabet and grammar for creating the symbols of the language. This corresponds to the lexical aspect of a programming language definition. Predicate calculus symbols, like the tokens in a programming language, are irreducible syntactic elements: they cannot be broken into their component parts by the operations of the language.

In this text we represent predicate calculus symbols as strings of letters and digits beginning with a letter. Blanks and nonalphanumeric characters cannot appear within the string, although the underscore (_) may be used to improve readability.

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of


1. The set of letters, both upper- and lowercase, of the English alphabet.

2. The set of digits, 0, 1, ..., 9.

3. The underscore, _.

Symbols in the predicate calculus begin with a letter and are followed by any sequence of these legal characters.

Legitimate characters in the alphabet of predicate calculus symbols include

a R 6 9 p _ z

Examples of characters not in the alphabet include

# % @ &

Parentheses ( ), commas, and periods are used solely to construct well-formed expressions and do not denote objects or relations in the world. These are called improper symbols.

Predicate calculus symbols may represent either variables, constants, functions, or predicates. Constants name specific objects or properties in the world. Constant symbols must begin with a lowercase letter. Predicate calculus also allows functions on objects in the world of discourse. Function symbols (like constants) begin with a lowercase letter. Functions denote a mapping of one or more elements in a set (called the domain of the function) into a unique element of another set (the range of the function). Elements of the domain and range are objects in the world of discourse. In addition to common arithmetic functions such as addition and multiplication, functions may define mappings between nonnumeric domains.

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character


4. Function symbols are symbol expressions having the first character lowercase. Functions have an attached arity indicating the number of elements of the domain mapped onto each element of the range.

A function expression consists of a function constant of arity n, followed by n terms, t1, t2, ..., tn, enclosed in parentheses and separated by commas.

A predicate calculus term is either a constant, variable, or function expression. Thus a predicate calculus term may be used to denote objects and properties in a problem domain. Examples of terms are constants such as cat, variables such as X, and function expressions such as times(2, 3) or mother(sarah).

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter.

Predicates have an associated positive integer referred to as the arity or argument number for the predicate. Predicates with the same name but different arities are considered distinct.

An atomic sentence is a predicate constant of arity n, followed by n terms, t1, t2, ..., tn, enclosed in parentheses and separated by commas.

The truth values true and false are also atomic sentences.

Atomic sentences are also called atomic expressions, atoms, or propositions.

We may combine atomic sentences using logical operators to form sentences in the predicate calculus. These are the same logical connectives used in propositional calculus:

∧, ∨, ¬, →, and =

When a variable appears as an argument in a sentence, it refers to unspecified objects in the domain. First-order predicate calculus includes two symbols, the variable quantifiers ∀ and ∃, that constrain the meaning of a sentence containing a variable. A quantifier is followed by a variable and a sentence, such as

∃ Y friends(Y, peter)


∀ X likes(X, ice_cream)

The universal quantifier ∀ indicates that the sentence is true for all values of the variable. In the example, ∀ X likes(X, ice_cream) is true for all values in the domain of the definition of X. The existential quantifier ∃ indicates that the sentence is true for at least one value in the domain. ∃ Y friends(Y, peter) is true if there is at least one object, indicated by Y, that is a friend of peter.

Sentences in the predicate calculus are defined inductively.

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence.

1. If s is a sentence, then so is its negation, ¬s.

2. If s1 and s2 are sentences, then so is their conjunction, s1 ∧ s2.

3. If s1 and s2 are sentences, then so is their disjunction, s1 ∨ s2.

4. If s1 and s2 are sentences, then so is their implication, s1 → s2.
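Over a finite domain, the meaning of the quantifiers can be illustrated with a short Python sketch (added here for illustration; the predicates weather, likes, and friends come from the examples above, while the particular facts and domains are made up):

# Ground atomic sentences stored as tuples: (predicate, term1, term2, ...).
facts = {
    ("weather", "tuesday", "rain"),
    ("likes", "george", "ice_cream"),
    ("likes", "kate", "ice_cream"),
    ("friends", "george", "peter"),
}
days   = ["monday", "tuesday", "wednesday"]
people = ["george", "kate", "peter"]

def holds(predicate, *terms):
    return (predicate, *terms) in facts

# ∀ X weather(X, rain): true only if it holds for every value in the domain of X.
print(all(holds("weather", x, "rain") for x in days))          # False

# ∃ Y friends(Y, peter): true if it holds for at least one value of Y.
print(any(holds("friends", y, "peter") for y in people))       # True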

LISP

Overview

• Lisp stands for "LISt Processing"

– Invented by John McCarthy (1958)

– Simple data structures (atoms and lists)

– Heavy use of recursion

– Prefix notation of expressions

– Interpreted language

• Why Lisp?

– It is the most widely used AI programming language


– It is good for writing production software

– It is especially good for prototyping

– It has lots of features other languages don't

– You can write new programs and extend old programs really, really quickly in Lisp

Table of Contents

Symbols; Numbers; Conses; Lists; Functions; Printing; Forms and the Top-Level Loop; Special Forms; Binding; Dynamic Scoping; Arrays; Strings; Structures; Setf; Booleans and Conditionals; Iteration; Non-local Exits; Funcall, Apply, and Mapcar; Lambda; Sorting; Equality; Some Useful List Functions; Getting Started with Emacs; Further Information

Symbols


A symbol is just a string of characters. There are restrictions on what you can include in a symbol and what the first character can be, but as long as you stick to letters, digits, and hyphens, you'll be safe. (Except that if you use only digits and possibly an initial hyphen, LISP will think you typed an integer rather than a symbol.) Some examples of symbols: a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow. (Things after a > prompt are what you type to the LISP interpreter, while other things are what the LISP interpreter prints back to you. The semicolon (;) is LISP's comment character: everything from a semicolon to the end of the line is ignored.)

> (setq a 5)         ; store a number as the value of a symbol
5
> a                  ; take the value of a symbol
5
> (let ((a 6)) a)    ; bind the value of a symbol temporarily to 6
6
> a                  ; the value returns to 5 once the let is finished
5
> (+ a 6)            ; use the value of a symbol as an argument to a function
11
> b                  ; try to take the value of a symbol which has no value
Error: Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols, t and nil. The value of t is defined always to be t, and the value of nil is defined always to be nil. LISP uses t and nil to represent true and false. An example of this use is in the if statement, described more fully later:

> (if t 5 6)
5
> (if nil 5 6)
6
> (if 4 5 6)
5

keyword

The last example is odd but correct: nil means false, and anything else means true. (Unless we have a reason to do otherwise, we use t to mean true, just for the sake of clarity.) Symbols like t and nil are called self-evaluating symbols, because they evaluate to themselves. There is a whole class of self-evaluating symbols called keywords; any symbol whose name starts with a colon is a keyword. (See below for some uses for keywords.) Some examples:

> :this-is-a-keyword
:THIS-IS-A-KEYWORD
> :so-is-this
:SO-IS-THIS
> :me-too
:ME-TOO

Numbers

An integer is a string of digits optionally preceded by + or -. A real number looks like an integer, except that it has a decimal point and optionally can be written in scientific notation. A rational looks like two integers with a / between them. LISP supports complex numbers, which are written #C(r i) (where r is the real part and i is the imaginary part). A number is any of the above. Here are some numbers:

5 17 -34 +6 3.1415 1.722e-15 #C(1.722e-15 0.75)

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos, tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument. +, -, *, and / return a number according to type contagion: an integer plus a rational is a rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are some examples:

> (+ 3 3/4)      ; type contagion
15/4
> (exp 1)        ; e
2.7182817
> (exp 3)        ; e*e*e
20.085537
> (expt 3 4.2)   ; exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))   ; the functions + - * / all accept multiple arguments
738

There is no limit to the absolute value of an integer, except the memory size of your computer. Be warned that computations with bignums (as large integers are called) can be slow. (So can computations with rationals, especially compared to the corresponding computations with small integers or floats.)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either perception, reasoning, or actuation.

• Perception

– computer vision

– natural language processing

• Reasoning (i.e., problem solving): mapping from percepts to actuators

– automated reasoning

– knowledge representation

– search and optimization

– decision/game theory

– machine learning

• Actuation

– robotics

– softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities. A related (but not a necessary) goal is to understand intelligent entities, and perhaps even to understand and engineer human intelligence (strong AI).

Modern AI can be characterized as the engineering of rational agents. An agent is an entity that (i) perceives, (ii) reasons, and (iii) acts. In computational terms, that which is perceived is an input, to reason is to compute, and to act is to output the result of computation. Typically an agent is equipped with objectives. A rational agent is one that acts optimally with respect to its objectives. Agents are often distinguished from typical computational processes by their autonomy: they operate without direct human intervention. In addition, agents are reactive (they perceive their environments and attempt to respond in a timely manner to changing conditions) and proactive (their behavior is goal directed rather than simply response-driven).

Autonomous agents may be rule-based, goal-based, or utility-based. Rule-based agents operate according to hard-coded sets of rules, like ELIZA. A goal-based agent acts so as to achieve its goals by planning a path from its current state to a goal state, like GPS or theorem provers. Utility-based agents distinguish between goals based on utilities that are associated with goal states.

Agent      Sensors       Actuators
Human      Senses        Arms, Legs
Robotic    Cameras       Motors, Wheels
Software   Bit Strings   Bit Strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows (the examples are summarized in the sketch after this list):

• Deterministic vs. Nondeterministic: is the next state predictable (e.g., chess), or is there uncertainty about state transitions (e.g., backgammon)?

• Discrete vs. Continuous: can the environment be described in discrete terms (e.g., chess), or is the environment continuous (e.g., driving)?

• Static vs. Dynamic: is the environment static (e.g., chess), or can it change while the agent is reasoning about its plan of action (e.g., driving)?

• Sequential vs. One-shot: does the agent need to reason about the future impact of its immediate actions (e.g., chess), or can it treat each action independently (e.g., Rochambeau)?

• Single agent vs. Multiagent: can we assume the agent operates alone in its environment, or need it explicitly reason about the actions of other agents (e.g., chess, backgammon, Rochambeau, driving)?
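These classifications can be recorded in a small sketch (added here for illustration; the attribute names are mine, the values follow the examples in the list above):

# Characterization of three example task environments along the listed dimensions.
environments = {
    "chess":      dict(deterministic=True,  discrete=True,  static=True,  sequential=True, multiagent=True),
    "backgammon": dict(deterministic=False, discrete=True,  static=True,  sequential=True, multiagent=True),
    "driving":    dict(deterministic=False, discrete=False, static=False, sequential=True, multiagent=True),
}
print(environments["driving"])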

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors


An agent can be seen as a mapping between percept sequences and actions:

Agent: percept sequence → action

The less an agent relies on its built-in knowledge, as opposed to the current percept sequence, the more autonomous it is. A rational agent is an agent whose acts try to maximize some performance measure.

Example: Vacuum-cleaner

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

function REFLEX-VACUUM-AGENT([location, status]) returns action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
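A direct Python transcription of the same reflex agent (a sketch added for illustration) is:

def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("A", "Clean")))   # Right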

Intelligent Agents

Agent: an entity in a program or environment capable of generating action. An agent uses perception of the environment to make decisions about actions to take. The perception capability is usually called a sensor. The actions can depend on the most recent perception or on the entire history (percept sequence).

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into an action. The function is implemented as the agent program. The part of the agent taking an action is called an actuator.

environment -> sensors -> agent function -> actuators -> environment
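That cycle can be written out as a simple loop (an illustrative sketch; sense, agent_program, and act are placeholders for whatever sensors, agent function, and actuators a concrete agent supplies):

def run_agent(environment, sense, agent_program, act, steps=10):
    """environment -> sensors -> agent function -> actuators -> environment."""
    percepts = []                          # the percept sequence (history)
    for _ in range(steps):
        percept = sense(environment)       # sensors
        percepts.append(percept)
        action = agent_program(percepts)   # agent function: percept sequence -> action
        act(environment, action)           # actuators change the environment

# Tiny usage example with a one-cell world:
world = {"status": "Dirty"}
run_agent(world,
          sense=lambda env: env["status"],
          agent_program=lambda history: "Suck" if history[-1] == "Dirty" else "NoOp",
          act=lambda env, action: env.update(status="Clean") if action == "Suck" else None,
          steps=3)
print(world)   # {'status': 'Clean'}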


Agent Classification

(a) Table-driven agents

– use a percept sequence / action table in memory to find the next action. They are implemented by a (large) lookup table.

(b) Simple reflex agents

– are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.

(c) Agents with memory

– have internal state, which is used to keep track of past states of the world.

(d) Agents with goals

– are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.

(e) Utility-based agents

– base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

A rational agent is one that can take the right decision in every situation.

Performance measure: a set of criteria / test bed for the success of the agent's behavior. The performance measures should be based on the desired effect of the agent on the environment.

Rationality

The agent's rational behavior depends on:

– the performance measure that defines success

– the agent's knowledge of the environment

– the actions that it is capable of performing

– the current sequence of perceptions

Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice.

An environment can sometimes be completely known in advance.

Exploration: sometimes an agent must perform an action to gather information (to increase perception).

Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment

Task environment - the problem that the agent is a solution to. Properties:
Observable - fully or partially. A fully observable environment needs less representation.
Deterministic or stochastic. Strategic - deterministic except for the actions of other agents.


Environment

Episodic or sequential. Sequential - future actions depend on the previous ones. Episodic - individual, unrelated tasks for the agent to solve.
Static or dynamic.
Discrete or continuous.
Single agent or multi-agent. Multiple agents can be competitive or cooperative.

More Definitions of Agents

"An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher and Spohrer 94)

"Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." (IBM)

"Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment, action to affect conditions in the environment, and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program.
Purpose - an agent has a specific purpose, while programs are multi-functional.
Persistence - an agent's life span is not entirely dependent on a user launching and quitting it.
Autonomy - an agent doesn't need the user's input to function.

Simple Agents

Table-driven agents: the function consists of a lookup table of actions to be taken for every possible state of the environment. If the environment has n variables, each with t possible states, then the table size is t^n. This only works for a small number of possible states of the environment.

Simple reflex agents: deciding on the action to take based only on the current perception and not on the history of perceptions. Based on the condition-action rule:

(if (condition) action)

This works if the environment is fully observable.

Table-driven and simple reflex agents, Lisp version:

(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)
    (setq action (lookup percepts table))
    action))

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

Table-driven and simple reflex agents, Python version:

percepts = []
table = {}

def table_agent(percept):
    percepts.append(percept)
    action = lookup(percepts, table)
    return action

def reflex_agent(percept):
    state = interpret(percept)
    rule = match(state)
    action = decision(rule)
    return action
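The helpers lookup, interpret, match, and decision are left undefined in the text; the stand-ins below (purely hypothetical, added so the sketch can actually run) show one way to fill them in for the vacuum-cleaner world:

# Hypothetical stand-ins so the sketches above can be executed.
table[("A", "Dirty")] = "Suck"     # table keyed (here) on the latest percept only
table[("A", "Clean")] = "Right"

def lookup(percept_history, table):
    return table.get(percept_history[-1], "NoOp")

def interpret(percept):
    return percept                                 # state := the raw percept

def match(state):                                  # find the rule that fires in this state
    return ("status is Dirty", "Suck") if "Dirty" in state else ("otherwise", "NoOp")

def decision(rule):                                # a rule is a (condition, action) pair
    return rule[1]

print(table_agent(("A", "Dirty")))    # Suck
print(reflex_agent(("B", "Dirty")))   # Suck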

Model-Based Reflex Agents

If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe. This usually requires an internal representation of the world (or internal state). Since this representation is a model of the world, we call this a model-based agent.

Model-based reflex agent, Lisp version:

(setq state t)     ; the world model
(setq action nil)  ; latest action

(defun model_reflex_agent (percept)
  (let ((rule t))
    (setq state (update_state state action percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))


Model-based reflex agent, Python version:

state = True    # the world model
action = False  # latest action

def model_reflex_agent(percept):
    global state, action               # the model and last action persist across calls
    state = update_state(state, action, percept)
    rule = match(state)
    action = decision(rule)
    return action

Goal-Driven Agents

The agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal). In some cases the goal is easy to achieve. In others, it involves planning, sifting through a search space for possible solutions, and developing a strategy.

Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.

Goal-based agents choose actions so as to achieve a (given or computed) goal. A goal is a description of a desirable situation. Keeping track of the current state is often not enough; we need to add goals to decide which situations are good. Such agents are deliberative instead of reactive. They may have to consider long sequences of possible actions before deciding whether the goal is achieved; this involves consideration of the future ("what will happen if I do this?"), as in the sketch below.
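A minimal sketch of that deliberation (added for illustration; result and goal_test are hypothetical stand-ins for the agent's world model and goal description) considers what each action would lead to before committing to it:

def goal_based_choice(state, actions, result, goal_test):
    """Return an action whose predicted outcome satisfies the goal, if any."""
    for action in actions:
        if goal_test(result(state, action)):    # "what will happen if I do this?"
            return action
    return None    # no single action reaches the goal; a real planner would search deeper

# Usage with a toy one-dimensional world: reach position 3 starting from position 2.
print(goal_based_choice(
    state=2,
    actions=["left", "right"],
    result=lambda s, a: s + 1 if a == "right" else s - 1,
    goal_test=lambda s: s == 3))    # right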


Learning Agents

Agents capable of acquiring new competence through observations and actions. Components:

– learning element (modifies the performance element)

– performance element (selects actions)

– feedback element (critic)

– exploration element (problem generator)

A minimal skeleton wiring these together is sketched below.
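The skeleton below is illustrative only; the component names follow the list above, and their implementations are placeholders passed in by the caller:

class LearningAgent:
    """Sketch of a learning agent: the critic's feedback drives learning,
    and the problem generator occasionally proposes exploratory actions."""

    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects actions
        self.learning_element = learning_element         # modifies the performance element
        self.critic = critic                             # feedback on how well the agent is doing
        self.problem_generator = problem_generator       # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        self.performance_element = self.learning_element(self.performance_element, feedback)
        exploratory = self.problem_generator(percept)
        return exploratory if exploratory is not None else self.performance_element(percept)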

Table-driven agents

• Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state.

• Problems:

– Too big to generate and to store (chess has about 10^120 states, for example)

– No knowledge of non-perceptual parts of the current state

– Not adaptive to changes in the environment; requires the entire table to be updated if changes occur

– Looping: can't make actions conditional on previous actions/states

Simple reflex agents


• Rule-based reasoning to map from percepts to the optimal action; each rule handles a collection of perceived states.

• Problems:

– Still usually too big to generate and to store

– Still no knowledge of non-perceptual parts of state

– Still not adaptive to changes in the environment; requires the collection of rules to be updated if changes occur

– Still can't make actions conditional on previous state

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?

• A goal specifies a crude distinction between a happy and an unhappy state, but often we need a more general performance measure that describes "degree of happiness."

• Utility function U: State → Reals, indicating a measure of success or happiness when at a given state.

• Allows decisions comparing the choice between conflicting goals, and the choice between likelihood of success and importance of a goal (if achievement is uncertain). A minimal selection rule is sketched below.
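The selection rule itself is short (a sketch added for illustration; result and utility stand for the agent's world model and its utility function U):

def utility_based_choice(state, actions, result, utility):
    """Pick the action whose predicted resulting state has the highest utility."""
    return max(actions, key=lambda action: utility(result(state, action)))

# Usage with a toy example: utility is closeness to position 3.
print(utility_based_choice(
    state=2,
    actions=["left", "right"],
    result=lambda s, a: s + 1 if a == "right" else s - 1,
    utility=lambda s: -abs(s - 3)))    # right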


Agents with memory

• Encode the "internal state" of the world to remember the past, as contained in earlier percepts.

• Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept.

• Requires the ability to represent change in the world; one possibility is to represent just the latest state, but then the agent can't reason about hypothetical courses of action.


Other Types of Agents

Temporally continuous agent - a continuously running process.
Communicative agent - exchanges information with other agents to complete its task.
Mobile agent - capable of moving from one machine to another (or from one environment to another).
Flexible agent - an agent whose actions are not scripted.
Character - an agent with conversation skills, personality, and even an emotional state.

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 10: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

I SOME BASIC IDEAS

To avoid having to cope with the complexities of the physical world much early work in

AI was directed toward abstract activities such as proving theorems playing games like

chess and checkers or solving puzzles To illustrate the discussion in this section we will

use the Tower of Hanoi (Hanoi) a puzzle involving three pegs on which disks can be

placed and a set of disks of varying size

The disks only can be moved one at a time between the pegs and a disk must

never be stacked on top of a smaller one The problem is to transport the entire stack of

disks to another peg Reasoning about such a problem obviously requires representing

states of the world and having some way of specifying the objective or goal These

representations must be rich enough to embody all the aspects of the world that will be

reasoned about In particular since planning is about changing things every property that

might be affected must be represented as dependent on time in some way For Hanoi this

requires only an ability to represent sets of disks positions as the disks are initially as

they are required to be eventually and as they may be in between A planner also needs

to represent what can be donemdashsuch as the moves that can be made as determined by the

nature of the game or puzzle There is a fundamental difference between an agent

executing an action and thus affecting the world and a planning system manipulating

representations to derive information about doing so which we call applying an operator

10

Artificial Intelligence Chapter1Anurag Dixit

Introduction to Predicate calculus

The Propositional Calculus

Symbols and Sentences

The propositional calculus and in the next subsection the predicate calculus are first of

all languages Using their words phrases and sentences we can represent and reason

about properties and relationships in the world The first step in describing a language is

to introduce the pieces that make it up its set of symbols

PROPOSITIONAL CALCULUS SYMBOLS

The symbols of propositional calculus are the propositional symbols

P Q R S

truth symbolstrue false

and connectives V - ~ =

Propositional symbols denote propositions or statements about the world that may be

either true or false such as the car is red or water is wet Propositions are denoted by

uppercase letters near the end of the English alphabet Sentences in the propositional

calculus are fanned from these atomic symbols according to the following rules

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence

For example true P Q and R are sentences

The negation of a sentence is a sentence

For example -- P and - false are sentences

The conjunction or and of two sentences is a sentence

For example P - P is a sentence

The disjunction or or of two sentences is a sentence

For example P v -- P is a sentence

The implication of one sentence from another is a sentence

11

Artificial Intelligence Chapter1Anurag Dixit

For example P -7 Q is a sentence

The equivalence of two sentences is a sentence

For example P v Q = R is a sentence

Legal sentences are also called well-formed formulas or WFFs

In expressions of the form P A Q P and Q are called the conjuncts In P v Q P and Q are

referred to as disjuncts In an implication P -7 Q P is the premise or antecedent and Q

the conclusion or consequent

In propositional calculus sentences the symbols ( ) and [ ] are used to group symbols nto

sub expressions and so to control their order of evaluation and meaning

The Semantics of the Propositional Calculus

In this section we formally define the semantics or meaning of these sentences

Because AI programs must reason with their representational structures it is important to

demonstrate that the truth of their conclusions depends only on the truth of their initial

knowledge ie that logical errors are not introduced by the inference procedures A

precise treatment of semantics is essential to this goal

A proposition symbol corresponds to a statement about the world For example P may

denote the statement it is raining or Q the statement I live in a brown house A

proposition may be either true or false given some state of the world The truth value

assignment to propositional sentences is called an interpretation an assertion about their

truth in some possible world

Formally an interpretation is a mapping from the propositional symbols into the set

T F

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value either T

12

Artificial Intelligence Chapter1Anurag Dixit

or F to each propositional symbol The symbol true is always assigned T and the symbol

false is assigned F

Predicate Calculus

In propositional calculus each atomic symbol (P 0 etc) denotes a proposition of some

complexity There is no way to access the components of an individual assertion

Predicate calculus provides this ability For example instead of letting a single

propositional symbol

P denote the entire sentence it rained on Tuesday we can create a predicate weather

that describes a relationship between a date and the weather weather (tuesday rain)

Through inference rules we can manipulate predicate calculus expressions accessing

their individual components and inferring new sentences

Predicate calculus also allows expressions to contain variables Variables let us create

general assertions about classes of entities For example we could state that for all values

of X where Xis a day of the week the statement weather(X rain) is true ie it rains

every day As with propositional calculus we will first define the syntax of the language

and then discuss its semantics

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus we define an

alphabet and grammar for creating the symbols of the language This corresponds to the

lexical aspect of a programming language definition Predicate calculus symbols like the

tokens in a programming language are irreducible syntactic elements they cannot be

broken into their component parts by the operations of the language

In this text we represent predicate calculus symbols as strings of letters and digits

beginning with a letter Blanks and non alphanumeric characters cannot appear within the

string although the underscore _ may be used to improve readability

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of

13

Artificial Intelligence Chapter1Anurag Dixit

1 The set of letters both upper- and lowercase of the English alphabet

2 The set of digits 0 1 9

3 The underscore _

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include

aR69p_z

Examples of characters not in the alphabet include

amp

Parentheses ( ) commas and periods are used solely to construct well-formed

expressions and do not denote objects or relations in the world These are called improper

symbols

Predicate calculus symbols may represent either variables constants functions or

predicates Constants name specific objects or properties in the world Constant symbols

must begin with a lowercase letter Predicate calculus also allows functions on objects in

the world of discourse Function symbols (like constants) begin with a lowercase letter

Functions denote a mapping of one or more elements in a set (called the domain of the

function) into a unique element of another set (the range of the function) Elements of the

domain and range are objects in k world of discourse In addition to common arithmetic

functions such as addition and multiplication functions may define mappings between

nonnumeric domains

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character

14

Artificial Intelligence Chapter1Anurag Dixit

4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of atiry n followed by n terms

t1 t2 helliphellip tn enclosed in parentheses and separated by commas

A predicate calculus term is either a constant variable or function expression Thus a

predicate calculus term may be used to denote objects and properties in a problem

domain Examples of terms are

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n followed by n terms

t 1 t enclosed in parentheses and separated by commas

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the

predicate calculus These are the same logical connectives used in propositional calculus

A v - ---7 and =

When a variable appears as an argument in a sentence it refers to unspecified objects

in the domain First order (Section 222) predicate calculus includes two symbols the

variable quantifiers V and 3 that constrain the meaning of a sentence containing a

variable A quantifier is followed by a variable and a sentence such as

3 Y friends(Y peter)

15

Artificial Intelligence Chapter1Anurag Dixit

V X likes(X ice_cream)

The universal quantifier v indicates that the sentence is true for all values of the

variable

In the example V X likes(X ice_cream) is true for all values in the domain of the

definition of X The existential quantifier 3 indicates that the sentence is true for at least

one value in the domain 3 Y friends(Y peter) is true if there is at least one object

indicated by Y that is a friend of peter

Sentences in the predicate calculus are defined inductivefy

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence

1 If 5 is a sentence then so is its negation --- 5

2 If s and 52 are sentences then so is their conjunction 5 j 1 52

3 If 81 and 82 are sentences then so is their disjunction s v 52

4 If 51 and Sz are sentences then so is their implication 51 ~ 82

LISP

Overview

bull Lisp stands for ldquoLISt Processrdquo

ndash Invented by John McCarthy (1958)

ndash Simple data structure (atoms and lists)

ndash Heavy use of recursion

ndash Prefix notation of expressions

ndash Interpretive language

bull Why Lisp

ndash It is the most widely used AI programming language

16

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 11: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

Introduction to Predicate calculus

The Propositional Calculus

Symbols and Sentences

The propositional calculus and in the next subsection the predicate calculus are first of

all languages Using their words phrases and sentences we can represent and reason

about properties and relationships in the world The first step in describing a language is

to introduce the pieces that make it up its set of symbols

PROPOSITIONAL CALCULUS SYMBOLS

The symbols of propositional calculus are the propositional symbols

P Q R S

truth symbolstrue false

and connectives V - ~ =

Propositional symbols denote propositions or statements about the world that may be

either true or false such as the car is red or water is wet Propositions are denoted by

uppercase letters near the end of the English alphabet Sentences in the propositional

calculus are fanned from these atomic symbols according to the following rules

PROPOSITIONAL CALCULUS SENTENCES

Every propositional symbol and truth symbol is a sentence

For example true P Q and R are sentences

The negation of a sentence is a sentence

For example -- P and - false are sentences

The conjunction or and of two sentences is a sentence

For example P - P is a sentence

The disjunction or or of two sentences is a sentence

For example P v -- P is a sentence

The implication of one sentence from another is a sentence

11

Artificial Intelligence Chapter1Anurag Dixit

For example P -7 Q is a sentence

The equivalence of two sentences is a sentence

For example P v Q = R is a sentence

Legal sentences are also called well-formed formulas, or WFFs.

In expressions of the form P ∧ Q, P and Q are called the conjuncts. In P ∨ Q, P and Q are referred to as disjuncts. In an implication P → Q, P is the premise or antecedent and Q the conclusion or consequent.

In propositional calculus sentences, the symbols ( ) and [ ] are used to group symbols into subexpressions and so to control their order of evaluation and meaning.

The Semantics of the Propositional Calculus

In this section we formally define the semantics, or meaning, of these sentences. Because AI programs must reason with their representational structures, it is important to demonstrate that the truth of their conclusions depends only on the truth of their initial knowledge, i.e., that logical errors are not introduced by the inference procedures. A precise treatment of semantics is essential to this goal.

A proposition symbol corresponds to a statement about the world. For example, P may denote the statement "it is raining" or Q the statement "I live in a brown house". A proposition may be either true or false, given some state of the world. The truth value assignment to propositional sentences is called an interpretation, an assertion about their truth in some possible world.

Formally, an interpretation is a mapping from the propositional symbols into the set {T, F}.

PROPOSITIONAL CALCULUS SEMANTICS

An interpretation of a set of propositions is the assignment of a truth value, either T or F, to each propositional symbol. The symbol true is always assigned T, and the symbol false is assigned F.
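
As an illustrative aside (not part of the original text), this definition can be turned into a small Python evaluator; sentences are represented here as nested tuples, which is just one possible encoding:

def truth_value(sentence, interpretation):
    # An interpretation maps each propositional symbol (a string) to True or False.
    if sentence == 'true':
        return True                      # the symbol true is always assigned T
    if sentence == 'false':
        return False                     # the symbol false is always assigned F
    if isinstance(sentence, str):
        return interpretation[sentence]  # look the symbol up in the interpretation
    op, args = sentence[0], sentence[1:]
    if op == 'not':
        return not truth_value(args[0], interpretation)
    if op == 'and':
        return truth_value(args[0], interpretation) and truth_value(args[1], interpretation)
    if op == 'or':
        return truth_value(args[0], interpretation) or truth_value(args[1], interpretation)
    if op == 'implies':                  # false only when the premise is T and the conclusion is F
        return (not truth_value(args[0], interpretation)) or truth_value(args[1], interpretation)
    raise ValueError('unknown connective: %r' % op)

print(truth_value(('or', 'P', 'Q'), {'P': True, 'Q': False}))       # True
print(truth_value(('implies', 'P', 'Q'), {'P': True, 'Q': False}))  # False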

Predicate Calculus

In propositional calculus each atomic symbol (P, Q, etc.) denotes a proposition of some complexity. There is no way to access the components of an individual assertion. Predicate calculus provides this ability. For example, instead of letting a single propositional symbol P denote the entire sentence "it rained on Tuesday", we can create a predicate weather that describes a relationship between a date and the weather: weather(tuesday, rain). Through inference rules we can manipulate predicate calculus expressions, accessing their individual components and inferring new sentences.

Predicate calculus also allows expressions to contain variables. Variables let us create general assertions about classes of entities. For example, we could state that for all values of X, where X is a day of the week, the statement weather(X, rain) is true, i.e., it rains every day. As with propositional calculus, we will first define the syntax of the language and then discuss its semantics.

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus, we define an alphabet and grammar for creating the symbols of the language. This corresponds to the lexical aspect of a programming language definition. Predicate calculus symbols, like the tokens in a programming language, are irreducible syntactic elements: they cannot be broken into their component parts by the operations of the language.

In this text we represent predicate calculus symbols as strings of letters and digits beginning with a letter. Blanks and nonalphanumeric characters cannot appear within the string, although the underscore _ may be used to improve readability.

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of:

1. The set of letters, both upper- and lowercase, of the English alphabet.

2. The set of digits, 0, 1, ..., 9.

3. The underscore, _.

Symbols in the predicate calculus begin with a letter and are followed by any sequence of these legal characters.

Legitimate characters in the alphabet of predicate calculus symbols include

a  R  6  9  p  _  z

Examples of characters not in the alphabet include

&  #  %  @
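
This lexical rule can be captured with a regular expression; the following check is a small illustration added here, not part of the original notes:

import re

SYMBOL = re.compile(r'[A-Za-z][A-Za-z0-9_]*$')   # a letter, then any mix of letters, digits, underscores

def is_predicate_calculus_symbol(text):
    return bool(SYMBOL.match(text))

print(is_predicate_calculus_symbol('likes_ice_cream'))  # True
print(is_predicate_calculus_symbol('9lives'))           # False: must begin with a letter
print(is_predicate_calculus_symbol('no&way'))           # False: & is not in the alphabet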

Parentheses ( ), commas, and periods are used solely to construct well-formed expressions and do not denote objects or relations in the world. These are called improper symbols.

Predicate calculus symbols may represent either variables, constants, functions, or predicates. Constants name specific objects or properties in the world. Constant symbols must begin with a lowercase letter. Predicate calculus also allows functions on objects in the world of discourse. Function symbols (like constants) begin with a lowercase letter. Functions denote a mapping of one or more elements in a set (called the domain of the function) into a unique element of another set (the range of the function). Elements of the domain and range are objects in the world of discourse. In addition to common arithmetic functions such as addition and multiplication, functions may define mappings between nonnumeric domains.

SYMBOLS and TERMS

Predicate calculus symbols include:

1. Truth symbols true and false (these are reserved symbols).

2. Constant symbols are symbol expressions having the first character lowercase.

3. Variable symbols are symbol expressions beginning with an uppercase character.

4. Function symbols are symbol expressions having the first character lowercase. Functions have an attached arity indicating the number of elements of the domain mapped onto each element of the range.

A function expression consists of a function constant of arity n, followed by n terms, t1, t2, ..., tn, enclosed in parentheses and separated by commas.

A predicate calculus term is either a constant, variable, or function expression. Thus a predicate calculus term may be used to denote objects and properties in a problem domain. Examples of terms are constants such as cat, variables such as X, and function expressions such as mother(jane).

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter.

Predicates have an associated positive integer referred to as the arity, or argument number, of the predicate. Predicates with the same name but different arities are considered distinct.

An atomic sentence is a predicate constant of arity n, followed by n terms, t1, t2, ..., tn, enclosed in parentheses and separated by commas.

The truth values true and false are also atomic sentences.

Atomic sentences are also called atomic expressions, atoms, or propositions.

We may combine atomic sentences using logical operators to form sentences in the predicate calculus. These are the same logical connectives used in propositional calculus:

∧, ∨, ¬, →, and ≡

When a variable appears as an argument in a sentence, it refers to unspecified objects in the domain. First-order predicate calculus includes two symbols, the variable quantifiers ∀ and ∃, that constrain the meaning of a sentence containing a variable. A quantifier is followed by a variable and a sentence, such as

∃ Y friends(Y, peter)

∀ X likes(X, ice_cream)

The universal quantifier ∀ indicates that the sentence is true for all values of the variable. In the example, ∀ X likes(X, ice_cream) is true for all values in the domain of the definition of X. The existential quantifier ∃ indicates that the sentence is true for at least one value in the domain: ∃ Y friends(Y, peter) is true if there is at least one object, indicated by Y, that is a friend of peter.
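
To give a concrete feel for the quantifiers, here is an illustrative Python sketch (not from the original text) that checks ∀ and ∃ over a small, made-up finite domain:

week = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']
people = ['peter', 'susie', 'anna']
rainy_days = {'monday', 'tuesday'}        # made-up facts, for illustration only
friendships = {('susie', 'peter')}

def weather(day, condition):
    return condition == 'rain' and day in rainy_days

def friends(x, y):
    return (x, y) in friendships

# ∀ X weather(X, rain): true only if it rains on every day of the week
print(all(weather(x, 'rain') for x in week))       # False

# ∃ Y friends(Y, peter): true if at least one person is a friend of peter
print(any(friends(y, 'peter') for y in people))    # True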

Sentences in the predicate calculus are defined inductively.

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence.

1. If s is a sentence, then so is its negation, ¬s.

2. If s1 and s2 are sentences, then so is their conjunction, s1 ∧ s2.

3. If s1 and s2 are sentences, then so is their disjunction, s1 ∨ s2.

4. If s1 and s2 are sentences, then so is their implication, s1 → s2.

LISP

Overview

• Lisp stands for "LISt Processing"
  – Invented by John McCarthy (1958)
  – Simple data structure (atoms and lists)
  – Heavy use of recursion
  – Prefix notation of expressions
  – Interpreted language

• Why Lisp?
  – It is the most widely used AI programming language


  – It is good for writing production software
  – It is especially good for prototyping
  – It has got lots of features other languages don't
  – You can write new programs and extend old programs really, really quickly in Lisp

Table of Contents

Symbols
Numbers
Conses
Lists
Functions
Printing
Forms and the Top-level Loop
Special Forms
Binding
Dynamic Scoping
Arrays
Strings
Structures
Setf
Booleans and Conditionals
Iteration
Non-local Exits
Funcall, Apply, and Mapcar
Lambda
Sorting
Equality
Some Useful List Functions
Getting Started with Emacs
Further Information

Symbols


A symbol is just a string of characters. There are restrictions on what you can include in a symbol and what the first character can be, but as long as you stick to letters, digits, and hyphens, you'll be safe. (Except that if you use only digits and possibly an initial hyphen, LISP will think you typed an integer rather than a symbol.) Some examples of symbols: a, b, c1, foo, bar, baaz-quux-garply.

Some things you can do with symbols follow. (Things after a > prompt are what you type to the LISP interpreter, while other things are what the LISP interpreter prints back to you. The ; is LISP's comment character: everything from a ; to the end of the line is ignored.)

> (setq a 5)       ; store a number as the value of a symbol
5
> a                ; take the value of a symbol
5
> (let ((a 6)) a)  ; bind the value of a symbol temporarily to 6
6
> a                ; the value returns to 5 once the let is finished
5
> (+ a 6)          ; use the value of a symbol as an argument to a function
11
> b                ; try to take the value of a symbol which has no value
Error: Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols, t and nil. The value of t is defined always to be t, and the value of nil is defined always to be nil. LISP uses t and nil to represent true and false. An example of this use is in the if statement, described more fully later:

> (if t 5 6)
5
> (if nil 5 6)
6
> (if 4 5 6)
5

keyword

The last example is odd but correct: nil means false, and anything else means true. (Unless we have a reason to do otherwise, we use t to mean true, just for the sake of clarity.) Symbols like t and nil are called self-evaluating symbols, because they evaluate to themselves. There is a whole class of self-evaluating symbols called keywords: any symbol whose name starts with a colon is a keyword. (See below for some uses for keywords.) Some examples:

> :this-is-a-keyword
:THIS-IS-A-KEYWORD
> :so-is-this
:SO-IS-THIS
> :me-too
:ME-TOO

Numbers

An integer is a string of digits, optionally preceded by + or -. A real number looks like an integer, except that it has a decimal point and optionally can be written in scientific notation. A rational looks like two integers with a / between them. LISP supports complex numbers, which are written #c(r i) (where r is the real part and i is the imaginary part). A number is any of the above. Here are some numbers: 5, 17, -34, +6, 3.1415, 1.722e-15, #c(1.722e-15 0.75).

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos, tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument. +, -, *, and / return a number according to type contagion: an integer plus a rational is a rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are some examples:

> (+ 3 3/4)              ; type contagion
15/4
> (exp 1)                ; e
2.7182817
> (exp 3)                ; e*e*e
20.085537
> (expt 3 4.2)           ; exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))   ; the functions +, -, *, / all accept multiple arguments
738

There is no limit to the absolute value of an integer, except the memory size of your computer. Be warned that computations with bignums (as large integers are called) can be slow. (So can computations with rationals, especially compared to the corresponding computations with small integers or floats.)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either perception, reasoning, or actuation.

• Perception
  – computer vision
  – natural language processing

• Reasoning (i.e., problem solving): mapping from percepts to actuators
  – automated reasoning
  – knowledge representation
  – search and optimization
  – decision/game theory
  – machine learning

• Actuation
  – robotics
  – softbotics

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities. A related (but not a necessary) goal is to understand intelligent entities, and perhaps even to understand and engineer human intelligence (strong AI).

Modern AI can be characterized as the engineering of rational agents. An agent is an entity that (i) perceives, (ii) reasons, and (iii) acts. In computational terms, that which is perceived is an input, to reason is to compute, and to act is to output the result of computation. Typically an agent is equipped with objectives. A rational agent is one that acts optimally with respect to its objectives. Agents are often distinguished from typical computational processes by their autonomy: they operate without direct human intervention. In addition, agents are reactive (they perceive their environments and attempt to respond in a timely manner to changing conditions) and proactive (their behavior is goal-directed rather than simply response-driven).

Autonomous agents may be rule-based, goal-based, or utility-based. Rule-based agents operate according to hard-coded sets of rules, like ELIZA. A goal-based agent acts so as to achieve its goals by planning a path from its current state to a goal state, like GPS or theorem provers. Utility-based agents distinguish between goals based on utilities that are associated with goal states.

Agent      Sensors       Actuators
Human      Senses        Arms, Legs
Robotic    Cameras       Motors, Wheels
Software   Bit Strings   Bit Strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows:

• Deterministic vs. Nondeterministic: is the next state predictable (e.g., chess), or is there uncertainty about state transitions (e.g., backgammon)?

• Discrete vs. Continuous: can the environment be described in discrete terms (e.g., chess), or is the environment continuous (e.g., driving)?

• Static vs. Dynamic: is the environment static (e.g., chess), or can it change while the agent is reasoning about its plan of action (e.g., driving)?

• Sequential vs. One-shot: does the agent need to reason about the future impact of its immediate actions (e.g., chess), or can it treat each action independently (e.g., Rochambeau)?

• Single agent vs. Multiagent: can we assume the agent operates alone in its environment, or need it explicitly reason about the actions of other agents (e.g., chess, backgammon, Rochambeau, driving)?

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors


An agent can be seen as a mapping between percept sequences and actions:

Agent: Percept → Action

The less an agent relies on its built-in knowledge, as opposed to the current percept sequence, the more autonomous it is. A rational agent is an agent whose acts try to maximize some performance measure.

Example: Vacuum-cleaner

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
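
A direct Python transcription of this pseudocode (a sketch for the same two-square A/B world) might look like:

def reflex_vacuum_agent(percept):
    location, status = percept        # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'
    return 'NoOp'

print(reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(reflex_vacuum_agent(('B', 'Clean')))  # Left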

Intelligent Agents

Agent: an entity in a program or environment capable of generating action. An agent uses perception of the environment to make decisions about the actions to take. The perception capability is usually called a sensor. The actions can depend on the most recent perception or on the entire history (percept sequence).

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into an action.

The function is implemented as the agent program. The part of the agent taking an action is called an actuator.

environment -> sensors -> agent function -> actuators -> environment
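
This cycle can be pictured as a simple run loop. The Python sketch below is illustrative only; env.percept(), env.apply(), and agent_program are assumed names, not an API defined in these notes:

def run(env, agent_program, steps=100):
    percepts = []                         # the percept sequence observed so far
    for _ in range(steps):
        percept = env.percept()           # environment -> sensors
        percepts.append(percept)
        action = agent_program(percepts)  # agent function: percept sequence -> action
        env.apply(action)                 # actuators -> environment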


Agent Classification

(a) Table-driven agents
– use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.

(b) Simple reflex agents
– are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.

(c) Agents with memory
– have internal state, which is used to keep track of past states of the world.

(d) Agents with goals
– are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.

(e) Utility-based agents
– base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

A rational agent is one that can take the right decision in every situation.
Performance measure: a set of criteria/test bed for the success of the agent's behavior.
The performance measures should be based on the desired effect of the agent on the environment.

Rationality

The agent's rational behavior depends on:
– the performance measure that defines success
– the agent's knowledge of the environment
– the actions that it is capable of performing
– the current sequence of perceptions

Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice.
An environment can sometimes be completely known in advance.
Exploration: sometimes an agent must perform an action to gather information (to increase perception).
Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment

Task environment - the problem that the agent is a solution to. Properties:
Observable - fully or partially. A fully observable environment needs less representation.
Deterministic or stochastic. Strategic - deterministic except for the actions of other agents.


Environment

Episodic or sequential. Sequential - future actions depend on the previous ones. Episodic - individual, unrelated tasks for the agent to solve.
Static or dynamic.
Discrete or continuous.
Single agent or multi-agent. Multiple agents can be competitive or cooperative.

More Definitions of Agents

"An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher, and Spohrer 94)

"Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy, and in so doing employ some knowledge or representation of the user's goals or desires." (IBM)

"Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program.
Purpose - an agent has a specific purpose, while programs are multi-functional.
Persistence - an agent's life span is not entirely dependent on a user launching and quitting it.
Autonomy - an agent doesn't need the user's input to function.

Simple Agents

Table-driven agents: the function consists of a lookup table of actions to be taken for every possible state of the environment.
If the environment has n variables, each with t possible states, then the table size is t^n; for example, 10 variables with 2 states each already give 2^10 = 1024 entries.
Only works for a small number of possible states of the environment.
Simple reflex agents: decide on the action to take based only on the current perception, and not on the history of perceptions. Based on the condition-action rule:
(if (condition) action)
Works if the environment is fully observable.

Table-driven agent, Lisp version:

(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)                ; add the new percept to the percept sequence
    (setq action (lookup percepts table))  ; look up the action for this percept sequence
    action))

Simple reflex agent, Lisp version:

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))   ; interpret the current percept as a state
    (setq rule (match state))          ; find a condition-action rule matching the state
    (setq action (decision rule))      ; take the action part of the rule
    action))

Table-driven agent, Python version:

percepts = []
table = {}   # percept-sequence/action lookup table

def table_agent(percept):
    percepts.append(percept)          # add the new percept to the percept sequence
    action = lookup(percepts, table)  # look up the action for this percept sequence
    return action

Simple reflex agent, Python version:

def reflex_agent(percept):
    state = interpret(percept)  # interpret the current percept as a state
    rule = match(state)         # find a condition-action rule matching the state
    action = decision(rule)     # take the action part of the rule
    return action

Model-Based Reflex Agents

If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe.
This usually requires an internal representation of the world (or internal state). Since this representation is a model of the world, we call this a model-based agent.

Model-based reflex agent, Lisp version:

(setq state t)     ; the world model
(setq action nil)  ; latest action

(defun model_reflex_agent (percept)
  (let ((rule t))
    (setq state (update_state state action percept))  ; update the world model
    (setq rule (match state))
    (setq action (decision rule))
    action))


Model-based reflex agent, Python version:

state = True    # the world model
action = False  # latest action

def model_reflex_agent(percept):
    global state, action
    state = update_state(state, action, percept)  # update the world model
    rule = match(state)
    action = decision(rule)
    return action

Goal-Driven Agents

The agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal).
In some cases the goal is easy to achieve. In others, it involves planning, sifting through a search space for possible solutions, and developing a strategy.
Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.
Choose actions so as to achieve a (given or computed) goal. A goal is a description of a desirable situation. Keeping track of the current state is often not enough; we need to add goals to decide which situations are good. Deliberative instead of reactive. The agent may have to consider long sequences of possible actions before deciding whether the goal is achieved; this involves consideration of the future: "what will happen if I do ...?" (see the sketch below).
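
The sketch below is added for illustration; successors and goal_test are assumed to be supplied by the problem. It shows one way a goal-driven agent can look ahead: it searches breadth-first through possible action sequences and returns the first action of a shortest plan that reaches a goal state.

from collections import deque

def goal_based_agent(state, goal_test, successors):
    # successors(state) yields (action, next_state) pairs.
    if goal_test(state):
        return None                              # already at the goal; nothing to do
    frontier = deque([(state, [])])              # (state, plan that reaches it)
    visited = {state}
    while frontier:
        current, plan = frontier.popleft()
        for action, nxt in successors(current):
            if nxt in visited:
                continue
            if goal_test(nxt):
                return (plan + [action])[0]      # execute only the first step of the plan
            visited.add(nxt)
            frontier.append((nxt, plan + [action]))
    return None                                  # no sequence of actions achieves the goal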


Learning Agents

Agents capable of acquiring new competence through observations and actions. Components:
– learning element (modifies the performance element)
– performance element (selects actions)
– feedback element (critic)
– exploration element (problem generator)

Table-driven agents

• Table lookup of percept-action pairs, mapping from every possible perceived state to the optimal action for that state.
• Problems:
  – Too big to generate and to store (chess has about 10^120 states, for example)
  – No knowledge of non-perceptual parts of the current state
  – Not adaptive to changes in the environment; requires the entire table to be updated if changes occur
  – Looping: can't make actions conditional on previous actions/states

Simple reflex agents


• Rule-based reasoning to map from percepts to the optimal action; each rule handles a collection of perceived states.
• Problems:
  – Still usually too big to generate and to store
  – Still no knowledge of non-perceptual parts of state
  – Still not adaptive to changes in the environment; requires the collection of rules to be updated if changes occur
  – Still can't make actions conditional on the previous state



  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 13: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

or F to each propositional symbol The symbol true is always assigned T and the symbol

false is assigned F

Predicate Calculus

In propositional calculus each atomic symbol (P 0 etc) denotes a proposition of some

complexity There is no way to access the components of an individual assertion

Predicate calculus provides this ability For example instead of letting a single

propositional symbol

P denote the entire sentence it rained on Tuesday we can create a predicate weather

that describes a relationship between a date and the weather weather (tuesday rain)

Through inference rules we can manipulate predicate calculus expressions accessing

their individual components and inferring new sentences

Predicate calculus also allows expressions to contain variables Variables let us create

general assertions about classes of entities For example we could state that for all values

of X where Xis a day of the week the statement weather(X rain) is true ie it rains

every day As with propositional calculus we will first define the syntax of the language

and then discuss its semantics

The Syntax of Predicates and Sentences

Before defining the syntax of correct expressions in the predicate calculus we define an

alphabet and grammar for creating the symbols of the language This corresponds to the

lexical aspect of a programming language definition Predicate calculus symbols like the

tokens in a programming language are irreducible syntactic elements they cannot be

broken into their component parts by the operations of the language

In this text we represent predicate calculus symbols as strings of letters and digits

beginning with a letter Blanks and non alphanumeric characters cannot appear within the

string although the underscore _ may be used to improve readability

PREDICATE CALCULUS SYMBOLS

The alphabet that makes up the symbols of the predicate calculus consists of

13

Artificial Intelligence Chapter1Anurag Dixit

1 The set of letters both upper- and lowercase of the English alphabet

2 The set of digits 0 1 9

3 The underscore _

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include

aR69p_z

Examples of characters not in the alphabet include

amp

Parentheses ( ) commas and periods are used solely to construct well-formed

expressions and do not denote objects or relations in the world These are called improper

symbols

Predicate calculus symbols may represent either variables constants functions or

predicates Constants name specific objects or properties in the world Constant symbols

must begin with a lowercase letter Predicate calculus also allows functions on objects in

the world of discourse Function symbols (like constants) begin with a lowercase letter

Functions denote a mapping of one or more elements in a set (called the domain of the

function) into a unique element of another set (the range of the function) Elements of the

domain and range are objects in k world of discourse In addition to common arithmetic

functions such as addition and multiplication functions may define mappings between

nonnumeric domains

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character

14

Artificial Intelligence Chapter1Anurag Dixit

4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of atiry n followed by n terms

t1 t2 helliphellip tn enclosed in parentheses and separated by commas

A predicate calculus term is either a constant variable or function expression Thus a

predicate calculus term may be used to denote objects and properties in a problem

domain Examples of terms are

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n followed by n terms

t 1 t enclosed in parentheses and separated by commas

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the

predicate calculus These are the same logical connectives used in propositional calculus

A v - ---7 and =

When a variable appears as an argument in a sentence it refers to unspecified objects

in the domain First order (Section 222) predicate calculus includes two symbols the

variable quantifiers V and 3 that constrain the meaning of a sentence containing a

variable A quantifier is followed by a variable and a sentence such as

3 Y friends(Y peter)

15

Artificial Intelligence Chapter1Anurag Dixit

V X likes(X ice_cream)

The universal quantifier ∀ indicates that the sentence is true for all values of the variable. In the example, ∀ X likes(X, ice_cream) is true for all values in the domain of the definition of X. The existential quantifier ∃ indicates that the sentence is true for at least one value in the domain: ∃ Y friends(Y, peter) is true if there is at least one object, indicated by Y, that is a friend of peter.

Sentences in the predicate calculus are defined inductively.

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence.

1. If s is a sentence, then so is its negation, ¬s.

2. If s1 and s2 are sentences, then so is their conjunction, s1 ∧ s2.

3. If s1 and s2 are sentences, then so is their disjunction, s1 ∨ s2.

4. If s1 and s2 are sentences, then so is their implication, s1 → s2.
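To illustrate the inductive definition, here is a small Python sketch that is not part of the original text; representing sentences as nested tuples is an assumption made for the example. It builds the compound sentence ∀ X (likes(X, ice_cream) → ∃ Y friends(Y, peter)) and checks well-formedness by applying the rules above recursively.

# Atomic sentences are ("pred", name, terms); compound sentences start with a
# connective ("not", "and", "or", "implies") or a quantifier ("forall", "exists").
def is_sentence(s):
    tag = s[0]
    if tag == "pred":                        # every atomic sentence is a sentence
        return True
    if tag == "not":                         # rule 1: negation
        return is_sentence(s[1])
    if tag in ("and", "or", "implies"):      # rules 2-4: binary connectives
        return is_sentence(s[1]) and is_sentence(s[2])
    if tag in ("forall", "exists"):          # quantified sentence: (quantifier, variable, body)
        return isinstance(s[1], str) and is_sentence(s[2])
    return False

# forall X (likes(X, ice_cream) implies exists Y friends(Y, peter))
example = ("forall", "X",
           ("implies",
            ("pred", "likes", ("X", "ice_cream")),
            ("exists", "Y", ("pred", "friends", ("Y", "peter")))))

print(is_sentence(example))   # True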

LISP

Overview

• Lisp stands for "LISt Processing"

– Invented by John McCarthy (1958)

– Simple data structure (atoms and lists)

– Heavy use of recursion

– Prefix notation of expressions

– Interpretive language

• Why Lisp?

– It is the most widely used AI programming language

– It is good for writing production software

– It is especially good for prototyping

– It has lots of features other languages don't

– You can write new programs and extend old programs really, really quickly in Lisp

Table of Contents

Symbols; Numbers; Conses; Lists; Functions; Printing; Forms and the Top-level Loop; Special Forms; Binding; Dynamic Scoping; Arrays; Strings; Structures; Setf; Booleans and Conditionals; Iteration; Non-local Exits; Funcall, Apply, and Mapcar; Lambda; Sorting; Equality; Some Useful List Functions; Getting Started with Emacs; Further Information

Symbols


A symbol is just a string of characters. There are restrictions on what you can include in a symbol and what the first character can be, but as long as you stick to letters, digits, and hyphens, you'll be safe. (Except that if you use only digits, and possibly an initial hyphen, LISP will think you typed an integer rather than a symbol.) Some examples of symbols: a, b, c1, foo, bar, baaz-quux-garply

Some things you can do with symbols follow. (Things after a > prompt are what you type to the LISP interpreter, while other things are what the LISP interpreter prints back to you. The ; is LISP's comment character: everything from a ; to the end of the line is ignored.)

> (setq a 5)        ; store a number as the value of a symbol
5
> a                 ; take the value of a symbol
5
> (let ((a 6)) a)   ; bind the value of a symbol temporarily to 6
6
> a                 ; the value returns to 5 once the let is finished
5
> (+ a 6)           ; use the value of a symbol as an argument to a function
11
> b                 ; try to take the value of a symbol which has no value
Error: Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols, t and nil. The value of t is defined always to be t, and the value of nil is defined always to be nil. LISP uses t and nil to represent true and false. An example of this use is in the if statement, described more fully later:

> (if t 5 6)
5
> (if nil 5 6)
6
> (if 4 5 6)
5

Keywords

The last example is odd but correct: nil means false, and anything else means true. (Unless we have a reason to do otherwise, we use t to mean true, just for the sake of clarity.) Symbols like t and nil are called self-evaluating symbols, because they evaluate to themselves. There is a whole class of self-evaluating symbols called keywords; any symbol whose name starts with a colon is a keyword. (See below for some uses for keywords.) Some examples:

> :this-is-a-keyword
:THIS-IS-A-KEYWORD
> :so-is-this
:SO-IS-THIS
> :me-too
:ME-TOO

Numbers

An integer is a string of digits, optionally preceded by + or -. A real number looks like an integer, except that it has a decimal point and optionally can be written in scientific notation. A rational looks like two integers with a / between them. LISP supports complex numbers, which are written #c(r i) (where r is the real part and i is the imaginary part). A number is any of the above. Here are some numbers:

5  17  -34  +6  3.1415  1.722e-15  #c(1.722e-15 0.75)

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos, tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument. +, -, *, and / return a number according to type contagion: an integer plus a rational is a rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are some examples:

> (+ 3 3/4)     ;; type contagion
15/4
> (exp 1)       ;; e
2.7182817
> (exp 3)       ;; e*e*e
20.085537
> (expt 3 4.2)  ;; exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))  ;; the fns + - * / all accept multiple arguments
738

There is no limit to the absolute value of an integer, except the memory size of your computer. Be warned that computations with bignums (as large integers are called) can be slow. (So can computations with rationals, especially compared to the corresponding computations with small integers or floats.)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either perception, reasoning, or actuation.

• Perception

– computer vision

– natural language processing

• Reasoning (i.e., problem solving): mapping from percepts to actuators

– automated reasoning

– knowledge representation

– search and optimization

– decision/game theory

– machine learning

• Actuation

– robotics

– softbotics

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities. A related (but not a necessary) goal is to understand intelligent entities, and perhaps even to understand and engineer human intelligence (strong AI).

Modern AI can be characterized as the engineering of rational agents. An agent is an entity that (i) perceives, (ii) reasons, and (iii) acts. In computational terms, that which is perceived is an input, to reason is to compute, and to act is to output the result of computation. Typically, an agent is equipped with objectives. A rational agent is one that acts optimally with respect to its objectives. Agents are often distinguished from typical computational processes by their autonomy: they operate without direct human intervention. In addition, agents are reactive (they perceive their environments and attempt to respond in a timely manner to changing conditions) and proactive (their behavior is goal-directed rather than simply response-driven).

Autonomous agents may be rule-based, goal-based, or utility-based. Rule-based agents operate according to hard-coded sets of rules, like ELIZA. A goal-based agent acts so as to achieve its goals by planning a path from its current state to a goal state, like GPS (the General Problem Solver) or theorem provers. Utility-based agents distinguish between goals based on utilities that are associated with goal states.

Agent       Sensors        Actuators
Human       Senses         Arms, Legs
Robotic     Cameras        Motors, Wheels
Software    Bit Strings    Bit Strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows:

• Deterministic vs. Nondeterministic: is the next state predictable (e.g., chess), or is there uncertainty about state transitions (e.g., backgammon)?

• Discrete vs. Continuous: can the environment be described in discrete terms (e.g., chess), or is the environment continuous (e.g., driving)?

• Static vs. Dynamic: is the environment static (e.g., chess), or can it change while the agent is reasoning about its plan of action (e.g., driving)?

• Sequential vs. One-shot: does the agent need to reason about the future impact of its immediate actions (e.g., chess), or can it treat each action independently (e.g., Rochambeau)?

• Single agent vs. Multiagent: can we assume the agent is operating alone in its environment, or does it need to explicitly reason about the actions of other agents (e.g., chess, backgammon, Rochambeau, driving)?

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors.

An agent can be seen as a mapping between percept sequences and actions:

Agent: Percept → Action

The less an agent relies on its built-in knowledge, as opposed to the current percept sequence, the more autonomous it is. A rational agent is an agent whose acts try to maximize some performance measure.

Example: Vacuum-cleaner

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

function REFLEX-VACUUM-AGENT([location, status]) returns action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
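A runnable Python translation of this pseudocode, in the style of the agent sketches later in the chapter; the function name and string constants are choices made for this example:

def reflex_vacuum_agent(percept):
    # percept is a (location, status) pair, e.g. ("A", "Dirty")
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
    return "NoOp"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("B", "Clean")))   # Left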

Intelligent Agents

• Agent: an entity in a program or environment capable of generating action.
• An agent uses perception of the environment to make decisions about the actions to take.
• The perception capability is usually called a sensor.
• The actions can depend on the most recent perception or on the entire history (percept sequence).

Agent Function

• The agent function is a mathematical function that maps a sequence of perceptions into an action.
• The function is implemented as the agent program.
• The part of the agent taking an action is called an actuator.
• environment -> sensors -> agent function -> actuators -> environment (a minimal sketch of this loop follows)
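The pipeline in the last item can be made concrete with a small sketch; the toy Environment class and the history-based agent below are illustrative assumptions, not part of the original text:

class Environment:
    # Toy two-square vacuum world, used only to make the loop runnable.
    def __init__(self):
        self.location = "A"
        self.status = {"A": "Dirty", "B": "Dirty"}

    def sense(self):                          # sensors
        return (self.location, self.status[self.location])

    def execute(self, action):                # actuators
        if action == "Suck":
            self.status[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def history_agent(percepts):
    # Agent function: maps the whole percept sequence to an action.
    seen_clean = {loc for loc, status in percepts if status == "Clean"}
    location, status = percepts[-1]
    if status == "Dirty":
        return "Suck"
    if seen_clean >= {"A", "B"}:
        return "NoOp"                         # history says both squares are clean
    return "Right" if location == "A" else "Left"

def run(agent_function, env, steps=6):
    percepts = []                             # percept sequence (history)
    for _ in range(steps):
        percepts.append(env.sense())          # environment -> sensors
        action = agent_function(percepts)     # sensors -> agent function
        env.execute(action)                   # agent function -> actuators -> environment
    return env.status

print(run(history_agent, Environment()))      # {'A': 'Clean', 'B': 'Clean'}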


Agent Classification

(a) Table-driven agents

– use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.

(b) Simple reflex agents

– are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.

(c) Agents with memory

– have internal state, which is used to keep track of past states of the world.

(d) Agents with goals

– are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.

(e) Utility-based agents

– base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

• A rational agent is one that can take the right decision in every situation.
• Performance measure: a set of criteria/test bed for the success of the agent's behavior.
• The performance measures should be based on the desired effect of the agent on the environment.

Rationality

• The agent's rational behavior depends on:
– the performance measure that defines success
– the agent's knowledge of the environment
– the actions that it is capable of performing
– the current sequence of perceptions
• Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

• An agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice.
• An environment can sometimes be completely known in advance.
• Exploration: sometimes an agent must perform an action to gather information (to increase perception).
• Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment

• Task environment: the problem that the agent is a solution to.
• Properties:
– Observable: fully or partially. A fully observable environment needs less representation.
– Deterministic or stochastic.
– Strategic: deterministic except for the actions of other agents.


– Episodic or sequential. Sequential: future actions depend on the previous ones. Episodic: individual, unrelated tasks for the agent to solve.
– Static or dynamic.
– Discrete or continuous.
– Single agent or multi-agent; multiple agents can be competitive or cooperative.

More Definitions of Agents

• "An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher, and Spohrer, 1994)

• "Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." (IBM)

• "Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." (Hayes-Roth, 1994)

Agent vs Program

• Size: an agent is usually smaller than a program.
• Purpose: an agent has a specific purpose, while programs are multi-functional.
• Persistence: an agent's life span is not entirely dependent on a user launching and quitting it.
• Autonomy: an agent doesn't need the user's input to function.

Simple Agents

• Table-driven agents: the function consists of a lookup table of actions to be taken for every possible state of the environment.
– If the environment has n variables, each with t possible states, then the table size is t^n.
– Only works for a small number of possible states of the environment.
• Simple reflex agents: decide on the action to take based only on the current perception, not on the history of perceptions.
– Based on the condition-action rule: (if (condition) action).
– Works if the environment is fully observable.

Table-driven and simple reflex agents, Lisp version:

(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)
    (setq action (lookup percepts table))
    action))

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

Table-driven and simple reflex agents, Python version:

percepts = []
table = {}   # lookup table mapping percept sequences to actions

def table_agent(percept):
    percepts.append(percept)
    action = lookup(percepts, table)
    return action

def reflex_agent(percept):
    state = interpret(percept)
    rule = match(state)
    action = decision(rule)
    return action
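The helpers lookup, interpret, match, and decision are left undefined above. One possible set of stubs that makes the Python sketch runnable, reusing the two-square vacuum world from the earlier example; these stubs are illustrative assumptions, not part of the original text:

def lookup(percept_sequence, table):
    # Return the action stored for the percept sequence seen so far; default to NoOp.
    return table.get(tuple(percept_sequence), "NoOp")

def interpret(percept):
    # For this toy world, the state is simply the current percept.
    return percept

def match(state):
    # Condition-action rules for the two-square vacuum world.
    location, status = state
    if status == "Dirty":
        return ("dirty-square", "Suck")
    return ("clean-square", "Right" if location == "A" else "Left")

def decision(rule):
    # The action is the right-hand side of the matched rule.
    condition, action = rule
    return action

print(reflex_agent(("A", "Dirty")))   # Suck
print(reflex_agent(("B", "Clean")))   # Left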

Model-Based Reflex Agents

If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe.

This usually requires an internal representation of the world (or internal state). Since this representation is a model of the world, we call this a model-based agent.

Model-based reflex agent, Lisp version:

(setq state t)      ; the world model
(setq action nil)   ; latest action

(defun model_reflex_agent (percept)
  (let ((rule t))
    (setq state (update_state state action percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))


Model-based reflex agent, Python version:

state = True        # the world model
action = False      # latest action

def model_reflex_agent(percept):
    global state, action               # update the module-level world model and latest action
    state = update_state(state, action, percept)
    rule = match(state)
    action = decision(rule)
    return action
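As a concrete illustration of keeping such an internal state, here is a small Python sketch of a model-based vacuum agent that remembers the last observed status of the square it cannot currently see; the world model and names are assumptions made for this example, not part of the original text:

# Model-based vacuum agent: remembers the last known status of each square,
# since it can only observe the square it is currently in.
world_model = {"A": "Unknown", "B": "Unknown"}

def model_based_vacuum_agent(percept):
    location, status = percept
    world_model[location] = status             # update the internal state
    if status == "Dirty":
        return "Suck"
    other = "B" if location == "A" else "A"
    if world_model[other] != "Clean":          # unknown or dirty: go check or clean it
        return "Right" if location == "A" else "Left"
    return "NoOp"                              # model says everything is clean

print(model_based_vacuum_agent(("A", "Dirty")))   # Suck
print(model_based_vacuum_agent(("A", "Clean")))   # Right
print(model_based_vacuum_agent(("B", "Clean")))   # NoOp (model: A already clean)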

Goal-Driven Agents

• The agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal).
• In some cases the goal is easy to achieve; in others it involves planning, sifting through a search space of possible solutions, and developing a strategy.
• Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.
• Choose actions so as to achieve a (given or computed) goal.
• A goal is a description of a desirable situation.
• Keeping track of the current state is often not enough; goals are needed to decide which situations are good.
• Deliberative instead of reactive.
• May have to consider long sequences of possible actions before deciding whether the goal is achieved; this involves consideration of the future ("what will happen if I do ...?") (see the planning sketch below).
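As a minimal illustration of planning (sifting through a search space for a sequence of actions that reaches the goal), here is a sketch using breadth-first search over the two-square vacuum world; the state representation and helper names are assumptions made for the example:

from collections import deque

def plan(start, goal, successors):
    # Breadth-first search from start to goal; returns a list of actions.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None

def successors(state):
    # State = (location, A_is_dirty, B_is_dirty) in a two-square vacuum world.
    location, dirt_a, dirt_b = state
    result = [("Right", ("B", dirt_a, dirt_b)),
              ("Left",  ("A", dirt_a, dirt_b))]
    if location == "A" and dirt_a:
        result.append(("Suck", ("A", False, dirt_b)))
    if location == "B" and dirt_b:
        result.append(("Suck", ("B", dirt_a, False)))
    return result

print(plan(("A", True, True), ("B", False, False), successors))
# ['Suck', 'Right', 'Suck']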


Learning Agents

• Agents capable of acquiring new competence through observations and actions.
• Components (a sketch of how they fit together follows below):
– learning element (modifies the performance element)
– performance element (selects actions)
– feedback element (critic)
– exploration element (problem generator)
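One way these four components might be wired together is sketched below in Python; the learning rule (a running average of rewards) and all names are illustrative assumptions, not part of the original text:

import random

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.action_values = {}   # knowledge used by the performance element

    def performance_element(self, percept):
        # Select the action currently believed to be best for this percept.
        values = self.action_values.get(percept)
        if not values:
            return random.choice(self.actions)
        return max(values, key=values.get)

    def critic(self, percept, action, reward):
        # Feedback element: turn the observed reward into a learning signal.
        return reward

    def learning_element(self, percept, action, feedback):
        # Modify the performance element: running average of the feedback.
        values = self.action_values.setdefault(
            percept, {a: 0.0 for a in self.actions})
        values[action] += 0.1 * (feedback - values[action])

    def problem_generator(self, percept):
        # Exploration element: occasionally try a random action.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return self.performance_element(percept)

    def step(self, percept, reward_of):
        action = self.problem_generator(percept)
        feedback = self.critic(percept, action, reward_of(percept, action))
        self.learning_element(percept, action, feedback)
        return action

agent = LearningAgent(["Left", "Right", "Suck"])
reward = lambda percept, action: 1.0 if percept[1] == "Dirty" and action == "Suck" else 0.0
for _ in range(200):
    agent.step(("A", "Dirty"), reward)
print(agent.performance_element(("A", "Dirty")))   # usually Suck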

Table-driven agents

• Table lookup of percept-action pairs: a mapping from every possible perceived state to the optimal action for that state.

• Problems:
– Too big to generate and to store (chess has about 10^120 states, for example).
– No knowledge of non-perceptual parts of the current state.
– Not adaptive to changes in the environment; the entire table must be updated if changes occur.
– Looping: can't make actions conditional on previous actions/states.

Simple reflex agents


• Rule-based reasoning to map from percepts to the optimal action; each rule handles a collection of perceived states.

• Problems:
– Still usually too big to generate and to store.
– Still no knowledge of non-perceptual parts of state.
– Still not adaptive to changes in the environment; the collection of rules must be updated if changes occur.
– Still can't make actions conditional on previous state.

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?
• A goal specifies a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes a "degree of happiness".
• Utility function U: State → Reals, indicating a measure of success or happiness when at a given state.
• Allows decisions that compare choices between conflicting goals, and choices between the likelihood of success and the importance of a goal (if achievement is uncertain). (See the sketch below.)
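A minimal sketch of utility-based action selection, choosing the action with the highest expected utility; the utility values and outcome probabilities are illustrative assumptions, not part of the original text:

def utility(state):
    # Illustrative utilities of outcome states.
    return {"big_win": 10.0, "small_win": 3.0, "nothing": 0.0}[state]

def outcomes(action):
    # Illustrative outcome model: (probability, resulting state) pairs.
    return {
        "risky": [(0.2, "big_win"), (0.8, "nothing")],   # expected utility 2.0
        "safe":  [(1.0, "small_win")],                   # expected utility 3.0
    }[action]

def expected_utility(action):
    return sum(p * utility(s) for p, s in outcomes(action))

def choose_action(actions):
    # Pick the action with the highest expected utility.
    return max(actions, key=expected_utility)

print(choose_action(["risky", "safe"]))   # safe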


Agents with memory

• Encode the "internal state" of the world to remember the past as contained in earlier percepts.
• Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept.
• Requires the ability to represent change in the world; one possibility is to represent just the latest state, but then the agent can't reason about hypothetical courses of action.


Other Types of Agents

• Temporally continuous: a continuously running process.
• Communicative agent: exchanges information with other agents to complete its task.
• Mobile agent: capable of moving from one machine to another (or from one environment to another).
• Flexible agent: an agent whose actions are not scripted.
• Character: an agent with conversation skills, personality, and even emotional state.

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 14: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

1 The set of letters both upper- and lowercase of the English alphabet

2 The set of digits 0 1 9

3 The underscore _

Symbols in the predicate calculus begin with a letter and are followed by any sequence of

these legal characters

Legitimate characters in the alphabet of predicate calculus symbols include

aR69p_z

Examples of characters not in the alphabet include

amp

Parentheses ( ) commas and periods are used solely to construct well-formed

expressions and do not denote objects or relations in the world These are called improper

symbols

Predicate calculus symbols may represent either variables constants functions or

predicates Constants name specific objects or properties in the world Constant symbols

must begin with a lowercase letter Predicate calculus also allows functions on objects in

the world of discourse Function symbols (like constants) begin with a lowercase letter

Functions denote a mapping of one or more elements in a set (called the domain of the

function) into a unique element of another set (the range of the function) Elements of the

domain and range are objects in k world of discourse In addition to common arithmetic

functions such as addition and multiplication functions may define mappings between

nonnumeric domains

SYMBOLS and TERMS

Predicate calculus symbols include

1 Truth symbols true and false (these are reserved symbols)

2 Constant symbols are symbol expressions having the first character lowercase

3 Variable symbols are symbol expressions beginning with an uppercase character

14

Artificial Intelligence Chapter1Anurag Dixit

4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of atiry n followed by n terms

t1 t2 helliphellip tn enclosed in parentheses and separated by commas

A predicate calculus term is either a constant variable or function expression Thus a

predicate calculus term may be used to denote objects and properties in a problem

domain Examples of terms are

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n followed by n terms

t 1 t enclosed in parentheses and separated by commas

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the

predicate calculus These are the same logical connectives used in propositional calculus

A v - ---7 and =

When a variable appears as an argument in a sentence it refers to unspecified objects

in the domain First order (Section 222) predicate calculus includes two symbols the

variable quantifiers V and 3 that constrain the meaning of a sentence containing a

variable A quantifier is followed by a variable and a sentence such as

3 Y friends(Y peter)

15

Artificial Intelligence Chapter1Anurag Dixit

V X likes(X ice_cream)

The universal quantifier v indicates that the sentence is true for all values of the

variable

In the example V X likes(X ice_cream) is true for all values in the domain of the

definition of X The existential quantifier 3 indicates that the sentence is true for at least

one value in the domain 3 Y friends(Y peter) is true if there is at least one object

indicated by Y that is a friend of peter

Sentences in the predicate calculus are defined inductivefy

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence

1 If 5 is a sentence then so is its negation --- 5

2 If s and 52 are sentences then so is their conjunction 5 j 1 52

3 If 81 and 82 are sentences then so is their disjunction s v 52

4 If 51 and Sz are sentences then so is their implication 51 ~ 82

LISP

Overview

bull Lisp stands for ldquoLISt Processrdquo

ndash Invented by John McCarthy (1958)

ndash Simple data structure (atoms and lists)

ndash Heavy use of recursion

ndash Prefix notation of expressions

ndash Interpretive language

bull Why Lisp

ndash It is the most widely used AI programming language

16

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 15: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

4 Function symbols are symbol expressions having the first character lowercase

Functions have an attached arity indicating the number of elements of the domain

mapped onto each element of the range

A function expression consists of a function constant of atiry n followed by n terms

t1 t2 helliphellip tn enclosed in parentheses and separated by commas

A predicate calculus term is either a constant variable or function expression Thus a

predicate calculus term may be used to denote objects and properties in a problem

domain Examples of terms are

PREDICATES and ATOMIC SENTENCES

Predicate symbols are symbols beginning with a lowercase letter

Predicates have an associated positive integer referred to as the arity or argument

number for the predicate Predicates with the same name but different arities are

considered distinct

An atomic sentence is a predicate constant of arity n followed by n terms

t 1 t enclosed in parentheses and separated by commas

The truth values true and false are also atomic sentences

Atomic sentences are also called atomic expressions atoms or propositions

We may combine atomic sentences using logical operators to form sentences in the

predicate calculus These are the same logical connectives used in propositional calculus

A v - ---7 and =

When a variable appears as an argument in a sentence it refers to unspecified objects

in the domain First order (Section 222) predicate calculus includes two symbols the

variable quantifiers V and 3 that constrain the meaning of a sentence containing a

variable A quantifier is followed by a variable and a sentence such as

3 Y friends(Y peter)

15

Artificial Intelligence Chapter1Anurag Dixit

V X likes(X ice_cream)

The universal quantifier v indicates that the sentence is true for all values of the

variable

In the example V X likes(X ice_cream) is true for all values in the domain of the

definition of X The existential quantifier 3 indicates that the sentence is true for at least

one value in the domain 3 Y friends(Y peter) is true if there is at least one object

indicated by Y that is a friend of peter

Sentences in the predicate calculus are defined inductivefy

PREDICATE CALCULUS SENTENCES

Every atomic sentence is a sentence

1 If 5 is a sentence then so is its negation --- 5

2 If s and 52 are sentences then so is their conjunction 5 j 1 52

3 If 81 and 82 are sentences then so is their disjunction s v 52

4 If 51 and Sz are sentences then so is their implication 51 ~ 82

LISP

Overview

bull Lisp stands for ldquoLISt Processrdquo

ndash Invented by John McCarthy (1958)

ndash Simple data structure (atoms and lists)

ndash Heavy use of recursion

ndash Prefix notation of expressions

ndash Interpretive language

bull Why Lisp

ndash It is the most widely used AI programming language

16

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

Model-based reflex agent, Python version:

state = True    # the world model
action = False  # latest action

def model_reflex_agent(percept):
    global state, action   # persist the model and the last action across calls
    state = update_state(state, action, percept)
    rule = match(state)
    action = decision(rule)
    return action

Goal-Driven Agents

The agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal).

In some cases the goal is easy to achieve. In others it involves planning, sifting through a search space for possible solutions, and developing a strategy.

Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.

Choose actions so as to achieve a (given or computed) goal. A goal is a description of a desirable situation.
Keeping track of the current state is often not enough - we need to add goals to decide which situations are good.
Deliberative instead of reactive.
May have to consider long sequences of possible actions before deciding whether the goal is achieved - this involves consideration of the future ("what will happen if I do ...?").

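The following is a minimal Python sketch of this idea, not from the original notes: a goal-driven agent that searches forward through hypothetical action sequences until it finds one whose resulting state satisfies the goal test. The helper names (ACTIONS, successor, goal_test) are assumptions introduced only for illustration.

from collections import deque

ACTIONS = ["Left", "Right", "Suck"]   # hypothetical action set

def plan(start_state, goal_test, successor, max_depth=10):
    # Breadth-first search over action sequences; returns the first plan whose
    # final state satisfies goal_test, or None if no plan is found within max_depth.
    frontier = deque([(start_state, [])])
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        if len(path) < max_depth:
            for action in ACTIONS:
                frontier.append((successor(state, action), path + [action]))
    return None

def goal_driven_agent(percept, goal_test, successor):
    # Simplification: assume the percept fully reveals the current state.
    plan_found = plan(percept, goal_test, successor)
    return plan_found[0] if plan_found else "NoOp"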

Learning Agents

Agents capable of acquiring new competence through observations and actions.
Components:
learning element (modifies the performance element)
performance element (selects actions)
feedback element (critic)
exploration element (problem generator)

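As a rough illustration (not part of the original notes), the four components listed above can be wired together as follows; each element is a placeholder callable supplied by the user of the class.

class LearningAgent:
    # Skeleton wiring of the four learning-agent components.
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects actions
        self.learning_element = learning_element         # modifies the performance element
        self.critic = critic                             # provides feedback on outcomes
        self.problem_generator = problem_generator       # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        # Let the learning element revise the action-selection behaviour.
        self.performance_element = self.learning_element(self.performance_element, feedback)
        # The problem generator may suggest an exploratory action instead of the best-known one.
        exploratory_action = self.problem_generator(percept)
        if exploratory_action is not None:
            return exploratory_action
        return self.performance_element(percept)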
Table-driven agents

• Table lookup of percept-action pairs, mapping from every possible perceived state to the optimal action for that state.
• Problems:
– Too big to generate and to store (chess has about 10^120 states, for example).
– No knowledge of non-perceptual parts of the current state.
– Not adaptive to changes in the environment; the entire table must be updated if changes occur.
– Looping: can't make actions conditional on previous actions/states.

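A quick back-of-the-envelope check of how fast such lookup tables grow (illustrative only, not from the original notes): with n environment variables of t values each, the table needs t^n entries.

def table_size(t, n):
    # Number of distinct environment states = number of entries the table must hold.
    return t ** n

print(table_size(2, 10))    # 1024 entries for 10 binary variables
print(table_size(10, 20))   # 10**20 entries - already far beyond practical storage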
Simple reflex agents


• Rule-based reasoning to map from percepts to the optimal action; each rule handles a collection of perceived states.
• Problems:
– Still usually too big to generate and to store.
– Still no knowledge of non-perceptual parts of state.
– Still not adaptive to changes in the environment; the collection of rules must be updated if changes occur.
– Still can't make actions conditional on previous state.

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?
• A goal specifies only a crude distinction between a happy and an unhappy state, but we often need a more general performance measure that describes the "degree of happiness".
• Utility function U: State → Reals, indicating a measure of success or happiness when at a given state.
• Allows decisions comparing the choice between conflicting goals, and the choice between likelihood of success and importance of a goal (if achievement is uncertain).

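A minimal sketch of utility-based action selection (illustrative, not from the original notes): pick the action whose predicted successor state has the highest utility. The successor and utility functions are assumed to be supplied by the designer.

def utility_based_agent(state, actions, successor, utility):
    # Choose the action whose predicted next state maximizes U: State -> Reals.
    return max(actions, key=lambda action: utility(successor(state, action)))

# Toy example: a one-dimensional world where position 10 is most desirable.
actions = [-1, +1]
successor = lambda s, a: s + a
utility = lambda s: -abs(10 - s)              # closer to 10 means higher utility
print(utility_based_agent(3, actions, successor, utility))   # -> 1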

Agents with memory

• Encode an "internal state" of the world to remember the past as contained in earlier percepts.
• Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept.
• Requires the ability to represent change in the world; one possibility is to represent just the latest state, but then the agent can't reason about hypothetical courses of action.


Other Types of Agents

Temporally continuous - a continuously running process.
Communicative agent - exchanges information with other agents to complete its task.
Mobile agent - capable of moving from one machine to another (or from one environment to another).
Flexible agent - one whose actions are not scripted.
Character - an agent with conversation skills, personality, and even emotional state.

Page 16: Artificial Intelligence


∀ X likes(X, ice_cream)

The universal quantifier ∀ indicates that the sentence is true for all values of the variable. In the example, ∀ X likes(X, ice_cream) is true for all values in the domain of the definition of X. The existential quantifier ∃ indicates that the sentence is true for at least one value in the domain: ∃ Y friends(Y, peter) is true if there is at least one object, indicated by Y, that is a friend of peter.

Sentences in the predicate calculus are defined inductively.

PREDICATE CALCULUS SENTENCES

1. Every atomic sentence is a sentence.
2. If s is a sentence, then so is its negation, ¬s.
3. If s1 and s2 are sentences, then so is their conjunction, s1 ∧ s2.
4. If s1 and s2 are sentences, then so is their disjunction, s1 ∨ s2.
5. If s1 and s2 are sentences, then so is their implication, s1 → s2.
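Over a finite domain, both quantifiers can be checked mechanically. The small Python sketch below (not from the original notes) evaluates ∀ with all() and ∃ with any(), using made-up predicate tables for illustration.

domain = ["peter", "mary", "john"]            # a small finite domain of objects

likes_ice_cream = {"peter": True, "mary": True, "john": True}
friend_of_peter = {"peter": False, "mary": True, "john": False}

# ∀ X likes(X, ice_cream): every object in the domain must satisfy the predicate
forall_likes = all(likes_ice_cream[x] for x in domain)

# ∃ Y friends(Y, peter): at least one object must satisfy the predicate
exists_friend = any(friend_of_peter[y] for y in domain)

print(forall_likes, exists_friend)   # True True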

LISP

Overview

• Lisp stands for "LISt Processing"
– Invented by John McCarthy (1958)
– Simple data structure (atoms and lists)
– Heavy use of recursion
– Prefix notation of expressions
– Interpretive language

• Why Lisp?
– It is the most widely used AI programming language.
– It is good for writing production software.
– It is especially good for prototyping.
– It has got lots of features other languages don't.
– You can write new programs and extend old programs really, really quickly in Lisp.

Table of Contents

Symbols, Numbers, Conses, Lists, Functions, Printing, Forms and the Top-Level Loop, Special Forms, Binding, Dynamic Scoping, Arrays, Strings, Structures, Setf, Booleans and Conditionals, Iteration, Non-local Exits, Funcall, Apply, and Mapcar, Lambda, Sorting, Equality, Some Useful List Functions, Getting Started with Emacs, Further Information

Symbols


A symbol is just a string of characters. There are restrictions on what you can include in a symbol and what the first character can be, but as long as you stick to letters, digits, and hyphens you'll be safe. (Except that if you use only digits and possibly an initial hyphen, LISP will think you typed an integer rather than a symbol.) Some examples of symbols: a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow. (Things after a > prompt are what you type to the LISP interpreter, while other things are what the LISP interpreter prints back to you. The ";" is LISP's comment character: everything from a ";" to the end of the line is ignored.)

> (setq a 5)        ; store a number as the value of a symbol
5
> a                 ; take the value of a symbol
5
> (let ((a 6)) a)   ; bind the value of a symbol temporarily to 6
6
> a                 ; the value returns to 5 once the let is finished
5
> (+ a 6)           ; use the value of a symbol as an argument to a function
11
> b                 ; try to take the value of a symbol which has no value
Error: Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols, t and nil. The value of t is defined always to be t, and the value of nil is defined always to be nil. LISP uses t and nil to represent true and false. An example of this use is in the if statement, described more fully later:

> (if t 5 6)
5
> (if nil 5 6)
6
> (if 4 5 6)
5

keyword

The last example is odd but correct: nil means false, and anything else means true. (Unless we have a reason to do otherwise, we use t to mean true, just for the sake of clarity.)

Symbols like t and nil are called self-evaluating symbols, because they evaluate to themselves. There is a whole class of self-evaluating symbols called keywords; any symbol whose name starts with a colon is a keyword. (See below for some uses for keywords.) Some examples:

> :this-is-a-keyword
:THIS-IS-A-KEYWORD
> :so-is-this
:SO-IS-THIS
> :me-too
:ME-TOO

Numbers

An integer is a string of digits, optionally preceded by + or -. A real number looks like an integer, except that it has a decimal point and optionally can be written in scientific notation. A rational looks like two integers with a / between them. LISP supports complex numbers, which are written #c(r i) (where r is the real part and i is the imaginary part). A number is any of the above. Here are some numbers: 5 17 -34 +6 3.1415 1.722e-15 #c(1.722e-15 0.75)

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos, tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument. +, -, *, and / return a number according to type contagion: an integer plus a rational is a rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are some examples:

> (+ 3 3/4)     ; type contagion
15/4
> (exp 1)       ; e
2.7182817
> (exp 3)       ; e*e*e
20.085537
> (expt 3 4.2)  ; exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))   ; the fns + - * / all accept multiple arguments
738

There is no limit to the absolute value of an integer, except the memory size of your computer. Be warned that computations with bignums (as large integers are called) can be slow. (So can computations with rationals, especially compared to the corresponding computations with small integers or floats.)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either perception, reasoning, or actuation.

• Perception
– computer vision
– natural language processing

• Reasoning (i.e. problem solving): mapping from percepts to actuators
– automated reasoning
– knowledge representation
– search and optimization
– decision/game theory
– machine learning

• Actuation
– robotics
– softbotics

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities. A related (but not a necessary) goal is to understand intelligent entities, and perhaps even to understand and engineer human intelligence (strong AI).

Modern AI can be characterized as the engineering of rational agents. An agent is an entity that (i) perceives, (ii) reasons, and (iii) acts. In computational terms, that which is perceived is an input, to reason is to compute, and to act is to output the result of computation. Typically an agent is equipped with objectives; a rational agent is one that acts optimally with respect to its objectives. Agents are often distinguished from typical computational processes by their autonomy: they operate without direct human intervention. In addition, agents are reactive (they perceive their environments and attempt to respond in a timely manner to changing conditions) and proactive (their behavior is goal-directed rather than simply response-driven).

Autonomous agents may be rule-based, goal-based, or utility-based. Rule-based agents operate according to hard-coded sets of rules, like ELIZA. A goal-based agent acts so as to achieve its goals by planning a path from its current state to a goal state, like GPS or theorem provers. Utility-based agents distinguish between goals based on utilities that are associated with goal states.

Agent      Sensors        Actuators
Human      Senses         Arms, Legs
Robotic    Cameras        Motors, Wheels
Software   Bit Strings    Bit Strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows:

• Deterministic vs. Nondeterministic: is the next state predictable (e.g. chess), or is there uncertainty about state transitions (e.g. backgammon)?
• Discrete vs. Continuous: can the environment be described in discrete terms (e.g. chess), or is the environment continuous (e.g. driving)?
• Static vs. Dynamic: is the environment static (e.g. chess), or can it change while the agent is reasoning about its plan of action (e.g. driving)?
• Sequential vs. One-shot: does the agent need to reason about the future impact of its immediate actions (e.g. chess), or can it treat each action independently (e.g. Rochambeau)?
• Single agent vs. Multiagent: can we assume the agent operates alone in its environment, or need it explicitly reason about the actions of other agents (e.g. chess, backgammon, Rochambeau, driving)?

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors.

An agent can be seen as a mapping between percept sequences and actions:

Agent: Percept sequence → Action

The less an agent relies on its built-in knowledge, as opposed to the current percept sequence, the more autonomous it is. A rational agent is an agent whose acts try to maximize some performance measure.

Example: Vacuum cleaner

Percepts: location and contents, e.g. [A, Dirty]
Actions: Left, Right, Suck, NoOp

function REFLEX-VACUUM-AGENT([location, status]) returns action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

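A direct Python transcription of this pseudocode (a sketch assuming the two-square world A/B from the example):

def reflex_vacuum_agent(location, status):
    # Reflex rule: clean a dirty square, otherwise move to the other square.
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
    return "NoOp"

print(reflex_vacuum_agent("A", "Dirty"))   # Suck
print(reflex_vacuum_agent("A", "Clean"))   # Right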
Intelligent Agents

Agent: an entity in a program or environment capable of generating action.
An agent uses perception of the environment to make decisions about actions to take. The perception capability is usually called a sensor.
The actions can depend on the most recent perception or on the entire history (percept sequence).

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into an action.
The function is implemented as the agent program. The part of the agent taking an action is called an actuator.

environment -> sensors -> agent function -> actuators -> environment

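The cycle above can be written as a small driver loop; the sketch below is not from the original notes and assumes user-supplied sense, agent_function, and actuate callables plus a fixed number of simulation steps.

def run(environment, sense, agent_function, actuate, steps=10):
    # Drive the cycle: environment -> sensors -> agent function -> actuators -> environment.
    for _ in range(steps):
        percept = sense(environment)                 # sensors
        action = agent_function(percept)             # agent function chooses an action
        environment = actuate(environment, action)   # actuators change the environment
    return environment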

Agent Classification

(a) Table-driven agents
– use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.

(b) Simple reflex agents
– are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.

(c) Agents with memory
– have internal state, which is used to keep track of past states of the world.

(d) Agents with goals
– are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.

(e) Utility-based agents
– base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

A rational agent is one that can take the right decision in every situation.
Performance measure: a set of criteria / test bed for the success of the agent's behavior.
The performance measures should be based on the desired effect of the agent on the environment.

Rationality

The agent's rational behavior depends on:
the performance measure that defines success
the agent's knowledge of the environment
the actions that it is capable of performing
the current sequence of perceptions
Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice.

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 17: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

ndash It is good for writing production software

ndash It is especially good for prototyping

ndash It has got lots of features other languages donrsquot

ndash You can write new programs and extend old programs really really

quickly in Lisp

Table of Contents

Symbols Numbers Conses Lists Functions Printing Forms and the Top-level Loop Special Forms Binding Dynamic Scoping Arrays Strings Structures Setf Booleans and Conditionals Iteration Non-local Exits Funcall Apply and Mapcar Lambda Sorting Equality Some Useful List Functions Getting Started with Emacs Further Information

Symbols

17

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 18: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

A symbol is just a string of characters There are restrictions on what you can include in a symbol and what the first character can be but as long as you stick to letters digits and hyphens youll be safe (Except that if you use only digits and possibly an initial hyphen LISP will think you typed an integer rather than a symbol) Some examples of symbols a b c1 foo bar baaz-quux-garply

Some things you can do with symbols follow (Things in bold after a gt prompt are what

you type to the LISP interpreter while other things are what the LISP interpreter prints

back to you The is LISPs comment character everything from a to the end of line is

ignored)

gt (setq a 5) store a number as the value of a symbol5gt a take the value of a symbol5gt (let ((a 6)) a) bind the value of a symbol temporarily to 66gt a the value returns to 5 once the let is finished5gt (+ a 6) use the value of a symbol as an argument to a function11gt b try to take the value of a symbol which has no valueError Attempt to take the value of the unbound symbol B

t and nil

There are two special symbols t and nil The value of t is defined always to be t and the value of nil is defined always to be nil LISP uses t and nil to represent true and false An example of this use is in the if statement described more fully later gt (if t 5 6)5gt (if nil 5 6)6gt (if 4 5 6)5

keyword

The last example is odd but correct nil means false and anything else means true (Unless we have a reason to do otherwise we use t to mean true just for the sake of

18

Artificial Intelligence Chapter1Anurag Dixit

clarity) Symbols like t and nil are called self-evaluating symbols because they evaluate to themselves There is a whole class of self-evaluating symbols called keywords any symbol whose name starts with a colon is a keyword (See below for some uses for keywords) Some examples gt this-is-a-keywordTHIS-IS-A-KEYWORDgt so-is-thisSO-IS-THISgt me-tooME-TOO

Numbers

An integer is a string of digits optionally preceded by + or - A real number looks like an integer except that it has a decimal point and optionally can be written in scientific notation A rational looks like two integers with a between them LISP supports complex numbers which are written c(r i) (where r is the real part and i is the imaginary part) A number is any of the above Here are some numbers 5 17 -34 +6 31415 1722e-15 c(1722e-15 075)

The standard arithmetic functions are all available + - floor ceiling mod sin cos tan sqrt exp expt and so forth All of them accept any kind of number as an argument + - and return a number according to type contagion an integer plus a rational is a rational a rational plus a real is a real and a real plus a complex is a complex Here are some examples gt (+ 3 34) type contagion154 gt (exp 1) e27182817 gt (exp 3) eee20085537 gt (expt 3 42) exponent with a base other than e10090418gt (+ 5 6 7 ( 8 9 10)) the fns +- all accept multiple arguments738

There is no limit to the absolute value of an integer except the memory size of your

computer Be warned that computations with bignums (as large integers are called) can

be slow (So can computations with rationals especially compared to the corresponding

computations with small integers or floats)

19

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

• Encode the "internal state" of the world in order to remember the past, as contained in earlier percepts.
• Needed because sensors do not usually give the entire state of the world at each input, so perception of the environment is captured over time. "State" is used to encode different world states that generate the same immediate percept.
• Requires the ability to represent change in the world. One possibility is to represent just the latest state, but then the agent can't reason about hypothetical courses of action.


Other Types of Agents

Temporally continuous: a continuously running process.
Communicative agent: exchanges information with other agents to complete its task.
Mobile agent: capable of moving from one machine to another (or from one environment to another).
Flexible agent: an agent whose actions are not scripted.
Character: an agent with conversation skills, personality, and even emotional state.


Symbols like t and nil are called self-evaluating symbols, because they evaluate to themselves. There is a whole class of self-evaluating symbols called keywords; any symbol whose name starts with a colon is a keyword. Some examples:

> :this-is-a-keyword
:THIS-IS-A-KEYWORD
> :so-is-this
:SO-IS-THIS
> :me-too
:ME-TOO

Numbers

An integer is a string of digits, optionally preceded by + or -. A real number looks like an integer, except that it has a decimal point and can optionally be written in scientific notation. A rational looks like two integers with a / between them. LISP supports complex numbers, which are written #c(r i) (where r is the real part and i is the imaginary part). A number is any of the above. Here are some numbers:

5   17   -34   +6   3.1415   1.722e-15   #c(1.722e-15 0.75)

The standard arithmetic functions are all available: +, -, *, /, floor, ceiling, mod, sin, cos, tan, sqrt, exp, expt, and so forth. All of them accept any kind of number as an argument. +, -, *, and / return a number according to type contagion: an integer plus a rational is a rational, a rational plus a real is a real, and a real plus a complex is a complex. Here are some examples:

> (+ 3 3/4)              ; type contagion
15/4
> (exp 1)                ; e
2.7182817
> (exp 3)                ; e*e*e
20.085537
> (expt 3 4.2)           ; an exponent with a base other than e
100.90418
> (+ 5 6 7 (* 8 9 10))   ; the functions + - * / all accept multiple arguments
738

There is no limit to the absolute value of an integer, except the memory size of your computer. Be warned that computations with bignums (as large integers are called) can be slow. (So can computations with rationals, especially compared to the corresponding computations with small integers or floats.)


Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either perception, reasoning, or actuation:

• Perception
  – computer vision
  – natural language processing
• Reasoning (i.e., problem solving): mapping from percepts to actuators
  – automated reasoning
  – knowledge representation
  – search and optimization
  – decision/game theory
  – machine learning
• Actuation
  – robotics
  – softbotics

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities. A related (but not a necessary) goal is to understand intelligent entities, and perhaps even to understand and engineer human intelligence (strong AI).

Modern AI can be characterized as the engineering of rational agents. An agent is an entity that (i) perceives, (ii) reasons, and (iii) acts. In computational terms, that which is perceived is an input, to reason is to compute, and to act is to output the result of computation. Typically an agent is equipped with objectives; a rational agent is one that acts optimally with respect to its objectives. Agents are often distinguished from typical computational processes by their autonomy: they operate without direct human intervention. In addition, agents are reactive (they perceive their environments and attempt to respond in a timely manner to changing conditions) and proactive (their behavior is goal directed rather than simply response-driven).

Autonomous agents may be rule-based, goal-based, or utility-based. Rule-based agents operate according to hard-coded sets of rules, like ELIZA. A goal-based agent acts so as to achieve its goals by planning a path from its current state to a goal state, like GPS or theorem provers. Utility-based agents distinguish between goals based on utilities that are associated with goal states.

Agent       Sensors        Actuators
Human       Senses         Arms, Legs
Robotic     Cameras        Motors, Wheels
Software    Bit Strings    Bit Strings

Figure: Intelligent Agent


Agent environments may be at least partially characterized as follows:

• Deterministic vs. Nondeterministic: is the next state predictable (e.g., chess), or is there uncertainty about state transitions (e.g., backgammon)?
• Discrete vs. Continuous: can the environment be described in discrete terms (e.g., chess), or is the environment continuous (e.g., driving)?
• Static vs. Dynamic: is the environment static (e.g., chess), or can it change while the agent is reasoning about its plan of action (e.g., driving)?
• Sequential vs. One-shot: does the agent need to reason about the future impact of its immediate actions (e.g., chess), or can it treat each action independently (e.g., Rochambeau)?
• Single agent vs. Multiagent: can we assume the agent operates alone in its environment, or must it explicitly reason about the actions of other agents (e.g., chess, backgammon, Rochambeau, driving)?

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors.


An agent can be seen as a mapping between percept sequences and actions: Agent: Percept → Action. The less an agent relies on its built-in knowledge, as opposed to the current percept sequence, the more autonomous it is. A rational agent is an agent whose acts try to maximize some performance measure.

Example: Vacuum-cleaner agent

Percepts: location and contents, e.g., [A, Dirty]. Actions: Left, Right, Suck, NoOp.

function REFLEX-VACUUM-AGENT([location, status]) returns action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
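The same reflex vacuum agent as runnable Python, a direct transliteration of the pseudocode above (encoding the percept as a (location, status) tuple is an assumption for illustration):

def reflex_vacuum_agent(percept):
    location, status = percept        # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left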

Intelligent Agents

Agent: an entity in a program or environment capable of generating action. An agent uses perception of the environment to make decisions about the actions to take. The perception capability is usually called a sensor. The actions can depend on the most recent perception or on the entire history (percept sequence).

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into an action. The function is implemented as the agent program. The part of the agent that takes an action is called an actuator.

environment -> sensors -> agent function -> actuators -> environment
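This cycle can be written down directly as a small driver loop. The environment interface (get_percept / apply_action) is a hypothetical stand-in used only to make the shape of the loop concrete; it is not an API from the original notes.

def run(environment, agent_function, steps=10):
    """Drive the environment -> sensors -> agent function -> actuators -> environment cycle."""
    for _ in range(steps):
        percept = environment.get_percept()      # sensors
        action = agent_function(percept)         # agent function (the agent program)
        environment.apply_action(action)         # actuators change the environment

# Example usage: run(vacuum_world, reflex_vacuum_agent), for an environment object
# that exposes get_percept() and apply_action().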


Agent Classification

(a) Table-driven agents
    – use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.
(b) Simple reflex agents
    – are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.
(c) Agents with memory
    – have internal state, which is used to keep track of past states of the world.
(d) Agents with goals
    – are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.
(e) Utility-based agents
    – base their decisions on classic axiomatic utility theory in order to act rationally.

Rational Agent

A rational agent is one that can take the right decision in every situation.
Performance measure: a set of criteria / a test bed for the success of the agent's behavior. The performance measures should be based on the desired effect of the agent on the environment.

Rationality

The agent's rational behavior depends on: the performance measure that defines success, the agent's knowledge of the environment, the actions that it is capable of performing, and the current sequence of perceptions.
Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions. This is not possible in practice.
An environment can sometimes be completely known in advance.
Exploration: sometimes an agent must perform an action to gather information (to increase perception).
Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment

Task environment: the problem that the agent is a solution to. Properties:

• Observable: fully or partially. A fully observable environment needs less representation.
• Deterministic or stochastic. Strategic: deterministic except for the actions of other agents.
• Episodic or sequential. Sequential: future actions depend on the previous ones. Episodic: individual, unrelated tasks for the agent to solve.
• Static or dynamic.
• Discrete or continuous.
• Single agent or multi-agent. Multiple agents can be competitive or cooperative.

More Definitions of Agents

"An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher and Spohrer, 94)

"Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." (IBM)

"Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." (Hayes-Roth, 94)

Agent vs Program

Size: an agent is usually smaller than a program.
Purpose: an agent has a specific purpose, while programs are multi-functional.
Persistence: an agent's life span is not entirely dependent on a user launching and quitting it.
Autonomy: an agent doesn't need the user's input to function.

Simple Agents

Table-driven agents: the agent function consists of a lookup table of actions to be taken for every possible state of the environment.
If the environment has n variables, each with t possible states, then the table size is t^n. This only works for a small number of possible states of the environment.
Simple reflex agents: decide on the action to take based only on the current perception, and not on the history of perceptions. They are based on the condition-action rule (if (condition) action). This works if the environment is fully observable.
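As a quick illustration of how fast t^n grows (the specific numbers are just an example, not from the original notes): an environment with n = 10 variables of t = 3 states each already needs a table of 3^10 = 59,049 entries, and n = 20 such variables pushes it to 3^20, roughly 3.5 billion entries. This is why the table-driven approach only scales to tiny environments.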

Table-driven agent, Lisp version:

(defun table_agent (percept)
  (let ((action t))
    (push percept percepts)
    (setq action (lookup percepts table))
    action))

Simple reflex agent, Lisp version:

(defun reflex_agent (percept)
  (let ((rule t) (state t) (action t))
    (setq state (interpret percept))
    (setq rule (match state))
    (setq action (decision rule))
    action))

Table-driven agent, Python version:

percepts = []
table = {}

def table_agent(percept):
    percepts.append(percept)
    action = lookup(percepts, table)
    return action

Simple reflex agent, Python version:

def reflex_agent(percept):
    state = interpret(percept)
    rule = match(state)
    action = decision(rule)
    return action
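For the table-driven version, the table has to be keyed by the entire percept sequence seen so far. A tiny concrete sketch of what that looks like for the two-square vacuum world (the lookup helper and the example entries are assumptions for illustration):

percepts = []

# One entry per possible percept *sequence*, which is why the table explodes.
table = {
    (("A", "Dirty"),):                 "Suck",
    (("A", "Clean"),):                 "Right",
    (("A", "Clean"), ("B", "Dirty")):  "Suck",
    (("A", "Clean"), ("B", "Clean")):  "NoOp",
    # ... and so on, for every longer history
}

def lookup(percept_sequence, table):
    return table.get(tuple(percept_sequence), "NoOp")   # default when the history is missing

def table_agent(percept):
    percepts.append(percept)
    return lookup(percepts, table)

print(table_agent(("A", "Clean")))   # -> Right
print(table_agent(("B", "Dirty")))   # -> Suck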

Model-Based Reflex Agents

If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe.
This usually requires an internal representation of the world (an internal state). Since this representation is a model of the world, we call this a model-based agent.

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 20: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

Subfields of AI

The subfields of artificial intelligence can be classified in terms of their role in either

perception reasoning or actuation

bull Perception

ndash computer vision

ndash natural language processing

bull Reasoning (ie problem solving) mapping from percepts to actuators

ndash automated reasoning

ndash knowledge representation

ndash search and optimization

ndash decisiongame theory

ndash machine learning

bull Actuation

ndash robotics

ndash softbotics

Intelligent Agents

Intelligent Agents

The primary goal of (weak) artificial intelligence is to build intelligent entities A related

(but not a necessary) goal is to understand intelligent entities and perhaps even to

understand and engineer human intelligence (strong AI)

Modern AI can be characterized as the engineering of rational agents An agent is an

entity that (i) perceives (ii) reasons and (iii) acts In computational terms that which is

perceived is an input to reason is to compute to act is to output the result of

computation Typically an agent is equipped with objectives A rational agent is one that

acts optimally with respect to its objectives Agents are often distinguished from typical

20

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 21: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

computational processes by their autonomymdashthey operate without direct human

intervention In addition agents are reactivemdashthey perceive their environments and

attempt to respond in a timely manner to changing conditionsmdashand proactivemdashtheir

behavior is goal directed rather than simply response-driven

Autonomous agents may be rule-based goal-based or utility-based Rule-based agents

operate according to hard-coded sets of rules like ELIZA A goal-based agent acts so as

to achieve its goals by planning a path from its current state to a goal state like GPS or

theorem provers Utility-based agents distinguish between goals based on utilities that

are associated with goal states

Agent Sensors Actuators

Human

Robotic

Software

Senses

Cameras

Bit Strings

Arms Legs

Motors Wheels

Bit Strings

Figure Intelligent Agent

21

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 22: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

Agent environments may be at least partially characterized as follows

bull Deterministic vs Nondeterministic is the next state predictable (eg chess) or is there

uncertainty about state transitions (eg backgammon)

bull Discrete vs Continuous can the environment be described in discrete terms (eg

chess) or is the environment continuous (eg driving)

bull Static vs Dynamic is the environment static (eg chess) or can it change while the

agent is reasoning about its plan of action (eg driving)

bull Sequential vs One-shot does the agent need to reason about the future impact of its

immediate actions (eg chess) or can it treat each action independently (eg

Rochambeau)

bull Single agent vs Multiagent can we assume the agent operating alone in its

environment or need it explicitly reason about the actions of other agents (eg chess

backgammon Rochambeau driving)

An agent is a system that perceives its environment through sensors and acts upon that environment through effectors

22

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agents rational behavior depends on the performance measure that defines success the agents knowledge of the environment the action that it is capable of performing the current sequence of perceptions Definition for every possible percept sequence the agent is expected to take an

action that will maximize its performance measure

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions Not possible in practice

An environment can sometimes be completely known in advance Exploration sometimes an agent must perform an action to gather information (to

increase perception) Autonomy the capacity to compensate for partial or incorrect prior knowledge

(usually by learning)

Environment

Task environment - the problem that the agent is a solution to Properties Observable - fully or partially A fully observable environment needs less representation Deterministic or stochastic Strategic -deterministic except for the actions of other agents

25

Artificial Intelligence Chapter1Anurag Dixit

Environment

Episodic or sequential Sequential - future actions depend on the previous ones Episodic - individual unrelated tasks for the agent to solve Static - dynamic Discrete - continuous Single agent - multi agent Multiple agents can be competitive or cooperative

More Definitions of Agents

An agent is a persistent software entity dedicated to a specific purpose (Smith Cypher and Spohrer 94 )

Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy and in so doing employ some knowledge or representation of the users goals or desires (IBM)

Intelligent agents continuously perform three functions perception of dynamic conditions in the environment action to affect conditions in the environment and reasoning to interpret perceptions solve problems draw inferences and determine actions (Hayes-Roth 94)

Agent vs Program

Size - an agent is usually smaller than a program Purpose - an agent has a specific purpose while programs are multi-functional Persistence - an agents life span is not entirely dependent on a user launching and

quitting it Autonomy - an agent doesnt need the users input to function

Simple Agents

Table-driven agents the function consists in a lookup table of actions to be taken for every possible state of the environment

If the environment has n variables each with t possible states then the table size is tn

Only works for a small number of possible states for the environment Simple reflex agents deciding on the action to take based only on the current

perception and not on the history of perceptions Based on the condition-action rule

(if (condition) action) Works if the environment is fully observable

Table driven Lisp version(defun table_agent (percept)

26

Artificial Intelligence Chapter1Anurag Dixit

(let ((action t)) (push percept percepts) (setq action (lookup percepts table)) action))

(defun reflex_agent (percept) (let ((rule t) (state t) (action t)) (setq state (interpret percept)) (setq rule (match state)) (setq action (decision rule)) action))

Table drives Python versionpercepts = []table = def table_agent (percept) action = True perceptsappend(percept) action = lookup(percepts table) return action

def reflex_agent (percept) state = interpret(percept) rule = match(state) action = decision(rule) return action

Model-Based Reflex Agents

If the world is not fully observable the agent must remember observations about the parts of the environment it cannot currently observe

This usually requires an internal representation of the world (or internal state) Since this representation is a model of the world we call this model-based agent

Reflex agents Lisp version(setq state t) the world model(setq action nil) latest action(defun model_reflex_agent (percept) (let ((rule t)) (setq state (update_state state action percept)) (setq rule (match state)) (setq action (decision rule)) action))

27

Artificial Intelligence Chapter1Anurag Dixit

Reflex agents Python versionstate = True the world modelaction = False latest action

def model_reflex_agent (percept) state = update_state(state action percept) rule = match(state) action = decision(rule) return action

Goal-Driven Agents

The agent has a purpose and the action to be taken depends on the current state and on what it tries to accomplish (the goal)

In some cases the goal is easy to achieve In others it involves planning sifting through a search space for possible solutions developing a strategy

Utility-based agents the agent is aware of a utility function that estimates how close the current state is to the agents goal

Choose actions so as to achieve a (given or computed) goal A goal is a description of a desirable situation Keeping track of the current state is often not enough - need to add goals to decide

which situations are good Deliberative instead of reactive May have to consider long sequences of possible actions before deciding if goal is

achieved ndash involves consideration of the future ldquowhat will happen if I dordquo

28

Artificial Intelligence Chapter1Anurag Dixit

Learning Agents

Agents capable of acquiring new competence through observations and actions Components learning element (modifies the performance element) performance element (selects actions) feedback element (critic) exploration element (problem generator)

Table-driven agents

bull Table lookup of percept-action pairs mapping from every possible perceived state to the optimal action for that state

bull Problems ndash Too big to generate and to store (Chess has about 10120 states for

example) ndash No knowledge of non-perceptual parts of the current state ndash Not adaptive to changes in the environment requires entire table to be

updated if changes occur ndash Looping Canrsquot make actions conditional on previous actionsstates

Simple reflex agents

29

Artificial Intelligence Chapter1Anurag Dixit

bull Rule-based reasoning to map from percepts to optimal action each rule handles a collection of perceived states

bull Problems ndash Still usually too big to generate and to storendash Still no knowledge of non-perceptual parts of state ndash Still not adaptive to changes in the environment requires collection of

rules to be updated if changes occur ndash Still canrsquot make actions conditional on previous state

Utility-based agents

bull When there are multiple possible alternatives how to decide which one is best bull A goal specifies a crude distinction between a happy and unhappy state but often

need a more general performance measure that describes ldquodegree of happinessrdquobull Utility function U State reg Reals indicating a measure of success or happiness

when at a given statebull Allows decisions comparing choice between conflicting goals and choice

between likelihood of success and importance of goal (if achievement is uncertain)

30

Artificial Intelligence Chapter1Anurag Dixit

Agents with memory

bull Encode ldquointernal staterdquo of the world to remember the past as contained in earlier percepts

bull Needed because sensors do not usually give the entire state of the world at each input so perception of the environment is captured over time ldquoStaterdquo is used to encode different world states that generate the same immediate percept

bull Requires ability to represent change in the world one possibility is to represent just the latest state but then canrsquot reason about hypothetical courses of action

31

Artificial Intelligence Chapter1Anurag Dixit

Other Types of Agents

Temporarily continuous - a continuously running process Communicative agent - exchanging information with other agents to complete its

task Mobile agent - capable of moving from one machine to another one (or from one

environment to another) Flexible agent - whose actions are not scripted Character - an agent with conversation skills personality and even emotional

state

32

Artificial Intelligence Chapter1Anurag Dixit

33

  • Table of Contents
  • Symbols
  • Numbers
  • Intelligent Agents
  • Intelligent Agents
Page 23: Artificial Intelligence

Artificial Intelligence Chapter1Anurag Dixit

An agent can be seen as a mapping between percept sequences and actionsAgent Percept rarr ActionThe less an agents relies on its built-in knowledge as opposed to the current percept sequence the more autonomous it isA rational agent is an agent whose acts try to maximize some performance measure

Example Vacuum-cleaner

Percepts location and contents eg [ADirty]Actions Left Right Suck NoOp

function REFLEX-VACUUM-AGENT( [locationstatus]) returns

actionif status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Intelligent Agents

Agent entity in a program or environment capable of generating action An agent uses perception of the environment to make decisions about actions to

take The perception capability is usually called a sensor The actions can depend on the most recent perception or on the entire history

(percept sequence)

Agent Function

The agent function is a mathematical function that maps a sequence of perceptions into action

The function is implemented as the agent program The part of the agent taking an action is called an actuator environment -gt sensors -gt agent function -gt actuators -gt environment

23

Artificial Intelligence Chapter1Anurag Dixit

Agent Classification

(a)Table-driven agents

ndash use a percept sequenceaction table in memory to find the next action They are implemented by a (large) lookup table

(b) Simple reflex agents

ndash are based on condition-action rules implemented with an appropriate production system They are stateless devices which do not have memory of past world states

(c) Agents with memory

ndash have internal state which is used to keep track of past states of the world

(d) Agents with goals

ndash are agents that in addition to state information have goal information that describes desirable situations Agents of this kind take future events into consideration

(e) Utility-based agents

24

Artificial Intelligence Chapter1Anurag Dixit

ndash base their decisions on classic axiomatic utility theory in order to act rationally

Rational Agent

A rational agent is one that can take the right decision in every situation Performance measure a set of criteriatest bed for the success of the agents

behavior The performance measures should be based on the desired effect of the agent on

the environment

Rationality

The agent's rational behavior depends on:
• the performance measure that defines success
• the agent's knowledge of the environment
• the actions it is capable of performing
• the current sequence of percepts
Definition: for every possible percept sequence, the agent is expected to take an action that will maximize its performance measure.

Agent Autonomy

An agent is omniscient if it knows the actual outcome of its actions. Not possible in practice.
An environment can sometimes be completely known in advance.
Exploration: sometimes an agent must perform an action to gather information (to increase perception).
Autonomy: the capacity to compensate for partial or incorrect prior knowledge (usually by learning).

Environment

Task environment - the problem that the agent is a solution to.
Properties:
• Observable - fully or partially. A fully observable environment needs less internal representation.
• Deterministic or stochastic. Strategic - deterministic except for the actions of other agents.
• Episodic or sequential. Sequential - future actions depend on the previous ones; episodic - individual, unrelated tasks for the agent to solve.
• Static or dynamic.
• Discrete or continuous.
• Single agent or multi-agent. Multiple agents can be competitive or cooperative.

More Definitions of Agents

"An agent is a persistent software entity dedicated to a specific purpose." (Smith, Cypher and Spohrer, 94)

"Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." (IBM)

"Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions." (Hayes-Roth, 94)

Agent vs Program

Size - an agent is usually smaller than a program.
Purpose - an agent has a specific purpose, while programs are multi-functional.
Persistence - an agent's life span is not entirely dependent on a user launching and quitting it.
Autonomy - an agent doesn't need the user's input to function.

Simple Agents

Table-driven agents: the agent function consists of a lookup table of actions to be taken for every possible state of the environment.
If the environment has n variables, each with t possible states, then the table size is t^n. (For example, with n = 10 binary variables, t = 2, the table already needs 2^10 = 1024 entries.)
This only works for a small number of possible environment states.
Simple reflex agents: decide on the action to take based only on the current percept, not on the history of percepts. They are based on the condition-action rule (if (condition) action). This works if the environment is fully observable.

Table-driven and simple reflex agents, Lisp version (percepts, table and the helpers lookup, interpret, match and decision are assumed to be defined elsewhere):

    ;; Table-driven agent: push the percept onto the history and look the
    ;; whole percept sequence up in the table.
    (defun table_agent (percept)
      (let ((action t))
        (push percept percepts)
        (setq action (lookup percepts table))
        action))

    ;; Simple reflex agent: interpret the current percept, match a
    ;; condition-action rule, and return that rule's action.
    (defun reflex_agent (percept)
      (let ((rule t) (state t) (action t))
        (setq state (interpret percept))
        (setq rule (match state))
        (setq action (decision rule))
        action))

Table-driven and simple reflex agents, Python version:

    percepts = []   # percept history
    table = {}      # maps percept sequences to actions

    def table_agent(percept):
        percepts.append(percept)
        action = lookup(percepts, table)
        return action

    def reflex_agent(percept):
        state = interpret(percept)
        rule = match(state)
        action = decision(rule)
        return action
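To make the reflex skeleton concrete, here is one possible set of helper definitions for the vacuum world; these particular rules and representations are illustrative assumptions, not part of the original notes:

    # Illustrative vacuum-world helpers for the reflex_agent skeleton above.
    def interpret(percept):
        location, status = percept
        return (location, status)      # the state is just the normalized percept

    rules = {                          # condition -> action
        ("A", "Dirty"): "Suck",
        ("B", "Dirty"): "Suck",
        ("A", "Clean"): "Right",
        ("B", "Clean"): "Left",
    }

    def match(state):
        return rules[state]            # find the rule whose condition matches

    def decision(rule):
        return rule                    # here the matched rule already is the action

    # With these helpers, reflex_agent(("A", "Dirty")) returns "Suck".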

Model-Based Reflex Agents

If the world is not fully observable, the agent must remember observations about the parts of the environment it cannot currently observe.

This usually requires an internal representation of the world (an internal state). Since this representation is a model of the world, we call this a model-based agent.

Model-based reflex agent, Lisp version:

    (setq state t)     ; the world model
    (setq action nil)  ; latest action

    (defun model_reflex_agent (percept)
      (let ((rule t))
        (setq state (update_state state action percept))
        (setq rule (match state))
        (setq action (decision rule))
        action))

Model-based reflex agent, Python version:

    state = True    # the world model
    action = False  # latest action

    def model_reflex_agent(percept):
        global state, action                # keep the model between calls
        state = update_state(state, action, percept)
        rule = match(state)
        action = decision(rule)
        return action
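As one possible concretization of update_state for the vacuum world, the model can simply record the last observed status of each square. The dictionary representation below is an assumption for illustration; the rule-matching helpers would then inspect this model rather than a single percept:

    # Illustrative world-model update: remember the last observed status of
    # each location.
    def update_state(state, last_action, percept):
        model = {} if state is True else dict(state)   # start fresh on the first call
        location, status = percept
        model[location] = status                       # fold in the new observation
        return model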

Goal-Driven Agents

The agent has a purpose, and the action to be taken depends on the current state and on what it tries to accomplish (the goal).

In some cases the goal is easy to achieve; in others it involves planning, sifting through a search space for possible solutions, and developing a strategy.

Utility-based agents: the agent is aware of a utility function that estimates how close the current state is to the agent's goal.

• Choose actions so as to achieve a (given or computed) goal. A goal is a description of a desirable situation.
• Keeping track of the current state is often not enough - goals are needed to decide which situations are good.
• Deliberative instead of reactive.
• May have to consider long sequences of possible actions before deciding whether the goal is achieved - this involves consideration of the future: "what will happen if I do ...?"
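A minimal sketch of goal-driven action selection with one-step lookahead; predict, goal_test and the action list are problem-specific placeholders, not from the original notes:

    # Goal-driven agent: pick any action whose predicted result satisfies the
    # goal, otherwise fall back to the first available action.
    def goal_agent(state, actions, predict, goal_test):
        for action in actions:
            if goal_test(predict(state, action)):   # one-step lookahead
                return action
        return actions[0]                           # no goal-reaching action found

    # For the vacuum world, goal_test could check that every square is Clean.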


Learning Agents

Agents capable of acquiring new competence through observations and actions. Components:
• learning element (modifies the performance element)
• performance element (selects actions)
• feedback element (critic)
• exploration element (problem generator)
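A skeletal class showing how these four components might fit together; the class and method names are illustrative assumptions, not from the original notes:

    import random

    # Learning agent skeleton: the critic scores the latest percept, the
    # learning element updates the performance element, and the problem
    # generator occasionally proposes exploratory actions.
    class LearningAgent:
        def __init__(self, performance_element, critic, learn, explore_prob=0.1):
            self.performance_element = performance_element  # percept -> action
            self.critic = critic                            # percept -> feedback
            self.learn = learn                              # (element, feedback) -> element
            self.explore_prob = explore_prob

        def step(self, percept, actions):
            feedback = self.critic(percept)
            self.performance_element = self.learn(self.performance_element, feedback)
            if random.random() < self.explore_prob:         # problem generator
                return random.choice(actions)
            return self.performance_element(percept)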

Table-driven agents

• Table lookup of percept-action pairs, mapping from every possible perceived state to the optimal action for that state.
• Problems:
– Too big to generate and to store (chess has about 10^120 states, for example)
– No knowledge of non-perceptual parts of the current state
– Not adaptive to changes in the environment; requires the entire table to be updated if changes occur
– Looping: can't make actions conditional on previous actions/states

Simple reflex agents


• Rule-based reasoning is used to map from percepts to the optimal action; each rule handles a collection of perceived states.
• Problems:
– Still usually too big to generate and to store
– Still no knowledge of non-perceptual parts of the state
– Still not adaptive to changes in the environment; requires the collection of rules to be updated if changes occur
– Still can't make actions conditional on the previous state

Utility-based agents

• When there are multiple possible alternatives, how do we decide which one is best?
• A goal specifies a crude distinction between a happy and an unhappy state, but often a more general performance measure is needed that describes the "degree of happiness".
• Utility function U: State → Reals, indicating a measure of success or happiness when at a given state.
• Allows decisions comparing the choice between conflicting goals, and the choice between likelihood of success and importance of a goal (if achievement is uncertain).
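A minimal sketch of utility-based action selection, choosing the action whose predicted successor state has the highest utility; utility and predict are problem-specific placeholders, not from the original notes:

    # Utility-based choice: evaluate U on the predicted result of each action
    # and take the argmax.
    def utility_agent(state, actions, predict, utility):
        return max(actions, key=lambda a: utility(predict(state, a)))

    # For the vacuum world, utility could count the number of clean squares,
    # so the agent prefers actions that lead to cleaner states.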
