CS 363 Artificial Intelligence Week 1
1. Comparison between Human and Computers with respect to intelligent factors
Answer:
Intelligent Parameter                                   Human     Computer
Save data                                               Limited   Yes
Interaction (speech, audio, video, listening, etc.)     Yes       No
Reasoning                                               Yes       No
Common sense                                            Yes       No
Decision making                                         Yes       No
Learning ability                                        Yes       No
2. Define Artificial Intelligence
Artificial Intelligence: it is the study of how to make computers exhibit human intelligence.
Think humanly: reasoning, decision making, learning ability.
Think rationally: reason and act logically.
Act humanly: do things at the right moment, the way human beings do.
Act rationally: intelligent agents.
3. List the foundations of AI.
Answer: Mathematics, Philosophy, Psychology, Neuroscience, Computer Engineering, and Control Theory.
4. Write the history of Artificial Intelligence.
The gestation of artificial intelligence (1943–1955)
The birth of artificial intelligence (1956)
Early enthusiasm, great expectations (1952–1969)
A dose of reality (1966–1973)
Knowledge-based systems: The key to power? (1969–1979)
AI becomes an industry (1980–present)
5. Does AI have applications?
Answer:
Autonomous planning and scheduling of tasks
Beating Garry Kasparov in a chess match
Steering a driverless car
Understanding language
Robotic assistants in surgery
Monitoring trade in the stock market to see if insider trading is going on
6. Write the Goals of AI
To make computers more useful by letting them take over dangerous or tedious tasks from humans
Understand principles of human intelligence
7. Advantages and Disadvantages of Artificial Intelligence
The Advantages
more powerful and more useful computers
new and improved interfaces
solving new problems
better handling of information
relieves information overload
conversion of information into knowledge
The Disadvantages
increased costs
difficulty with software development - slow and expensive
few experienced programmers
few practical products have reached the market as yet.
8. AI techniques?
Answer:
Heuristics: a heuristic function, also called simply a heuristic, is a function that ranks alternatives in a search algorithm at each branching step
Pattern recognition: the field of pattern recognition is concerned with the automatic discovery of regularities in data through the use of computer algorithms, and with the use of these regularities to take actions such as classifying the data into different categories
Machine learning: Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
Week 2 & 3: Agents and Intelligent Functions
1. Define an agent and its functions, with examples.
Agent:
An agent is anything that can be viewed as:
perceiving its environment through sensors and
acting upon its environment through actuators
Examples:
• Humans.
• Robotics.
• Web search agent.
2. What are the sensors/percepts and actuators/actions of human and robot agents?
• Human agent:
Actions: lift a finger, turn left, walk, run, and carry an object
• Robotic agent:
Sensors: cameras and infrared range finders.
Actuators: hand motors and leg motors
3. Write Agent Function?
• a = F(p)
where p is the current percept, a is the action carried out, and F is the agent function
• F maps percepts to actions: F: P → A
where P is the set of all percepts, and A is the set of all actions
• In general, an action may depend on all percepts observed so far, not just the current percept, so agent function is redefined as:
• ak = F(p0, p1, p2, …, pk)
where p0, p1, p2, …, pk is the sequence of percepts observed to date, and ak is the resulting action carried out
• F now maps percept sequences to actions
F: P* → A
4. Write about Structure of Agent?
Answer:
• Agent = architecture + program
• architecture
• device with sensors and actuators
• e.g., A robotic car, a camera, a PC, …
• program
The implementation of the agent function that runs on the physical agent architecture to produce F
5. Explain Vacuum-cleaner world example by Intelligent Agent?
Answer:
Environment: two locations, squares A and B.
Percepts: location and content, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
Function:
function VACUUM-AGENT([location, content]) returns an action
if content == Dirty then return Suck
else if location == A then return Right
else if location == B then return Left
else return NoOp
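The vacuum-agent function above can be made runnable with a small simulation; the world dictionary and step loop below are an illustrative sketch, not part of the notes.

```python
# Direct translation of the VACUUM-AGENT function above.
def vacuum_agent(percept):
    location, content = percept
    if content == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"
    return "NoOp"

# Simulate a few steps in a world where both squares start dirty.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"
actions = []
for _ in range(4):
    action = vacuum_agent((location, world[location]))
    actions.append(action)
    if action == "Suck":
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
```

Starting in a dirty square A, the agent sucks, moves right, sucks, and moves left, leaving both squares clean.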
6. Write about Rational Agents?
Answer:
An agent that "does the right thing (action)".
• The rationality of an agent depends on 4 things:
The performance measure defining the agent’s degree of success
The percept sequence, the sequence of all the things perceived by the agent
The agent’s knowledge of the environment.
The actions that the agent can perform
• A rational agent, for each possible percept sequence, should do an action that maximizes its performance measure based on: the percept sequence and its built-in and acquired knowledge.
7. Rational agent design
To design an intelligent agent we need to specify the task environment (PEAS) as fully as possible:
• Sensors: keyboard (entry of symptoms, findings, patient's answers)
10. Environment Types
Answer:
Fully observable/partially: the agent’s sensors detect all aspects relevant to the choice of action.
Deterministic/stochastic: the next state is completely determined by the current state and the action selected by the agent;
Episodic/sequential: the agent’s experience is divided into “episodes”; the
Quality of the agent’s actions does not depend on previous
Episodes;
Static/dynamic: the environment remains unchanged while the agent is choosing an action;
Discrete/continuous: there are a limited number of distinct, clearly defined
Percepts and actions;
Single/multi-agent: only one agent operates in the environment.
11. Agent types
Answer:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Week 4: Uninformed Search Strategies with Solved Examples
1. Define Problem Solving and list its characteristics?
Answer:
• Problem solving is a process of generating solutions from observed or given data.
• A problem is characterized by: set of objects, set of states and set of rules (actions).
• A state is an abstract representation of the agent’s environment, OR combination of objects for agent configuration. It is specified by the values of all objects of interest in the problem.
• Most real world problems are solved by searching for the solution in the problem space.
• Problem space is set of all possible states generated by applying the possible actions on the states. It may contain one or more solutions.
• Solution is a sequence of actions from the start state to goal state.
2. Write the steps for problem solving agents?
Answer:
Goal formulation → Problem formulation → Search (run a search algorithm) → Execute the solution
3. Example problems formulation
Vacuum Cleaner World:
• Problem formulation:
• The agent is in one of two locations, each of which
4. Example: 8-puzzle
• Initial state? Any configuration of the tiles in the puzzle.
Integer sequences like: <7, 2, 4, 5, 0, 6, 8, 3, 1> as in the figure given
• actions? move blank left, right, up, down
Not all moves always available
• goal test? Does the current state = goal state (given)
sequence <0, 1, 2, 3, 4, 5, 6, 7, 8>
• path cost? 1 unit per move
• 8-puzzle has 9! = 362,880 states
• 15-puzzle has 10^12 states
• 24-puzzle has 10^25 states
State Space Search for 8-Puzzle problem formulation
5. Example: 8-queens
• Formulation #1:
• states : any arrangement of 0-8 queens on the board is a state
• Initial state: no queens on the board
• actions: add a queen to any empty square
• goal test : 8 queens are on the board, no queen attacks another.
• Path cost: 0 (we are only interested in the solution).
64 × 63 × … × 57 ≈ 1.8 × 10^14 possible states.
6. Real world problems
Answer:
Route Finding
Touring Problems
VLSI Layout Design
Internet Searching
Robot Navigation
Automatic Assembly Sequencing
7. Traveling Salesperson Problem
• Suppose a salesman has five cities to visit and then must return home.
• The goal of the problem is to find the shortest path for the salesperson: the shortest tour that visits every city exactly once and returns to the starting point.
Formulation:
• State : city
• Initial state : at city A
• actions? move from city 1 to city 2
• goal test? a complete tour
• path cost? sum of distances
The solution: By searching through states to find a path/sequence of all cities.
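For a handful of cities, searching through all tours can be done by brute force. The sketch below enumerates every permutation of the cities; the distance table is invented for illustration and is not from the notes.

```python
from itertools import permutations

# Symmetric distance table for four invented cities.
dist = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4,
    ("C", "D"): 8,
}

def d(x, y):
    # Look up the distance in either direction.
    return dist.get((x, y)) or dist[(y, x)]

def shortest_tour(cities, start="A"):
    # Enumerate every tour that starts and ends at `start`
    # and keep the cheapest one (path cost = sum of distances).
    rest = [c for c in cities if c != start]
    best_tour, best_cost = None, float("inf")
    for perm in permutations(rest):
        tour = (start,) + perm + (start,)
        cost = sum(d(a, b) for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost
```

With n cities this examines (n-1)! tours, which is why real solvers use heuristics instead of exhaustive search.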
8. A Water Jug Problem
• You have a 4-gallon and a 3-gallon water jug
• You have a faucet with an unlimited amount of water
• Find an action sequence that gets exactly 2 gallons into the 4-gallon jug
A Water Jug: Problem formulation:
• State representation : (x, y)
• x: Contents of 4-gallon
• y: Contents of 3-gallon
• Start state: (0, 0)
• Goal state: (2, n)
• Operators
• Fill the 4-gallon.
• Fill the 3-gallon.
• Empty the 4-gallon jug
• Empty the 3-gallon jug
• Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
• Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
State space of Water Jug Problem:
• From the initial state, produce all successive states step by step to build a search tree.
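The state-space exploration above can be sketched as a breadth-first search over (x, y) states, using the six operators listed in the formulation. This is an illustrative implementation, not code from the notes.

```python
from collections import deque

def successors(x, y):
    # The six operators from the problem formulation, applied to (x, y).
    return {
        (4, y),                                   # fill the 4-gallon jug
        (x, 3),                                   # fill the 3-gallon jug
        (0, y),                                   # empty the 4-gallon jug
        (x, 0),                                   # empty the 3-gallon jug
        (x - min(x, 3 - y), y + min(x, 3 - y)),   # pour 4-gallon into 3-gallon
        (x + min(y, 4 - x), y - min(y, 4 - x)),   # pour 3-gallon into 4-gallon
    }

def solve(start=(0, 0), goal_x=2):
    # BFS over the state space; each queue entry is a full path of states.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if x == goal_x:               # goal state (2, n)
            return path
        for s in successors(x, y):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None
```

Because BFS expands states level by level, the returned path is a shortest action sequence from (0, 0) to a state with 2 gallons in the 4-gallon jug.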
9. Search tree
• Implementation:
• Data structure for node in the search tree:
• Node: state in the state space
• Parent-Node: Predecessor nodes
• Action: The operator that generated the node
• Depth: number of steps along the path from the initial state
• Path Cost: Cost of the path from the initial state to the node
10. Evaluation criteria for Search Strategies
• Optimality: Does the strategy find the best solution (with the lowest path cost)?
• Time and space complexity are measured in terms of:
• b:Maximum branching factor of the search tree
• d: Depth of the least-cost solution
• m: Maximum depth of the state space (may be ∞)
11. Two types of Search Techniques:
Uninformed search (blind search)
Informed search (heuristic search)
Various blind strategies:
Breadth-first search
Uniform-cost search
Depth-first search
Iterative deepening search
12. Data Structure - Breadth First Traversal
The Breadth First Search (BFS) algorithm traverses a graph in a breadthward motion and uses a queue to remember the next vertex from which to start a search when a dead end occurs in any iteration.
As in the example given above, BFS algorithm traverses from A to B to E to F first then to C and G lastly to D. It employs the following rules.
Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Insert it in a queue.
Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue. Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty.
Step 1: Initialize the queue.
Step 2: We start by visiting S (the starting node) and mark it as visited.
Step 3: We then see an unvisited adjacent node from S. In this example we have three nodes, but alphabetically we choose A, mark it as visited, and enqueue it.
Step 4: Next, the unvisited adjacent node from S is B. We mark it as visited and enqueue it.
Step 5: Next, the unvisited adjacent node from S is C. We mark it as visited and enqueue it.
Step 6: Now S is left with no unvisited adjacent nodes. So we dequeue and find A.
Step 7: From A we have D as an unvisited adjacent node. We mark it as visited and enqueue it.
Algorithm:
enqueue(obj) inserts an object into the queue.
dequeue() removes from the queue the object that has been in it the longest, returning this object.
isEmpty() returns true if the queue currently contains no objects, and false if the queue contains at least one object.
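The BFS rules above can be sketched with an adjacency list and a FIFO queue. The small graph below is invented for illustration (it is not the figure from the original tutorial), with start node S and alphabetical tie-breaking as in the text.

```python
from collections import deque

# Invented example graph, given as an adjacency list.
graph = {
    "S": ["A", "B", "C"],
    "A": ["S", "D"],
    "B": ["S", "D"],
    "C": ["S", "D"],
    "D": ["A", "B", "C"],
}

def bfs(start):
    visited = [start]               # Rule 1: mark and display visited vertices
    queue = deque([start])
    while queue:
        node = queue.popleft()      # Rule 2: dequeue the oldest vertex
        for neighbour in sorted(graph[node]):   # alphabetical tie-breaking
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)         # Rule 1: enqueue it
    return visited                  # Rule 3: loop until the queue is empty
```

On this graph, bfs("S") visits S, then its neighbours A, B, C in alphabetical order, and finally D.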
13. Uniform Cost Search
Implementation: fringe = queue ordered by path cost
14. Depth First Search Algorithm
The Depth First Search (DFS) algorithm traverses a graph in a depthward motion and uses a stack to remember the next vertex from which to start a search when a dead end occurs in any iteration.
As in the example given above, DFS algorithm traverses from S to A to D to G to E to B first, then to F and lastly to C. It employs the following rules.
Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it in a stack.
Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all the vertices from the stack, which do not have adjacent vertices.)
Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty.
Step 1: Initialize the stack.
Step 2: Mark S as visited and put it onto the stack. Explore any unvisited adjacent node from S. We have three nodes and we can pick any of them; for this example, we take the nodes in alphabetical order.
Step 3: Mark A as visited and put it onto the stack. Explore any unvisited adjacent node from A. Both S and D are adjacent to A, but we are concerned with unvisited nodes only.
Step 4: Visit D, mark it as visited, and put it onto the stack. Here we have B and C, which are adjacent to D and both unvisited. However, we again choose in alphabetical order.
Step 5: We choose B, mark it as visited, and put it onto the stack. B does not have any unvisited adjacent node, so we pop B from the stack.
Step 6: We check the stack top to return to the previous node and see whether it has any unvisited nodes. Here we find D on the top of the stack.
Step 7: The only unvisited adjacent node from D is now C. So we visit C, mark it as visited, and put it onto the stack.
As C does not have any unvisited adjacent node, we keep popping the stack until we find a node that has an unvisited adjacent node. In this case there is none, and we keep popping until the stack is empty.
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until the STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
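The stack-based procedure above can be sketched in Python. The graph is an invented example (not the figure from the original tutorial); neighbours are pushed in reverse alphabetical order so that the alphabetically smallest is explored first, matching the tie-breaking rule in the text.

```python
# Invented example graph, given as an adjacency list.
graph = {
    "S": ["A", "B", "C"],
    "A": ["S", "D"],
    "B": ["S", "D"],
    "C": ["S", "D"],
    "D": ["A", "B", "C"],
}

def dfs(start):
    visited = []
    stack = [start]
    while stack:
        node = stack.pop()          # pop the most recently pushed vertex
        if node not in visited:
            visited.append(node)    # mark it as visited
            # Push neighbours in reverse alphabetical order so the
            # alphabetically smallest comes off the stack first.
            for neighbour in sorted(graph[node], reverse=True):
                if neighbour not in visited:
                    stack.append(neighbour)
    return visited
```

On this graph, dfs("S") goes deep first (S, A, D) before backtracking to the remaining siblings.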
Week 5: Informed Search Strategies with Solved Problems
Heuristics
Best-first search
A* search
1. Heuristic Search Algorithm:
Types of problems it can be applied to:
- Find an optimal solution (when the optimum value is known).
- Find a "close to" optimal solution (the best solution we manage).
Heuristic methods we will study: hill-climbing, simulated annealing, and genetic algorithms.
2. Characteristics of heuristic search:
The state space is not fully explored. Randomization is often employed. There is a concept of neighborhood search. Heuristics are applied to explore the solutions. The word "heuristics" means "serving or helping to find or discover" or "proceeding by trial and error".
3. General Framework for Heuristic search
--Generic Optimization Problem (maximization):
Instance: a finite set X; an objective function P: X → Z; m feasibility functions gj: X → Z, 1 ≤ j ≤ m.
Find: the maximum value of P(X) subject to X ∈ X and gj(X) ≥ 0 for 1 ≤ j ≤ m.
--Define a neighborhood function N: X → 2^X, e.g., N(X) = {X1, X2, X3, X4, X5}.
--Design a neighborhood search: an algorithm that finds a feasible solution in the neighborhood of a feasible solution X. There are two types of neighborhood searches:
- Exhaustive (chooses the best profit among the neighbor points)
- Randomized (picks a random point among the neighbor points)
4. Heuristic Search Algorithm:
A generic heuristic search algorithm
Given N, a neighborhood function, the heuristic algorithm hN either:
- performs one neighborhood search (using one of the strategies), or
- performs a sequence of j neighborhood searches, where each one takes us from Xi to Xi+1: [X = X0, X1, ..., Xj = Y].
Algorithm GenericHeuristicSearch(cmax)
  Select a feasible solution X ∈ X
  Xbest ← X  (stores best so far); c ← 0
  while (c ≤ cmax) do
    Y ← hN(X)
    if (Y ≠ "fail") then
      X ← Y
      if (P(X) > P(Xbest)) then Xbest ← X
    else c ← cmax + 1  (add this if hN is not randomized)
    c ← c + 1
  return Xbest
5. Hill-Climbing Idea:
Go up the hill continuously; stop when stuck.
Problem: it can get stuck in a local optimum.
Improvement: run the algorithm many times from different random starting points X.
For hill-climbing, hN(X) returns: Y ∈ N(X) such that Y is feasible and P(Y) > P(X), or otherwise "fail".
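Hill-climbing with random restarts can be sketched on a toy maximization problem. The objective P(x) = -(x - 7)^2 and the neighborhood N(x) = {x - 1, x + 1} are invented for illustration; with a single peak, every run climbs to the same optimum.

```python
import random

def P(x):
    # Toy objective to maximize: single peak at x = 7.
    return -(x - 7) ** 2

def hill_climb(x):
    # Repeatedly move to an improving neighbor; stop when none exists.
    while True:
        better = [y for y in (x - 1, x + 1) if P(y) > P(x)]
        if not better:          # no improving neighbor: stuck (local optimum)
            return x
        x = max(better, key=P)  # exhaustive strategy: best neighbor

# Random restarts guard against local optima (here P has only one peak,
# so every restart reaches the global optimum).
best = max(hill_climb(random.randint(-100, 100)) for _ in range(5))
```

On a multimodal objective the restarts would matter: different starting points would climb to different local peaks, and taking the max over runs keeps the best one found.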
6. A* Algorithm
Insert the root node into the priority queue.
While the queue is not empty:
  Dequeue the element with the highest priority (if priorities are equal, the alphabetically smaller path is chosen).
  If the path ends in the goal state, print the path and exit.
  Else insert all the children of the dequeued element, with f(n) as the priority.
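The A* loop above can be sketched with a priority queue keyed on f(n) = g(n) + h(n). The weighted graph and the heuristic table below are invented for illustration; ties on f break on the alphabetically smaller path, as in the pseudocode.

```python
import heapq

# Invented weighted graph and admissible heuristic estimates to goal D.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
h = {"A": 4, "B": 3, "C": 1, "D": 0}

def a_star(start, goal):
    # Queue entries are (f, path, g); heapq pops the smallest f first,
    # and equal f values fall back to comparing paths alphabetically.
    queue = [(h[start], [start], 0)]
    while queue:
        f, path, g = heapq.heappop(queue)
        node = path[-1]
        if node == goal:
            return path, g
        for child, cost in graph[node]:
            heapq.heappush(queue, (g + cost + h[child],
                                   path + [child],
                                   g + cost))
    return None
```

With an admissible heuristic (h never overestimates), the first goal path dequeued is optimal; here it is A → B → C → D with cost 4, beating the direct A → C → D route of cost 5.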
7. Greedy Best First Search Algorithm
Best-first search is a search algorithm which explores a graph by expanding the most promising node chosen according to a specified rule.
Week 6: Genetic Algorithms
1. A genetic algorithm (or GA) is a search technique used in computing to find true or approximate solutions to optimization and search problems.
Genetic algorithms are categorized as global search heuristics.
Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination).
2. Key terms used in Genetic Algorithms
Individual - Any possible solution
Population - Group of all individuals
Search Space - All possible solutions to the problem
Chromosome - Blueprint for an individual
Trait - Possible aspect (features) of an individual
Allele - Possible settings of trait (black, blond, etc.)
Locus - The position of a gene on the chromosome
Genome - Collection of all chromosomes for an individual
3. Genotype and Phenotype
Genotype:
– Particular set of genes in a genome
Phenotype:
– Physical characteristic of the genotype (smart, beautiful, healthy, etc.)
4. GA Requirements
A typical genetic algorithm requires two things to be defined:
a genetic representation of the solution domain, and
a fitness function to evaluate the solution domain.
A standard representation of the solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way.
The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size that facilitates simple crossover operation.
Variable length representations may also be used, but crossover implementation is more complex in this case.
Tree-like representations are explored in Genetic programming.
5. GA Representation
Chromosomes could be:
Bit strings (0101 ... 1100)
Real numbers (43.2 -33.1 ... 0.0 89.2)
Permutations of element (E11 E3 E7 ... E1 E15)
Lists of rules (R1 R2 R3 ... R22 R23)
Program elements (genetic programming)
... any data structure ...
6. GA Solution
The fitness function is defined over the genetic representation and measures the quality of the represented solution.
The fitness function is always problem dependent.
For instance, in the knapsack problem we want to maximize the total value of objects that we can put in a knapsack of some fixed capacity.
A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack.
Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack.
The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise. In some problems, it is hard or even impossible to define the fitness expression; in these cases, interactive genetic algorithms are used.
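The knapsack fitness function described above can be written directly: each bit says whether the corresponding object is packed, and an overweight chromosome gets fitness 0. The sizes, values, and capacity below are invented for illustration.

```python
# Invented problem instance: per-object sizes and values, fixed capacity.
sizes = [3, 4, 5, 2]
values = [4, 5, 6, 3]
CAPACITY = 9

def fitness(bits):
    # bits[i] == 1 means object i is in the knapsack.
    total_size = sum(s for s, b in zip(sizes, bits) if b)
    total_value = sum(v for v, b in zip(values, bits) if b)
    # Invalid (overweight) representations score 0, as described above.
    return total_value if total_size <= CAPACITY else 0
```

A GA would evolve bit strings under this fitness, with invalid chromosomes quickly selected away because they score 0.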
7. GA Fitness Functions
8. General Algorithm for GA
Roulette Wheel’s Selection Pseudo Code:
for all members of population
    sum += fitness of this individual
end for
for all members of population
    probability = sum of probabilities + (fitness / sum)
    sum of probabilities = probability
end for
loop until new population is full
    do this twice
        number = random between 0 and 1
        for all members of population
            if number > probability but less than next probability then
                you have been selected
        end for
    end
    create offspring
end loop
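The roulette-wheel pseudocode above can be sketched in Python: each individual gets a slice of [0, 1] proportional to its fitness, and a random number picks the slice. The toy population is invented for illustration.

```python
import random

def roulette_select(population, fitnesses):
    # Build cumulative probabilities: each individual's slice of the
    # wheel is proportional to fitness / total fitness.
    total = sum(fitnesses)
    cumulative = []
    running = 0.0
    for f in fitnesses:
        running += f / total
        cumulative.append(running)
    # Spin the wheel: pick the first individual whose cumulative
    # probability bound covers the random number.
    r = random.random()
    for individual, bound in zip(population, cumulative):
        if r <= bound:
            return individual
    return population[-1]   # guard against floating-point round-off

pop = ["a", "b", "c"]
fit = [1.0, 2.0, 7.0]
parents = [roulette_select(pop, fit) for _ in range(2)]  # "do this twice"
```

With these fitnesses, "c" owns 70% of the wheel, so it is selected far more often, which is the selection pressure the GA relies on.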
9. What operators are used in the GA algorithm?
– Representations
– Mutation:
– Alter each gene independently with a probability pm
– pm is called the mutation rate
– Typically between 1/pop_size and 1/chromosome_length
– Crossover
– SGA operators: Selection
Week 7: Knowledge Representation and Reasoning. (Rule-based-systems)
1. What is the Knowledge?
Answer:
• Knowledge:
o Description of the world or agent environment.
o Collection of “facts” for some domain. It is a domain-specific.
o Information about a domain that can be used to solve problems in that domain. e.g. Computer science students are smart.
• What kinds of knowledge need to be represented in AI systems:
1. Objects-- Facts about objects in our world domain. e.g. Birds have wings.
2. Events-- Actions that occur in our world. e.g. Fahad plays tennis in Abha.
3. Performance-- Behavioral knowledge about how to do things. e.g. the internal processes in playing tennis
4. Meta-knowledge-- knowledge about knowledge.
2. Explain Fundamental activities in AI about knowledge?
Answer:
• Intelligent agent should do the following activities: (KR system tasks)
Perceiving, acquiring knowledge from environment,
Knowledge Representation, representing the world information,
Reasoning, inferring the implications (conclusions) based on what it knows and the choices it has,
Acting, choosing what it wants to do and carrying it out.
3. Explain Knowledge base agent?
4. Explain Types of Knowledge?
Procedural:
• Knowledge “about how to do something”.
• Focus on tasks that must be performed to reach a certain goal.
• Represented as procedures, rules, strategies
e.g.
find sum of A+B
Determine which is older, Peter or Robert, given their ages.
Find square of number….x*x
Factorial of x… x!...etc
Declarative:
• Knowledge (factual, descriptive) about whether something is true or false.
• Focus on objects and relationships, events.
• Represented as objects, propositions, facts, logic models, semantic nets…
e.g.
A car has 4 tyres.
Peter is older than Robert.
Is Ahmed a teacher?
Answer: yes/no
Who is a teacher?
Answer: Ahmed.
5. Knowledge Representation:
• Knowledge representation:
-KR is to express knowledge in a computer-tractable form, so that it can be used to help AI agents perform well. Thus there are two entities to deal with when solving problems in AI:
Facts: Truths about the real world, this is called knowledge level.
Representation of the facts: represent the objects in terms of symbols that can be manipulated in programs. this is called symbol level .
-Representation = Syntax + Semantics + Reasoning
e.g., "red" represents the color red; "car1" represents my car.
red(car1) represents the fact that my car is red.
Syntax: How are sentences formed in the language?
Semantics: the meaning of the sentences. What sentence refers to in the real world?
Computational aspect: how sentences and objects are represented and manipulated to derive conclusions.
• Suppose the language is arithmetic, then
‘x’, and ‘y’ are components of the language
The syntax says that 'x >= y' is a valid sentence in the language, but '>> x y' is not.
The semantics say that ‘x >= y’ is false if y is bigger than x, and true otherwise
Week 8: Knowledge Representation and Reasoning. (PL-Logic) and Recursion.
1. List Types of Logic systems?
Answer:
2. Define Propositional Logic and predicate logic?
Answer:
3. Propositional logic: syntax
Propositional logic sentences:
sentences in propositional logic tell you about what is true or false.
• Each symbol P, Q, R …. is a (atomic) sentence
• Both True and False are (atomic) sentences
• A sentence wrapped in parentheses is a sentence ( )
• If P and Q are sentences, then the following are also sentences.
• P ∧ Q conjunction
• P ∨ Q disjunction
• ¬P negation
• P ⇒ Q implication
• P ⇔ Q equivalence
The above are complex sentences. Precedence is ¬, ∧, ∨, ⇒, ⇔.
Implication and Bi-conditional :
Conditional Proposition:
• A proposition of the form "if p then q" or "p implies q", represented "p ⇒ q", is called a conditional proposition.
• In "p ⇒ q": the proposition p is called the hypothesis or antecedent, and the proposition q the conclusion or consequent.
• p ⇒ q is false when p is true and q is false; otherwise it is true.
• e.g. "if John is from Chicago then John is from Illinois".
2. Bi-conditional Proposition:
• The proposition p ⇔ q, read "p if and only if q", is called bi-conditional.
• p ⇔ q is true when p and q have the same truth value, i.e., they are both true or both false; otherwise it is false.
e.g. “John is married if and only if he has a bachelor”. It is the same as saying
“if John is married then he has a bachelor” and “if he has a bachelor then he is married”.
Tautology, Contradiction and Satisfiability
Example:
• Prove that "It is humid today, and if it is humid today then it will rain; so it will rain" is a valid argument.
Solution:
• Let us symbolize English sentences by propositional atoms as follows:
A : It is humid today
B : It will rain
• Formula corresponding to the text:
α: ((A ⇒ B) ∧ A) ⇒ B
• From the truth table we can see that α is true under all four interpretations.
Therefore α is a valid argument.
Truth Table for ((A ⇒ B) ∧ A) ⇒ B
A  B  A ⇒ B  (A ⇒ B) ∧ A  ((A ⇒ B) ∧ A) ⇒ B
T  T    T         T              T
T  F    F         F              T
F  T    T         F              T
F  F    T         F              T
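The truth-table check can be mechanized: a formula is valid iff it is true under every interpretation of its atoms. The sketch below enumerates all four interpretations of A and B for the argument above.

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

def is_valid(formula):
    # Valid iff true under all interpretations of the two atoms.
    return all(formula(a, b) for a, b in product([True, False], repeat=2))

# ((A => B) and A) => B : modus ponens, valid under all four rows.
valid = is_valid(lambda a, b: implies(implies(a, b) and a, b))
```

Plain implication A ⇒ B by itself is not valid (it fails on the row A = T, B = F), which the same checker confirms.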
Logical Equivalence
• When two compound propositions (logical expressions) S1 , S2 have the same truth value no matter what truth value their propositions have, they are called logically equivalent.
• Notation: S1 ≡ S2
• Read as: S1 is equivalent to S2, or S1 and S2 are logically equivalent.
• Example:
Is P ⇒ Q equivalent to ¬P ∨ Q?
i.e., prove: (P ⇒ Q) ≡ (¬P ∨ Q).
The proof is given by the truth table :
• Is the statement (P ⇒ Q) ≡ (Q ⇒ P)? Check it!
• Is the statement (P ⇒ Q) ≡ (¬Q ⇒ ¬P)? Check it!
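The "check it" exercises above can be done by truth table in code: two formulas are logically equivalent iff they agree under every interpretation.

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

def equivalent(f, g):
    # Compare the two formulas on all four interpretations of (p, q).
    return all(f(p, q) == g(p, q) for p, q in product([True, False], repeat=2))

# (P => Q) is equivalent to its contrapositive (not Q => not P) ...
contrapositive = equivalent(lambda p, q: implies(p, q),
                            lambda p, q: implies(not q, not p))
# ... but NOT to its converse (Q => P).
converse = equivalent(lambda p, q: implies(p, q),
                      lambda p, q: implies(q, p))
# And (P => Q) is equivalent to (not P or Q), as proved above.
disjunctive = equivalent(lambda p, q: implies(p, q),
                         lambda p, q: (not p) or q)
```

So the contrapositive equivalence holds, the converse one does not (it fails on P = T, Q = F), and the implication-as-disjunction rewrite checks out.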
Equivalence rules:
Inference Properties:
• Soundness:
-An inference procedure is sound: If KB |- α then KB |= α
• Completeness:
-An inference procedure is complete: If KB |= α then KB |- α
Inference by Model Checking:
• Is α follows from KB? or
• Does KB infer α? Or
• Prove α. Can we conclude α?
• Is KB ⇒ α valid?
Inference Rules:
Inference Rules Approach:
Resolution rule:
• Resolution rule: it is Sound (proved by Truth table):
• Incomplete. (Why?)
Repeated application of the resolution rule to a KB in CNF may fail to derive new valid sentences
Represent the following sentences in first-order logic, using a consistent vocabulary (which you must define):
a) Some students took French in spring 2001.
b) Every student who takes French passes it.
c) Only one student took Greek in spring 2001.
d) The best score in Greek is always higher than the best score in French.
e) Every person who buys a policy is smart.
f) No person buys an expensive policy.
g) There is an agent who sells policies only to people who are not insured.
Student(x): x is a student; French(y), Greek(y): y is the course in French, resp. Greek; Take(x,y): x (student) takes y (course); Pass(x,y): x (student) passes y (course); TakeInSpring2001(x,y): x (student) takes y (course) in Spring 2001.
Now we can formulate the sentences as follows:
a) ∃x ∃y: Student(x) ∧ French(y) ∧ TakeInSpring2001(x,y)
b) ∀x ∀y: (Student(x) ∧ French(y) ∧ Take(x,y)) ⟹ Pass(x,y)
c) ∃x ∃y ∀z: Student(x) ∧ Greek(y) ∧ TakeInSpring2001(x,y) ∧ ((Student(z) ∧ TakeInSpring2001(z,y)) ⟹ x = z)
All students are smart.
∀x (Student(x) ⇒ Smart(x))
There exists a student.
∃x Student(x)
There exists a smart student.
∃ x ( Student(x) ∧ Smart(x) )
Every student loves some student
∀ x ( Student(x) ⇒ ∃ y ( Student(y) ∧ Loves(x,y) ))
Every student loves some other student.
∀ x ( Student(x) ⇒ ∃ y ( Student(y) ∧ ¬ (x = y) ∧ Loves(x,y) ))
There is a student who is loved by every other student.
∃ x ( Student(x) ∧ ∀ y ( Student(y) ∧ ¬(x = y) ⇒ Loves(y,x) ))