Page 1:

Lecture 6: Problem Solving

Dr John Levine

52236 Algorithms and Complexity, February 10th 2006

Page 2:

Class admin

• No lab today and no lab next Monday

• Lab work starts next week

• Work in groups of no more than 6 (self selected)

• Single 2 hour session in the Kelvin lab (60 seats)

• Finding a slot – Fri 11-1? Thu 11 for lecture?

• Alternative 1: Mon 11-1?

• Alternative 2: Fri 1-3, moving the people doing the 52222 practical to a different slot

Page 3:

The story so far

• Lecture 1: Why algorithms are so important

• Lecture 2: Big-Oh notation, a first look at different complexity classes (log, linear, log-linear, quadratic, polynomial, exponential, factorial)

• Lecture 3: simple search in a list of items is O(n) with unordered data but O(log n) with ordered data, and can be even faster with a good indexing scheme and parallel processors

• Lecture 4: sorting: random sort O(n!), naïve sort is O(n²), bubblesort is O(n²), quicksort is O(n log n)
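
As a rough illustration of the Lecture 3 and 4 recap above (not part of the original slides), the contrast between O(n) search on unordered data and O(log n) search on ordered data looks like this in Python:

```python
# Sketch only: linear search is O(n) on unordered data, while binary
# search is O(log n) but requires the data to be sorted first.

def linear_search(items, target):
    """O(n): check each element in turn."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(items, target):
    """O(log n): halve the remaining interval at every step (items must be sorted)."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = sorted([42, 7, 19, 3, 88, 56, 23])
print(linear_search(data, 56), binary_search(data, 56))
```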

Page 4:

The story so far

• Lecture 5: more on sorting: why comparison sort is O(n log n), doing better than O(n log n) by not doing comparisons (e.g. bucket sort)

• Lecture 6: harder search: how to represent a problem in terms of states and moves

• Lecture 7: uninformed search through states using an agenda: depth-first search and breadth-first search

• Lecture 8: making it smart: informed search using heuristics; how to use heuristic search without losing optimality – the A* algorithm
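
As a rough sketch of the Lecture 5 point about beating O(n log n) by avoiding comparisons (my own illustration, assuming keys spread evenly over [0, 1)):

```python
# Bucket sort sketch: distribute keys into buckets by value rather than
# comparing them pairwise; with evenly spread keys in [0, 1) the expected
# running time is linear.

def bucket_sort(values, num_buckets=10):
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        buckets[int(v * num_buckets)].append(v)   # assumes 0 <= v < 1
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))             # each bucket is expected to be tiny
    return result

print(bucket_sort([0.42, 0.07, 0.19, 0.88, 0.56, 0.23]))
```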

Page 5:

Problem Solving using Search

Many diverse problems involve search:

• Route finding

• Blocks World planning

• 15-puzzle

• Rubik’s cube

• Travelling salesman problem

• Crisis management

• Logistics planning

Page 6:

What are Problems?

Each of these problems can be characterised by:

• Problem states, including the start state and the goal state

• Legal moves, or actions which transform problem states into other states

• Example: Rubik’s cube

• The start state is the muddled-up cube, the goal state is the one in which every face is a single colour, and the moves are rotations of the faces of the cube
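
To make the states-and-moves characterisation concrete for one of the problems listed on the previous slide, here is a rough sketch of route finding; the names and the road network below are invented purely for illustration:

```python
# Route finding characterised by states and moves.
# The road network here is made up for illustration only.

road_map = {
    "Glasgow":   ["Stirling", "Edinburgh"],
    "Stirling":  ["Glasgow", "Perth"],
    "Edinburgh": ["Glasgow", "Perth"],
    "Perth":     ["Stirling", "Edinburgh", "Dundee"],
    "Dundee":    ["Perth"],
}

start_state = "Glasgow"        # a state is simply the current location
goal_state = "Dundee"

def moves(state):
    """The legal moves from a state are the roads leaving that location."""
    return road_map[state]

def is_goal(state):
    return state == goal_state
```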

Page 7:

Solutions

• Solutions are sequences of moves which transform the start state into the goal state

• The quality of the solution required will affect the amount of work we need to do

– any solution will do

– fixed amount of time, return best solution

– near optimal solution needed

– optimal solution needed

Page 8:

How to Solve it

• In general, problems can be solved using

– knowledge

– search

– or some combination of the two

Page 9:

Formulating Problems

• A good formulation saves work

– less search for the answer

• Three requirements for a search algorithm:

– formal structures to describe the states

– rules for manipulating them

– identifying what constitutes a solution

• This gives us a state space representation
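
One way to package those three requirements in code is sketched below; the class and parameter names are my own and not prescribed by the lecture:

```python
# A state space representation bundles: formal structures describing states,
# rules (operators) for manipulating them, and a goal recogniser.

from typing import Callable, Generic, Iterable, TypeVar

S = TypeVar("S")

class StateSpace(Generic[S]):
    def __init__(self,
                 initial: S,
                 next_states: Callable[[S], Iterable[S]],
                 is_goal: Callable[[S], bool]):
        self.initial = initial          # the start state
        self.next_states = next_states  # operators: states reachable in one move
        self.is_goal = is_goal          # identifies what constitutes a solution
```

The Towers of Hanoi and Blocks World examples on the following slides both fit this shape.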

Page 10:

State Space Representation

• A state space comprises

– states: snapshots of the problem

– operators: how to move from one state to another

Example problem: Towers of Hanoi

Only move one disc at a time

Never put a larger disc on top of a smaller one
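
A minimal sketch of states and operators for the Towers of Hanoi example (the tuple encoding is my own choice):

```python
# A state is a tuple of three pegs; each peg is a tuple of disc sizes with
# the top disc first, so ((1, 2, 3), (), ()) is the usual start state.

def hanoi_moves(state):
    """States reachable in one legal move: move one disc at a time and
    never put a larger disc on top of a smaller one."""
    successors = []
    for src in range(3):
        if not state[src]:
            continue                        # nothing to pick up here
        disc = state[src][0]                # only the top disc may move
        for dst in range(3):
            if dst == src:
                continue
            if state[dst] and state[dst][0] < disc:
                continue                    # larger disc on smaller one: illegal
            pegs = [list(p) for p in state]
            pegs[src].pop(0)
            pegs[dst].insert(0, disc)
            successors.append(tuple(tuple(p) for p in pegs))
    return successors

print(hanoi_moves(((1, 2, 3), (), ())))     # the two legal opening moves
```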

Page 11:

State Space Search

Problem solving using state space search consists of the following four steps:

1. Design a representation for states (including the initial state and the goal state)

2. Characterise the operators

3. Build a goal state recogniser

4. Search through the state space by considering, in some order or other, the states reachable from the initial and goal states
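
A rough skeleton of step 4, keeping the unexplored states on an agenda (a FIFO agenda gives breadth-first search; the variants are the subject of Lectures 7 and 8, and the function names here are only illustrative):

```python
from collections import deque

def search(initial_state, next_states, is_goal):
    """Return a list of states from the initial state to a goal, or None."""
    agenda = deque([[initial_state]])        # the agenda holds partial paths
    visited = {initial_state}
    while agenda:
        path = agenda.popleft()              # FIFO agenda: breadth-first order
        state = path[-1]
        if is_goal(state):
            return path                      # the goal recogniser fired
        for successor in next_states(state):
            if successor not in visited:     # avoid revisiting states
                visited.add(successor)
                agenda.append(path + [successor])
    return None                              # the goal is not reachable
```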

Page 12:

Example: Blocks World

A “classic” problem in AI planning

The aim is to rearrange the blocks using the single robot arm so that the configuration in the goal state is achieved

An optimal solution performs the transformation using as few steps as possible

Any solution: linear complexity

Optimal solution: exponential complexity (NP-hard)

[Figure: initial and goal configurations of the three blocks A, B and C]
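
The "any solution: linear complexity" claim follows from an obvious two-phase strategy, sketched below using the set-of-stacks notation introduced on the next slide (stacks written top block first); this is my own illustration, not the lecture's algorithm:

```python
# Put every block on the table, then build each goal stack from the bottom
# up: at most 2n moves for n blocks, so some (non-optimal) solution can
# always be found in linear time.

def linear_plan(initial, goal):
    """Return a list of move(x, y) actions; y is a block name or 'table'."""
    plan = []
    for stack in initial:                    # phase 1: clear everything
        for block in stack[:-1]:             # all blocks not already on the table
            plan.append(("move", block, "table"))
    for stack in goal:                       # phase 2: rebuild each goal stack
        below = "table"
        for block in reversed(stack):        # bottom block first
            if below != "table":             # the bottom block stays on the table
                plan.append(("move", block, below))
            below = block
    return plan

print(linear_plan({("a", "b"), ("c",)}, {("c", "a", "b")}))
```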

Page 13:

Blocks World Representation

The blocks world problem can be represented as:

• States: stacks are lists, states are sets of stacks e.g. initial state = { [a,b],[c] }

• Transitions between states can be done using a single move operator: move(x,y) picks up object x and puts it on y (which may be the table)

{ [a,b,c] } → { [b,c],[a] } by applying move(a,table)

{ [a],[b,c] } → { [a,b,c] } by applying move(a,b)
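
A minimal sketch of this representation and the move operator; encoding each stack as a tuple written top block first is my own choice, but the two transitions above come out exactly as on the slide:

```python
# {("a", "b"), ("c",)} means a on b, with c alone on the table.
# move(x, y) picks up x (which must be clear) and puts it on y,
# where y is another clear block or the table.

def move(state, x, y):
    """Apply move(x, y) to a state and return the resulting state."""
    stacks = [list(s) for s in state]
    src = next(s for s in stacks if s and s[0] == x)    # x must be on top of its stack
    src.pop(0)                                          # pick up x
    if y == "table":
        stacks.append([x])                              # start a new stack on the table
    else:
        dst = next(s for s in stacks if s and s[0] == y)
        dst.insert(0, x)                                # put x on top of y
    return {tuple(s) for s in stacks if s}              # drop any emptied stack

print(move({("a", "b", "c")}, "a", "table"))    # -> { [b,c], [a] }
print(move({("a",), ("b", "c")}, "a", "b"))     # -> { [a,b,c] }
```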

Page 14:

Blocks World Representation

• NextStates(State) returns the list of legal moves and resulting states, e.g. NextStates({ [a,b],[c] }) gives:

{ [a],[b],[c] } by applying move(a,table)
{ [b],[a,c] } by applying move(a,c)
{ [c,a,b] } by applying move(c,a)

• Goal(State) returns true if State is identical with the goal state

• Search the space: start with the start state, explore reachable states, continue until the goal state is found
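
Putting the pieces together, here is a rough sketch of NextStates, Goal and the search loop (breadth-first here), reusing the top-first tuple encoding from the sketch above; apart from the two function names taken from the slide, the details are my own choices:

```python
from collections import deque

def next_states(state):
    """All states reachable from `state` by one legal move."""
    successors = []
    for stack in state:
        x = stack[0]                                   # only a clear block can move
        rest = stack[1:]
        others = [s for s in state if s != stack]
        if rest:                                       # move(x, table), unless x is already alone
            successors.append(frozenset(others + [rest, (x,)]))
        for target in others:                          # move(x, y): y is the clear top of another stack
            new_stacks = [s for s in others if s != target] + [(x,) + target]
            if rest:
                new_stacks.append(rest)
            successors.append(frozenset(new_stacks))
    return successors

def goal(state, goal_state):
    """True if the state is identical with the goal state."""
    return state == goal_state

def search(start, goal_state):
    """Explore reachable states breadth-first until the goal is found."""
    agenda = deque([start])
    visited = {start}
    while agenda:
        state = agenda.popleft()
        if goal(state, goal_state):
            return state
        for s in next_states(state):
            if s not in visited:
                visited.add(s)
                agenda.append(s)
    return None

start = frozenset({("a", "b"), ("c",)})
print(next_states(start))                              # the three successors listed above
print(search(start, frozenset({("c", "a", "b")})))     # the goal is one move away
```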

Page 15:

The Power of Search

In May 1997, Garry Kasparov was beaten by Deep Blue

Deep Blue is not a conventional computer: it uses specialised hardware to search 200,000,000 chess positions per second

Is Deep Blue intelligent?