DVONN and Game-playing Intelligent Agents

Paul Kilgo
CPSC 810: Introduction to Artificial Intelligence

Dr. Dennis Stevenson
School of Computing
Clemson University

Fall 2012

Abstract

Artificial intelligence is a broad discipline of Computer Science dealing with creating applications which behave in a “human-like” way. For many, the first exposure to the idea, and often the first notion when asked about the subject, is a computer opponent for a game. For as long as computer games have existed, so has the interest in creating a challenge for the human, whether it be an algorithm that generates difficult yet feasible puzzles or an algorithm which replaces the need for a human player in a game.

This paper is a case study of implementing a computer player for the strategy board game DVONN. It presents a short survey of game-playing AIs, a description of the DVONN game, some background on the theory of the underlying search algorithms, a description of the implementation, the results of the implementation, and concluding remarks.

1 Introduction

John McCarthy defines artificial intelligence as “the science and engineering of making intelligent machines” [6], with intelligence defined in terms of human intelligence. In this respect, the task of creating a computer opponent to replace a human in a board game falls within the realm of artificial intelligence.

Designing computer opponents for board games has long been an interest of computer scientists. Discrete-time (or turn-taking) board games in particular have simple enough semantics that they readily translate to a computer program. However, the simplicity of the game semantics does not imply the game is simple for a computer to play. A computer opponent is easily overwhelmed by the massive state space of a board game that is even mildly challenging for humans.

Thus, many game-playing intelligent agents (IA’s) employ an optimizing search technique to find a solution using the game semantics. It is impossible to search all possible states, so much of the challenge of designing these algorithms comes from deciding the best way to prune branches from the search tree. A naïve solution descends a fixed number of moves, called plies, into the game tree. One soon finds that this strategy is insufficient, as some moves put the player at a greater advantage over the opponent and should be explored to a greater depth. Here the concept of quiescence is introduced, which allows the IA to explore the game tree more deeply along moves which are more likely to lead to a win.

The rest of this paper is laid out as follows. First, we present a short survey of game-playing IA’s and some of the techniques which make them fast, efficient, and challenging. Next, we give the reader a quick introduction to the strategy game of DVONN.


We then introduce the minimax algorithm and formally state the intended operation of the implementation. Some brief notes on the implementation follow, and the results of an experiment are presented along with their analysis. Finally, we give concluding remarks regarding this implementation.

2 Game-Playing IA’s

Perhaps the best-known example of IA’s playing board games is in Chess. Deep Blue, which defeated Chess Grandmaster Garry Kasparov, was one of the more famous Chess-playing IA’s. Campbell et al. [2] present an interesting historical review of how Deep Blue worked, and many of the techniques discussed in this paper were used in Deep Blue. However, Deep Blue has some things this implementation does not: hardware generation of moves, a parallel architecture, and a team of IBM engineers. Deep Blue is also able to search many plies deeper than this implementation can (without being terminated due to the impatience of the operator).

Go is another favorite game for IA’s to tackle. MoGo [5] is a more recent Computer Go implementation which has bested some notable players. It makes use of Monte Carlo techniques for its move selection, and also exploits a parallel architecture to speed up move generation.

There are currently a few DVONN-playing computer programs, but little information is available regarding their implementations. A few of these programs ship as part of computer versions of DVONN [1, 7, 8]. Others [3, 4] are among the top contenders on online board-gaming sites, winning as many as 75% of their games against their challengers.

3 DVONN Basics

DVONN is a pure strategy game, falling into the same category of board games as Chess, Go, and Othello. It is for two players. The basic goal of DVONN is to have captured more game tiles than your opponent by the end of the game.

Figure 1: Layout of the DVONN game board. (Courtesy: Wikimedia Commons)

A capture is performed by moving a stack of pieces under your control onto another stack of pieces. A stack is under a player’s control if its top piece is the player’s color. However, a stack may not be moved if it is surrounded on all six sides by other stacks.

DVONN is played on a hexagonal grid, with the board laid out as illustrated in Figure 1. A player may move any stack under his control to the northeast, northwest, east, west, southeast, or southwest. A stack must move exactly as many spaces as there are game tiles in the stack. The only valid destination for a moved stack is a space with another stack on it; a stack may not move to an empty space or off of the game board.
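
To make the movement rule concrete, the following is a minimal Common Lisp sketch under an assumed representation (not the paper’s actual source): the board is a hash table created with :test #'equal, mapping (q r) axial coordinates to stacks, where a stack is a list of piece colors (:white, :black, :red) with the top piece first.

    (defparameter +directions+
      ;; The six hex neighbors in axial coordinates:
      ;; east, west, northwest, northeast, southwest, southeast.
      '((1 0) (-1 0) (0 -1) (1 -1) (-1 1) (0 1)))

    (defun stack-at (board pos)
      (gethash pos board))

    (defun surrounded-p (board pos)
      "A stack may not move if all six neighboring spaces are occupied."
      (every (lambda (dir) (stack-at board (mapcar #'+ pos dir)))
             +directions+))

    (defun legal-move-p (board from dir player)
      "Check the movement rule: the player controls the stack (its top
    piece is the player's color), the stack is not surrounded, and the
    destination, exactly stack-height spaces away along DIR, holds a stack."
      (let ((stack (stack-at board from)))
        (and stack
             (eq (first stack) player)
             (not (surrounded-p board from))
             (let ((dest (mapcar (lambda (p d) (+ p (* d (length stack))))
                                 from dir)))
               (and (stack-at board dest) t)))))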

On a standard DVONN board there are twenty-three white pieces, twenty-three black pieces, and three red pieces. The black and white pieces belong to the players. The red pieces are called “control” pieces. To better understand the function of these pieces, envision the board as a graph: each hexagon is a node, and a shared border between two hexagons is an edge. After each move, a crawl of the graph is performed from each of the positions of the control pieces (if two control pieces are in one stack, they are considered one control piece). Any stacks which were not visited on the crawl are removed from the board.
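
The crawl itself is a plain graph search. Below is a sketch under the same assumed representation, reusing +DIRECTIONS+ from the previous sketch: a breadth-first crawl starting from every stack containing a red control piece, after which every unvisited stack is removed.

    (defun control-positions (board)
      "Positions of all stacks containing a red control piece."
      (loop for pos being the hash-keys of board using (hash-value stack)
            when (member :red stack) collect pos))

    (defun prune-disconnected (board)
      "After each move: crawl from the control pieces, then remove every
    stack the crawl never reached."
      (let ((visited (make-hash-table :test #'equal))
            (frontier (control-positions board)))
        ;; Breadth-first crawl over occupied, adjacent spaces.
        (loop for pos = (pop frontier)
              while pos
              unless (gethash pos visited) do
                (setf (gethash pos visited) t)
                (dolist (dir +directions+)
                  (let ((neighbor (mapcar #'+ pos dir)))
                    (when (stack-at board neighbor)
                      (push neighbor frontier)))))
        ;; Remove every stack the crawl never visited.
        (loop for pos being the hash-keys of board
              unless (gethash pos visited) collect pos into dead
              finally (dolist (p dead) (remhash p board)))))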

A DVONN game is played in three discrete phases: the placement phase, the movement phase, and the end game. In the following sections, each phase is described.


Figure 2: The game is over because neither White nor Black can move his pieces.


3.1 Placement Phase

In this phase, the board starts out empty. The white player is given two red pieces and twenty-three white pieces; the black player is given the remaining red piece and the twenty-three black pieces. The players take turns placing a piece on an empty space on the board, being obligated to place the red pieces first. This process continues until the board is filled.

3.2 Movement Phase

In the movement phase, the white player again begins. The players take turns moving a legal stack to a legal spot on the board, per the definition of a legal move presented above. The bulk of the game takes place in this phase. The general strategy is to capture your opponent’s pieces while preserving your own, by staying near a control piece and keeping your own pieces in a group.

3.3 End Game

Nearing the end of the game, there may be no legal moves for a player to make. In this case, that player is skipped and the other player may move. This process continues until neither player is able to make a legal move; a player is obligated to move if he is able. Figure 2 shows an example of a terminated game.

4 The Minimax Algorithm

Algorithm 1 Naïve Minimax

Require: node, depth ≥ 0
Ensure: α is the maximum attainable score
  if depth ≤ 0 ∨ node.isTerminal() then
    return node.rank()
  end if
  α ⇐ −∞
  if ¬node.isMaxPlayer() then
    α ⇐ +∞
  end if
  child ⇐ node.firstChild()
  while child ≠ nil do
    score ⇐ minimax(child, depth − 1)
    if node.isMaxPlayer() then
      α ⇐ max(α, score)
    else
      α ⇐ min(α, score)
    end if
    child ⇐ child.nextSibling()
  end while
  return α

The Minimax algorithm is the basis for the algorithm that the implementation uses. Its basic function is to minimize the possible loss that can be incurred in a decision-making tree. This is done by applying a heuristic “goodness” metric to states in our game tree; the Minimax algorithm seeks out decisions that maximize this heuristic.
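
Stated as a recurrence (a restatement of Algorithm 1, not an addition to it), the value assigned to a node n with remaining depth d is

  minimax(n, d) = rank(n)                                    if d ≤ 0 ∨ n is terminal
  minimax(n, d) = max{ minimax(c, d − 1) : c a child of n }  if n is a maximizing node
  minimax(n, d) = min{ minimax(c, d − 1) : c a child of n }  otherwise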

A pure minimax search is fixed-ply. Given that in a DVONN game there can be many movable stacks, each with up to six possible directions, our game tree has a very high branching factor. Thus, we are interested in the various improvements we can build upon the Minimax algorithm.

For a given game with branching factor b at each possible state, and a ply depth of d, the running time of the naïve minimax is O(b^d), since all possible branches are explored.
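
For concreteness, here is a minimal Common Lisp sketch of Algorithm 1 (not the paper’s actual source; TERMINAL-P, RANK, MAX-PLAYER-P, and CHILDREN are assumed interfaces to the game semantics):

    (defun minimax (node depth)
      "Naive fixed-ply minimax, following Algorithm 1."
      (if (or (<= depth 0) (terminal-p node))
          (rank node)
          ;; Score every child one ply shallower, then take the best
          ;; score for whichever player moves at this node.
          (let ((scores (mapcar (lambda (child) (minimax child (1- depth)))
                                (children node))))
            (if (max-player-p node)
                (reduce #'max scores)
                (reduce #'min scores)))))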


4.1 Alpha-beta Pruning

Some moves in a Minimax search do not need to be completely evaluated. If at some point during our search of the child nodes we encounter a move which is worse than one already encountered in an ancestor or sibling node, we can cease to evaluate the remaining children, because they can have no effect on the outcome of the Minimax algorithm.

Alpha-beta pruning is highly desirable for a Minimax search since it improves the running time on average while still returning the same solution. The only necessary change is the addition of α and β, which are the best scores so far for the maximizing and minimizing players respectively.

In the worst case, no nodes are pruned and we are left with a running time exactly the same as that of the simple Minimax algorithm, O(b^d). In the best case, all of the first player’s moves must be explored, but only one of the second player’s moves must be examined to refute all the rest, with this pattern repeating down the game tree, for a final running time of O(b^(d/2)).
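
A sketch of Algorithm 2 in the same Common Lisp style as before (again an illustration, not the paper’s source):

    (defun alpha-beta (node depth &optional (alpha most-negative-fixnum)
                                            (beta  most-positive-fixnum))
      "Minimax with alpha-beta pruning, following Algorithm 2. ALPHA and
    BETA are the best scores so far for the maximizing and minimizing
    players."
      (if (or (<= depth 0) (terminal-p node))
          (rank node)
          (dolist (child (children node)
                         (if (max-player-p node) alpha beta))
            (let ((score (alpha-beta child (1- depth) alpha beta)))
              (if (max-player-p node)
                  (setf alpha (max alpha score))
                  (setf beta  (min beta score))))
            ;; Once alpha meets or exceeds beta, the remaining siblings
            ;; cannot change the result: prune them.
            (when (>= alpha beta)
              (return (if (max-player-p node) alpha beta))))))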

4.2 Quiescence

A fixed-ply search strategy does not work as well as one would like. It assumes that all moves are equally worth exploring, but this is almost never how humans operate. For example, in a game of Chess, a human would not seriously consider a move which certainly allows his opponent to deliver checkmate on the next move.

For the case of DVONN, the quiescence of a move, or more plainly how uninteresting it is, helps the IA make a move within a reasonable time. At the beginning of a DVONN game there are many possible moves and they are almost all identically beneficial, so there is not much use in exploring all of them to great depth. When the IA finds a move which is interesting enough, it can explore that branch in more depth than it would the others.

Algorithm 2 Alpha-Beta Pruning

Require: node, α, β, depth ≥ 0
Ensure: the best attainable score is returned
  if depth ≤ 0 ∨ node.isTerminal() then
    return node.rank()
  end if
  child ⇐ node.firstChild()
  while child ≠ nil do
    score ⇐ search(child, α, β, depth − 1)
    if node.isMaxPlayer() then
      α ⇐ max(α, score)
    else
      β ⇐ min(β, score)
    end if
    if α ≥ β then
      break
    end if
    child ⇐ child.nextSibling()
  end while
  if node.isMaxPlayer() then
    return α
  else
    return β
  end if

Algorithm 3 Quiescence

Require: parent, child, depth ≥ 0
  if |parent.rank() − child.rank()| ≥ ε then
    return search(child, α, β, depth)
  else
    return search(child, α, β, depth − 1)
  end if


There must be a heuristic for determining how interesting a move is. When a move is more interesting, we say it is noisy; otherwise it is quiet. When a noisy node is found, we should explore it in more depth.

Algorithm 3 shows how quiescence is used in this implementation, with appropriate changes made to Algorithm 2 to use it as a subroutine. For this implementation we define a parameter ε, which is a threshold on the amount of change in rank between the parent state and the child state. If the change exceeds the threshold, we do not charge a depth penalty for the child node.
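
In the Common Lisp style used above, the rule amounts to a one-line depth adjustment (a sketch; *EPSILON* and its default value are illustrative, the experiments of Section 6 suggesting a threshold of at least 3):

    (defparameter *epsilon* 3
      "Quiescence threshold on the rank change between parent and child.")

    (defun quiescent-depth (parent child depth)
      "Noisy children (a large rank swing) pay no depth penalty and are
    therefore searched deeper; quiet children are charged one ply."
      (if (>= (abs (- (rank parent) (rank child))) *epsilon*)
          depth
          (1- depth)))

    ;; In ALPHA-BETA, the recursive call then becomes:
    ;;   (alpha-beta child (quiescent-depth node child depth) alpha beta)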

5 Implementation

An IA employing the above algorithms was implemented in Common Lisp. For the ranking function, the implementation uses the simple score difference between the player and the opponent. For noisiness, a simple threshold check between the ranks of the parent and child states is used. The final algorithm is very close to Algorithm 3 as presented in this paper.
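
A sketch of that ranking function, again under the assumed board representation from Section 3 (a player’s score in DVONN being the total number of pieces in the stacks he controls):

    (defun score (board player)
      "Total pieces in all stacks whose top piece is PLAYER's color."
      (loop for stack being the hash-values of board
            when (eq (first stack) player)
              sum (length stack)))

    (defun rank-board (board player opponent)
      "The implementation's heuristic: simple score difference."
      (- (score board player) (score board opponent)))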

This solution implements part of an IA and the game semantics for DVONN. The tile-placement phase of the game was omitted in order to focus on the move-making phase of the game.

An example of two instances of these IA’s facing each other may be found in Appendix A.

This implementation also supports playing against a human, albeit completely in text mode. For information on running this implementation, please see the README file in the implementation directory; it contains the necessary information to play a match against the IA.

6 Experiment

In order to evaluate the effectiveness of the implementation, several different tests were designed. The specific goals of the evaluation are to determine the following criteria:

1. The IA performs better than a randomplayer.

2. Quiescence search performs better withlower thresholds.

It should be noted that we are not much interested in timing performance, but rather in the quality of the move selection. To test these goals, several different IA players were pitted against one another, varying the fixed ply and the quiescence threshold of each in order to observe the result. Each matchup was repeated for 40 trials (each time with a randomized board), with the exception of the control case of two random players, which was repeated 120 times due to the cheapness of its computation. For each game, only the final score of each player was measured. An additional statistic called “Rank” is introduced, which is the difference between the black and white players’ scores; this is also the metric by which the minimax algorithm ranks states for the black player. In each case, the matchup is constructed such that the black player is the intended winner. In Table 1 the notation (Ply, Threshold) denotes the two parameters of a player, where Ply is the fixed ply of the player and Threshold is the quiescence threshold. So, for example, a (0, ∞) player is a random player, since it has a fixed ply of zero and a positively infinite quiescence threshold.
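
The per-matchup statistics can be computed in the obvious way; a small sketch follows (assuming, as the ± bounds in Table 1 suggest, a normal approximation for the 95% confidence intervals):

    (defun mean (xs)
      (/ (reduce #'+ xs) (length xs)))

    (defun ci-95 (xs)
      "Return the sample mean of XS and the half-width of its 95%
    confidence interval under a normal approximation."
      (let* ((n (length xs))
             (m (mean xs))
             (var (/ (reduce #'+ (mapcar (lambda (x) (expt (- x m) 2)) xs))
                     (1- n))))
        (values (float m) (float (* 1.96 (sqrt (/ var n)))))))

    ;; (ci-95 ranks) returns the mean and the ± half-width reported
    ;; in Table 1.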

For evaluating criterion 1, we test the hypothesis H0 : Rank > 0 on the (0, ∞) vs. (1, ∞) case. We can see that the 95% confidence interval lies well above zero, and therefore we accept the hypothesis.

For evaluating criterion 2, we test the hypothesis H0 : Rank > 0 on the remaining cases (save for the control case of two random players). We find that the hypothesis holds in some cases but is inconclusive in most of them. We also have the rather bizarre result that it is statistically significant that White is favored to win a (1, 8) vs. (1, 6) match. There is some trend favoring lower thresholds, so we might conclude that a threshold of at least 3 provides


Table 1: A comparison of different IA’s being pitted against one another.

Game Type          Wins (W)  Wins (B)  Ties  Score (W)     Score (B)     Rank
(1, 8) vs. (1, 3)  17        23        0     8.15 ± 2.13   9.77 ± 2.07    1.62 ± 1.24
(1, 8) vs. (1, 4)  16        24        0     8.40 ± 2.02   9.42 ± 1.92    1.02 ± 1.13
(1, 8) vs. (1, 5)  19        21        0     7.59 ± 2.08   6.32 ± 2.70   -1.27 ± 1.42
(1, 8) vs. (1, 6)  23        15        2     9.40 ± 2.36   7.07 ± 2.14   -2.32 ± 1.28
(1, 8) vs. (1, 7)  20        19        1     7.54 ± 2.25   7.92 ± 2.07    0.37 ± 1.31
(1, ∞) vs. (1, 4)  17        22        1     8.47 ± 2.29   9.15 ± 2.41    0.67 ± 1.36
(0, ∞) vs. (1, ∞)  1         39        0     3.14 ± 3.41   19.67 ± 1.75  16.52 ± 1.37
(0, ∞) vs. (0, ∞)  59        53        8     10.15 ± 3.48  9.80 ± 3.69   -0.34 ± 2.37

reasonable results for the time of computation while still allowing us to accept H0.

7 Conclusion

The minimax search algorithm, given proper modifications, is a reasonable and, when parametrized carefully, inexpensive way to create a challenging computer opponent. At the very least it is better than a random opponent, and with proper adjustment of the quiescence threshold it can perform better than a plain fixed-ply search.

The rules of DVONN do not prohibit the use of standard game-playing IA techniques, and the minimax algorithm can work quite well for it. It is difficult to tell how well this algorithm works against humans, as for the bulk of the timeline of this project the minimax algorithm would errantly return random moves (and still beat its human opponents!); future work for this IA might therefore be a more formal study of its play against humans.

Also, this implementation uses a ranking heuristic which is not very sophisticated and does not fully capture the more complex strategies in DVONN. For example, a seasoned player might capture a control piece and move it away from a reservoir of his opponent’s pieces so that they will be removed from the board, or keep his pieces grouped together so that he has more control over whether his own are removed. The chosen ranking function does not support such strategies, as it seeks only to maximize its score. Therefore, another study could be conducted to test different ranking strategies to see which perform better than others.


A Sample Game

This section presents a sample game between a 1-ply quiescence-enabled player (black) and a fixed 1-ply player (white), in an attempt to demonstrate the difference which quiescence can make. Attention should be paid to moves 18 and on, where black ultimately takes a valuable piece which white leaves vulnerable.

A.1 White Move

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1

White: 23, Black: 23
White moves (7 4) E.


Figure 3: Bar chart of the White score, Black score, and Rank for each game type in Table 1, with associated confidence intervals.

A.2 Black Move

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 2 1

White: 24, Black: 22
Black moves (9 4) W.

A.3 White Move

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 3

White: 22, Black: 24

White moves (8 3) SW.

A.4 Black Move

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 4

White: 25, Black: 21
Black moves (4 4) E.


A.5 White Move

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1

1 1 1 2 1 4

White: 24, Black: 22
White moves (6 4) W.

A.6 Black Move

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1

1 1 1 3 4

White: 26, Black: 20
Black moves (4 3) SE.

A.7 White Move

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 4 4

White: 23, Black: 23
White moves (6 3) NE.

A.8 Black Move

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 2 1 1 1

1 1 1 1 1 1 1

1 1 1 4 4

White: 24, Black: 22
Black moves (5 4) NW.

A.9 White Move

1 1 5 1 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 2 1 1 1

1 1 1 1 1 1 1

1 1 1 4

White: 23, Black: 23
White moves (4 0) W.

A.10 Black Move

1 1 6 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 2 1 1 1

1 1 1 1 1 1 1

1 1 1 4

White: 28, Black: 18
Black moves (2 0) E.


A.11 White Move

1 7 1 1 1 1 1

1 1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 2 1 1 1

1 1 1 1 1 1 1

1 1 1 4

White: 22, Black: 24
White moves (3 1) NW.

A.12 Black Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 2 1 1 1

1 1 1 1 1 1 1

1 1 1 4

White: 29, Black: 17
Black moves (3 3) NE.

A.13 White Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1 1

1 1 1 1 2 1 1 2 1 1 1

1 1 1 1 1 1

1 1 1 4

White: 28, Black: 18
White moves (5 2) W.

A.14 Black Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1 1

1 1 1 1 3 1 2 1 1 1

1 1 1 1 1 1

1 1 1 4

White: 30, Black: 16
Black moves (0 3) NW.

A.15 White Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1 1

2 1 1 1 3 1 2 1 1 1

1 1 1 1 1

1 1 1 4

White: 29, Black: 17
White moves (8 2) E.

A.16 Black Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1 1

2 1 1 1 3 1 2 2 1

1 1 1 1 1

1 1 1 4

White: 30, Black: 16
Black moves (9 3) NW.


A.17 White Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1 1

2 1 1 1 3 1 2 3 1

1 1 1 1

1 1 1 4

White: 28, Black: 18
White moves (8 1) SE.

A.18 Black Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1

2 1 1 1 3 1 2 4 1

1 1 1 1

1 1 1 4

White: 31, Black: 15
Black moves (10 2) W.

A.19 White Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1

2 1 1 1 3 1 2 5

1 1 1 1

1 1 1 4

White: 27, Black: 19
White moves (7 2) E.

A.20 Black Move

1 8 1 1 1 1 1

1 1 1 1 1 1 1 1

2 1 1 1 3 1 7

1 1 1

1 1 1

White: 27, Black: 14
Black moves (3 2) NW.

A.21 White Move

1 8 1 1 1 1 1

1 1 2 1 1 1 1 1

2 1 1 3 1 7

1 1 1

1 1 1

White: 26, Black: 15
White moves (9 2) W.

A.22 Black Move

1 8 1 1 1 1 1

1 1 2 1 1 1 1 1

2 1 8 3 1

1 1 1

1 1 1

White: 27, Black: 14
Black moves (0 2) E.


A.23 White Move

1 8 1 1 1 1 1

1 1 2 1 1 1 1 1

1 10 3 1

1 1 1

1 1 1

White: 19, Black: 22
White moves (4 2) W.

A.24 Black Move

1 8 1 1 1 1 1

1 1 2 1 1 1 1 1

4 10 1

1 1 1

1 1 1

White: 20, Black: 21
Black moves (2 1) E.

A.25 White Move

1 1 1 1 1 1

1 1 3 1 1 1 1

4 10 1

1 1 1

1 1 1

White: 12, Black: 21
White moves (5 1) W.

A.26 Black Move

1 1 1 1 1 1

1 1 4 1 1 1

4 10 1

1 1 1

1 1 1

White: 15, Black: 18
Black moves (1 1) SW.

A.27 White Move

1 1 1 1 1 1

1 4 1 1 1

5 10 1

1 1 1

1 1 1

White: 11, Black: 22
White moves (7 1) NE.

A.28 Black Move

1 1 1 1 2 1

1 4 1 1

5 10 1

1 1 1

1 1 1

White: 12, Black: 21
Black moves (9 0) W.


A.29 White Move

1 1 1 1 3

1 4 1 1

5 10 1

1 1 1

1 1 1

White: 10, Black: 23
White moves (7 0) SW.

A.30 Black Move

1 1 1

1 4 2 1

5 10 1

1 1 1

1 1 1

White: 10, Black: 20
Black moves (1 2) E.

A.31 White Move

1 1

4 2 1

10 6

1 1 1

1 1 1

White: 8, Black: 20
White moves (5 0) E.

A.32 Black Move

2

2 1

10 6

1 1 1

1 1 1

White: 4, Black: 20
Black moves (3 4) W.

A.33 White Move

2

2 1

10 6

1 1 1

1 2

White: 4, Black: 21
White moves (6 1) SW.

A.34 Black Move

1

10 6

1 1 3

1 2

White: 3, Black: 21
Black moves (1 4) E.


A.35 White Move

1

10 6

1 1 3

3

White: 3, Black: 21
White moves (5 3) W.

A.36 Black Move

1

10

1 4

3

White: 4, Black: 14
Black moves (1 3) NE.

A.37 White Move

1

11

4

3

White: 4, Black: 14
White cannot move.

A.38 Black Move

1

11

4

3

White: 4, Black: 14

References

[1] Matthias Bodenstein. Dvonner. http://matztias.de/spiele/dvonner/dvonner-e.htm, 2003. Accessed 19 November, 2012.

[2] Murray Campbell, A. Joseph Hoane Jr., and Feng-hsiung Hsu. Deep Blue. Artificial Intelligence, 134(1–2):57–83, 2002.

[3] FatPhil. Rororo the bot. http://www.littlegolem.net/jsp/info/player.jsp?plid=14203, 2012. Accessed 19 November, 2012.

[4] Jan. Jan’s program. http://www.littlegolem.net/jsp/info/player.jsp?plid=3107, 2012. Accessed 19 November, 2012.

[5] C.-S. Lee, M.-H. Wang, G. Chaslot, J.-B. Hoock, A. Rimmel, O. Teytaud, S.-R. Tsai, S.-C. Hsu, and T.-P. Hong. The computational intelligence of MoGo revealed in Taiwan’s computer Go tournaments. IEEE Transactions on Computational Intelligence and AI in Games, 1(1):73–89, 2009.

[6] John McCarthy. What is artificial intelligence?, 2007. Accessed 18 November, 2012.

[7] Martin Trautmann. Holtz. http://holtz.sourceforge.net/, 2011. Accessed 19 November, 2012.

[8] Nivo Zero. dDvonn. http://www.nivozero.com/DVONN/, 2001. Accessed 19 November, 2012.
