Toward a Unified Catalog of Implemented Cognitive Architectures

Alexei V. SAMSONOVICH Krasnow Institute for Advanced Study, George Mason University,

4400 University Drive MS 2A1, Fairfax, VA 22030-4444, USA [email protected]

Abstract. This work is a review of the online Comparative Table of Cognitive Architectures (the version that was available at http://bicasymposium.com/cogarch on September 20, 2010). This continuously updated online resource is a collective product of many researchers and developers of cognitive architectures. Names of its contributors (sorted alphabetically by the represented architecture name) are: James S. Albus (4D/RCS), Christian Lebiere and Andrea Stocco (ACT-R), Stephen Grossberg (ART), Brandon Rohrer (BECCA), Balakrishnan Chandrasekaran and Unmesh Kurup (biSoar), Raul Arrabales (CERA-CRANIUM), Fernand Gobet and Peter Lane (CHREST), Ron Sun (CLARION), Ben Goertzel (CogPrime), Frank Ritter and Rick Evertsz (CoJACK), George Tecuci (Disciple), Shane Mueller (EPIC), Susan L. Epstein (FORR), Stuart C. Shapiro (GLAIR), Alexei Samsonovich (GMU BICA), Jeff Hawkins (HTM), David C. Noelle (Leabra), Stan Franklin (LIDA), Pei Wang (NARS), Akshay Vashist and Shoshana Loeb (Nexting), Cyril Brom (Pogamut), Nick Cassimatis (Polyscheme), L. Andrew Coward (Recommendation Architecture), Ashok Goel, J. William Murdock and Spencer Rugaber (REM), John Laird (Soar), and Kristinn Thórisson (Ymir). All these contributions are summarized in this work in a form that makes the described architectures easy to compare against each other.

Keywords. Cognitive architectures, model and data sharing, unifying framework

Introduction

This work is a review of the online resource called the Comparative Table of Cognitive Architectures.1 At the time of writing, this resource is in a process of rapid development. It is a collective product of many researchers and developers of cognitive architectures, who submitted their contributions by email to the author of this paper for posting on the Internet. Each contribution was posted as is, without sub-editing, which inevitably resulted in some inconsistency in terminology and in the interpretation of statements by the contributors. Despite these inconsistencies and the incompleteness of the source, we (the contributors to the online resource) believe that the present snapshot of the Comparative Table of Cognitive Architectures needs to be documented, so that, in taking further steps, authors will be able to cite the documentation of this first step.

1 Specifically, the version of the Comparative Table of Cognitive Architectures that was available online at http://bicasymposium.com/cogarch/ on September 20, 2010.


Historically, the first prototype of the Comparative Table of Cognitive Architectures was made available online on October 27, 2009. The initiative for this comparative table started during preparation of a discussion panel on cognitive architectures organized by Christian Lebiere and Alexei Samsonovich at the 2009 AAAI Fall Symposium on BICA,2 held in Arlington, Virginia, in November of 2009. This panel, devoted to a general comparative analysis of cognitive architectures, involved 15 panelists: Christian Lebiere (Chair), Bernard Baars, Nick Cassimatis, Balakrishnan Chandrasekaran, Antonio Chella, Ben Goertzel, Steve Grossberg, Owen Holland, John Laird, Frank Ritter, Stuart Shapiro, Andrea Stocco, Ron Sun, Kristinn Thórisson, and Pei Wang. The idea was to bring together researchers from disjointed communities that speak different languages and frequently ignore each other’s existence. Quite unexpectedly, the attempt to engage them in a common discussion was very successful [1], and after the panel most of the panelists joined the initiative by submitting their entries to the table. The time has come to make the collected entries more visible by summarizing them in a paper. The names of contributors to the online resource (as of September 20, 2010) and the short names of the cognitive architectures they represent are listed below, sorted alphabetically by architecture name.

1. James S. Albus (representing 4D/RCS)
2. Christian Lebiere and Andrea Stocco (representing ACT-R)
3. Stephen Grossberg (representing ART)
4. Brandon Rohrer (representing BECCA)
5. Balakrishnan Chandrasekaran and Unmesh Kurup (representing biSoar)
6. Raul Arrabales (representing CERA-CRANIUM)
7. Fernand Gobet and Peter Lane (representing CHREST)
8. Ron Sun (representing CLARION)
9. Ben Goertzel (representing CogPrime)
10. Frank Ritter and Rick Evertsz (representing CoJACK)
11. George Tecuci (representing Disciple)
12. Shane Mueller and Andrea Stocco (representing EPIC)
13. Susan L. Epstein (representing FORR)
14. Stuart C. Shapiro (representing GLAIR)
15. Alexei Samsonovich (representing GMU BICA)
16. Jeff Hawkins (representing HTM)
17. David C. Noelle (representing Leabra)
18. Stan Franklin (representing LIDA)
19. Pei Wang (representing NARS)
20. Akshay Vashist and Shoshana Loeb (representing Nexting)
21. Cyril Brom (representing Pogamut)
22. Nick Cassimatis (representing Polyscheme)
23. L. Andrew Coward (representing Recommendation Architecture)
24. Ashok Goel, J. William Murdock and Spencer Rugaber (representing REM)
25. John Laird and Andrea Stocco (representing Soar)
26. Kristinn Thórisson (representing Ymir)

2 AAAI: Association for the Advancement of Artificial Intelligence. BICA: Biologically Inspired Cognitive Architectures.


Since the beginning of modern research in cognitive modeling, it has been understood that a successful approach to intelligent agent design should be based on integrative cognitive architectures describing complete agents [2]. Since then, the cognitive architecture paradigm has proliferated extensively [3-7], resulting in powerful frameworks such as ACT-R and Soar (described below). While these two examples are the best known and most widely used, other architectures described here may be known only within limited circles. Nevertheless, every architecture in this review is treated in the same way. This seems necessary in order to compare them all and to understand why cognitive modeling has not produced a revolutionary breakthrough in artificial intelligence in recent decades. In order to see what vital features or parameters are still missing in modern cognitive architectures, it is necessary to put many disparate approaches next to each other for comparison on an equal basis. Only then can a unifying framework for integration emerge. This is the main objective of the online Comparative Table and also of the present work.

The description of each cognitive architecture in this review follows one and the same template, which closely mirrors the template of the table posted online at http://bicasymposium.com/cogarch.1 The data included here, with the consent of each contributor, are based on and correspond to the data posted in the Comparative Table.1

1. 4D/RCS

This section is based on the contribution of James S. Albus to the Comparative Table of Cognitive Architectures.1 RCS stands for Real-time Control System.

4D/RCS is a reference model architecture for unmanned vehicle systems. 4D/RCS operates machines and drives vehicles autonomously. It is general-purpose and robust in real-world environments. It uses information from a battlefield information network, a priori maps, etc. It pays attention to valued goals and makes decisions about what is most important based on rules of engagement and situational awareness.

1.1. Overview

Knowledge and experiences are represented in 4D/RCS using images, maps, objects, events, state, attributes, relationships, situations, episodes, frames. Main components of the architecture include: Behavior Generation, World Modeling, Value Judgment, Sensory Processing, and Knowledge Database. These are organized in a hierarchical real-time control system (RCS) architecture.

The cognitive cycle, or control loop, of 4D/RCS (Figure 1) is based on an internal world model that supports perception and behavior. Perception extracts the information necessary to keep the world model current and accurate from the sensory data stream. Behavior uses the world model to decompose goals into appropriate action.
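
As a rough illustration of this loop, the sketch below shows a sense-model-act cycle in which perception keeps an internal world model current while behavior generation decomposes a goal into actions. All names here are hypothetical; this is a minimal pedagogical sketch, not NIST code.

```python
# Minimal sketch of the 4D/RCS-style sense-model-act loop described above.
# All class and function names are hypothetical illustrations, not NIST code.

class WorldModel:
    """Internal model kept current by perception and queried by behavior."""
    def __init__(self):
        self.state = {}

    def update(self, percepts):
        # Perception folds new estimates into the model state.
        self.state.update(percepts)

def perceive(sensor_data):
    # Extract only the information needed to keep the model accurate,
    # e.g. a filtered obstacle flag rather than raw range readings.
    return {"obstacle_ahead": sensor_data.get("range", 99.0) < 2.0}

def generate_behavior(world_model, goal):
    # Decompose the goal into an action using the current model state.
    if world_model.state.get("obstacle_ahead"):
        return "steer_around"
    return "advance_toward:" + goal

def control_loop(goal, sensor_stream):
    model = WorldModel()
    for sensor_data in sensor_stream:
        model.update(perceive(sensor_data))      # perception side of the loop
        yield generate_behavior(model, goal)     # behavior side of the loop

if __name__ == "__main__":
    readings = [{"range": 10.0}, {"range": 1.5}, {"range": 8.0}]
    print(list(control_loop("waypoint_A", readings)))
```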

The most recent representative publication for 4D/RCS is [8]. This architecture was implemented, tested and studied at NIST (http://www.isd.mel.nist.gov/projects/rcslib/) using C++, Windows Real-time, VxWorks, Neutral Messaging Language (NML), Mobility Open Architecture Simulation and Tools (MOAST: http://sourceforge.net/projects/moast/), and Urban Search and Rescue Simulation (USARSim: http://sourceforge.net/projects/usarsim/). The list of funding programs, projects and environments in which the architecture was used can be found at http://members.cox.net/bica2009/cogarch/4DRCS.pdf.

1.2. Support for Common Components and Features

The framework of 4D/RCS supports the following features and components that are common for many cognitive architectures: working memory, semantic memory, episodic memory, procedural memory, iconic memory (image and map representations), perceptual memory (pixels are segmented and grouped into entities and events), cognitive map, reward system (Value Judgment processes compute cost, benefit, risk), attention control (can focus attention on regions of interest), and consciousness (the architecture is aware of self in relation to the environment and other agents).

Figure 1. A bird’s eye view of the cognitive cycle of 4D/RCS.

Sensory, motor and other specific modalities include visual input (color, stereo), LADAR, GPS, odometry, inertial imagery. Supported cognitive functionality includes self-awareness.

1.3. Learning, Goal and Value Generation, and Cognitive Development

The following general paradigms and aspects of learning are supported by 4D/RCS: unsupervised learning (updates control system parameters in real time), supervised learning (learns from subject matter experts), arbitrary mixtures of unsupervised and supervised learning, real-time learning, fast stable learning (uses the CMAC algorithm, which learns quickly from error correction; when there are no errors, learning stops), learning from arbitrarily large databases (all applications are real-world and real-time), and learning of non-stationary databases (battlefield environments change unpredictably).

Learning algorithms used in 4D/RCS include reinforcement learning (learns parameters for actuator backlash, braking, steering, and acceleration). The architecture supports learning of new representations (maps and trajectories in new environments).
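
The CMAC mentioned above (Albus's Cerebellar Model Articulation Controller) is a well-documented algorithm; the sketch below is a generic textbook CMAC error-correction update, assuming a one-dimensional input and a sparse weight table (it is not the NIST implementation). When the prediction error is zero the weights stop changing, which is the "learning stops" property noted above.

```python
# Generic textbook CMAC sketch: overlapping tilings coarse-code the input,
# and the prediction error is distributed across the active tiles.
import math

class CMAC:
    def __init__(self, n_tilings=8, tile_width=0.5, alpha=0.2):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.alpha = alpha
        self.weights = {}  # sparse table: (tiling, tile_index) -> weight

    def _active_tiles(self, x):
        # Each tiling is offset by a fraction of the tile width, so nearby
        # inputs share most of their tiles (coarse coding).
        return [(t, math.floor((x + t * self.tile_width / self.n_tilings)
                               / self.tile_width))
                for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.weights.get(tile, 0.0) for tile in self._active_tiles(x))

    def train(self, x, target):
        tiles = self._active_tiles(x)
        error = target - self.predict(x)
        # Zero error means no update: learning stops when predictions are right.
        for tile in tiles:
            self.weights[tile] = (self.weights.get(tile, 0.0)
                                  + self.alpha * error / len(tiles))

cmac = CMAC()
for _ in range(50):
    cmac.train(1.0, 3.0)
print(round(cmac.predict(1.0), 2))  # converges toward 3.0
```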

1.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

The following general paradigms are implemented in 4D/RCS: problem solving (uses rules and/or search methods for solving problems), decision making (makes decision based on Value Judgment calculations).


In addition, the following specific paradigms were modeled with this architecture: task switching, Tower of Hanoi/London, dual task, visual perception with comprehension, spatial exploration, learning and navigation, object/feature search in an environment (search for targets in regions of interest), learning from subject matter experts.

2. ACT-R

This section is based on the contribution of Christian Lebiere and Andrea Stocco to the Comparative Table of Cognitive Architectures.1 ACT-R stands for Adaptive Control of Thought - Rational.

2.1. Overview

Knowledge and experiences are represented in ACT-R using chunks and productions. ACT-R is composed of a set of nearly-independent modules that make information accessible to dedicated buffers of limited capacity (Figure 2). Information is encoded in the form of chunks and production rules; production rules are also responsible for accessing and setting chunks in the module buffers.
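
As a toy illustration of this scheme (hypothetical names, not the actual Lisp implementation), a chunk can be viewed as a typed bundle of slot-value pairs held in a buffer, and a production as a condition-action rule that matches and modifies buffer contents on each cycle:

```python
# Illustrative sketch of the chunk/buffer/production scheme described above;
# all names are hypothetical, and the real system is far richer.

# A chunk is a typed bundle of slot-value pairs.
goal_chunk = {"type": "count", "current": 2, "target": 4}

# A production tests buffer contents and, if it matches, modifies them.
def increment_if_below_target(buffers):
    goal = buffers["goal"]
    if goal["type"] == "count" and goal["current"] < goal["target"]:
        goal["current"] += 1           # action: set a slot in the buffer
        return True                    # the rule fired
    return False

buffers = {"goal": goal_chunk}
while increment_if_below_target(buffers):   # the match-fire cycle
    print("count is now", buffers["goal"]["current"])
```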

The ACT theory was originally introduced in [9]. The key reference for ACT-R is [10]. Most recent representative publications include [11]. ACT-R was implemented and studied experimentally at Carnegie Mellon University using Lisp with an optional TCL/Tk interface. Implementations in other programming languages have also been developed by the user community.

Figure 2. A bird’s eye view of ACT-R. The boxes are modules, controlled by a central procedural module through limited-capacity buffers.


2.2. Support for Common Components and Features

The framework of ACT-R supports the following features and components that are common for many cognitive architectures: semantic memory (encoded as chunks), procedural memory. Working and episodic memory systems are not explicitly defined, but the set of buffers used in inter-module communication can be thought of as providing working memory capacities.

Sensory, motor and other specific modalities include: visual input (propositional, based on chunks), auditory input (propositional, based on chunks), basic motor functions (largely hands and finger control).

2.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in ACT-R include: reinforcement learning (for productions, linear discount version), Bayesian update (for memory retrieval). The architecture supports production compilation (forms new productions) and automatic learning of chunks (from buffer contents).
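
For context, these mechanisms correspond to ACT-R's standard published learning equations: the base-level activation of a chunk reflects its recency and frequency of use (a Bayesian estimate of the log odds that it will be needed), and production utilities are updated by a linear discounting rule:

\[
B_i = \ln\!\left(\sum_{j=1}^{n} t_j^{-d}\right),
\qquad
U_i(n) = U_i(n-1) + \alpha\,\big[R_i(n) - U_i(n-1)\big],
\]

where \(t_j\) is the time elapsed since the \(j\)-th past use of chunk \(i\), \(d\) is the base-level decay parameter (conventionally 0.5), \(\alpha\) is the utility learning rate, and \(R_i(n)\) is the reward received at the \(n\)-th firing of production \(i\).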

2.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

General paradigms of modeling studies with ACT-R include: problem solving, decision making, language processing, working memory tests.

In addition, the following specific paradigms were modeled with this architecture: Stroop task (multiple models), task switching (multiple models), Tower of Hanoi/London, psychological refractory period (PRP) tasks, dual task, N-Back. A full list of paradigms modeled in ACT-R and associated publications can be found at: http://act-r.psy.cmu.edu/publications/index.php.

3. ART

This section is based on the contribution of Stephen Grossberg to the Comparative Table of Cognitive Architectures.1 ART stands for Adaptive Resonance Theory.

Biologically-relevant cognitive architectures should clarify how individuals adapt autonomously in real time to a changing world filled with unexpected events; should explain and predict how several different types of learning (recognition, reinforcement, adaptive timing, spatial, motor) interact to this end; should use a small number of equations in a larger number of modules, or microassemblies, to form modal architectures (vision, audition, cognition, etc.) that control the different modalities of intelligence; should reflect the global organization of the brain into parallel processing streams that compute computationally complementary properties within and between these modal architectures; and should exploit the fact that all parts of the neocortex, which supports the highest levels of intelligence in all modalities, are variations of a shared laminar circuit design and thus can communicate with one another in a computationally self-consistent way.


3.1. Overview of the Architecture and Its Study

Knowledge and experiences are represented in ART (Figure 3) using visual 3D boundary and surface representations; auditory streams; spatial, object, and verbal working memories; list chunks; drive representations for reinforcement learning; orienting system; expectation filter; spectral timing networks. Main components of the architecture include model brain regions, notably laminar cortical and thalamic circuits.

Figure 3. A bird’s eye view of ART (included with permission of Stephen Grossberg).

ART was originally introduced in [12]. Most recent representative publications can be found at http://cns.bu.edu/~steve. ART was implemented and studied experimentally using nonlinear neural networks (with feedback, multiple spatial and temporal scales). Funding programs, projects and environments in which the architecture was used are also listed at the above URL.

3.2. Support for Common Components and Features

The framework of ART supports the following features and components that are common for many cognitive architectures: working memory (recurrent shunting on-center off-surround network that obeys the LTM Invariance Principle and Inhibition of Return rehearsal law), semantic memory (limited associations between chunks), episodic memory (limited, builds on hippocampal spatial and temporal representations), procedural memory (multiple explicitly defined neural systems for learning, planning and control of action), iconic memory (emerges from the role of top-down attentive interactions in laminar models of how the visual cortex sees), perceptual memory (models development of the laminar visual cortex and explains how both fast perceptual learning with attention and awareness, and slow perceptual learning without attention or awareness, can occur), cognitive map (networks that learn entorhinal grid cell and hippocampal place field representations online), reward system (models how the amygdala, hypothalamus, and basal ganglia interact with sensory and prefrontal cortex to learn to direct attention and actions towards valued goals; used to help explain data about classical and instrumental conditioning, mental disorders (autism, schizophrenia), and decision making under risk), attention control and consciousness (clarifies how boundary, surface, and prototype attention differ and work together to coordinate object and scene learning). Adaptive Resonance Theory predicts a link between processes of Consciousness, Learning, Expectation, Attention, Resonance, and Synchrony (CLEARS), and that All Conscious States Are Resonant States.

Sensory, motor and other specific modalities include: visual input (natural static and dynamic scenes, psychophysical displays: used to develop the emerging architecture of the visual system from retina to prefrontal cortex, including how 3D boundary and surface representations form, and how view-dependent and view-invariant object categories are learned under coordinated guidance of spatial and object attention), auditory input (natural sound streams: used to develop the emerging architecture of the auditory system for auditory streaming and speaker-invariant speech recognition), and special modalities (SAR, LADAR, multispectral IR, night vision, etc.).

3.3. Learning, Goal and Value Generation, and Cognitive Development

The following general paradigms and aspects of learning are supported by ART: unsupervised learning (can categorize objects and events or alter spatial maps and sensory-motor gains without supervision), supervised learning (can learn from predictive mismatches with environmental constraints, or explicit teaching signals, when they are available), arbitrary mixtures of unsupervised and supervised learning (e.g., the ARTMAP family of models), real-time learning (both ART (match learning) and Vector Associative Map (VAM; mismatch learning) models use real-time local learning laws), fast stable learning (i.e., adaptive weights converge on each trial without forcing catastrophic forgetting: theorems prove that ART can categorize events in a single learning trial without experiencing catastrophic forgetting in dense non-stationary environments; mismatch learning cannot do this, but this is adaptive in learning spatial and motor data about changing bodies), learning from arbitrarily large databases (i.e., not toy problems: theorems about ART algorithms show that they can do fast learning and self-stabilizing memory in arbitrarily large non-stationary databases; ART is therefore used in many large-scale applications, see http://techlab.bu.edu/), and learning of non-stationary databases (i.e., when environmental rules change unpredictably).

Learning algorithms used in ART include: reinforcement learning (CogEM and TELOS models of how the amygdala and basal ganglia interact with the orbitofrontal cortex, etc.), Bayesian effects as emergent properties, a combination of Hebbian and anti-Hebbian properties in learning dynamics, gradient descent methods, and learning of new representations via self-organization.
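
As one concrete instance of these mechanisms, the sketch below implements the textbook fuzzy ART category cycle (complement coding, category choice, vigilance test, mismatch reset, and fast learning). Parameter names follow the standard fuzzy ART papers; this is an illustrative sketch, not code from Grossberg's group.

```python
# Minimal fuzzy ART sketch illustrating match-based fast stable learning.
import numpy as np

def fuzzy_and(x, y):
    return np.minimum(x, y)

class FuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta  # vigilance, choice, rate
        self.w = []  # one weight vector per committed category

    def train(self, a):
        i = np.concatenate([a, 1.0 - a])  # complement coding
        # Rank committed categories by the choice function T_j.
        order = sorted(range(len(self.w)),
                       key=lambda j: -fuzzy_and(i, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()))
        for j in order:
            match = fuzzy_and(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:  # resonance: learn and stop searching
                self.w[j] = (self.beta * fuzzy_and(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
            # otherwise: mismatch reset, try the next category
        self.w.append(i.copy())   # commit a new category node
        return len(self.w) - 1

art = FuzzyART()
print(art.train(np.array([0.9, 0.1])))    # -> 0 (new category)
print(art.train(np.array([0.85, 0.15])))  # -> 0 (resonates with it)
print(art.train(np.array([0.1, 0.9])))    # -> 1 (mismatch reset, new category)
```

The vigilance parameter rho controls category granularity: higher vigilance forces more, finer categories, which is how the same cycle can yield both coarse and exemplar-like codes.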

3.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with ART include visual and auditory information processing. Specifically, the following paradigms were modeled: problem solving, decision making, analogical reasoning (in rule discovery applications), language processing, working memory tests.

In addition, the following specific paradigms were modeled with this architecture: Stroop task, task switching, Tower of Hanoi/London, N-Back, visual perception with comprehension, spatial exploration, learning and navigation, object/feature search in an environment, pretend-play (an architecture for teacher-child imitation).

3.5. Meta-Theoretical Questions, Discussion and Conclusions

Does this architecture allow for using only local computations? Yes: all operations are defined by local operations in neural networks.
Can it function autonomously? Yes: ART models can continue to learn stably about non-stationary environments while performing in them.
Is it general-purpose in its modality; i.e., is it brittle?
Can it pay attention to valued goals? Yes: ART derives its memory stability from matching bottom-up data with learned top-down expectations that pay attention to expected data. ART-CogEM models use cognitive-emotional resonances to focus attention on valued goals.
Can it flexibly switch attention between unexpected challenges and valued goals? Yes: top-down attentive mismatches drive attention reset, shifts, and memory search. Cognitive-emotional and attentional shroud mechanisms modulate attention shifts.
Can reinforcement learning and motivation modulate perceptual and cognitive decision-making in this architecture? Yes: cognitive-emotional and perceptual-cognitive resonances interact together for this purpose.
Can it adaptively fuse information from multiple types of sensors and modalities? ART categorization discovers multi-modal feature and hierarchical rule combinations that lead to predictive success.

4. BECCA

This section is based on the contribution of Brandon Rohrer to the Comparative Table of Cognitive Architectures.1 BECCA stands for Brain-Emulating Cognition and Control Architecture. It is a general unsupervised learning and control approach based on neuroscientific and psychological models of humans.

BECCA was designed to solve the problem of natural-world interaction. The research goal is to place BECCA into a system with unknown inputs and outputs and have it learn to successfully achieve its goals in an arbitrary environment. The current state-of-the-art in solving this problem is the human brain. As a result, BECCA's design and development is based heavily on insights drawn from neuroscience and experimental psychology. The development strategy emphasizes physical embodiment in robots.

4.1. Overview

Knowledge and experiences are represented in BECCA using discrete, symbolic percepts and actions. Raw sensory inputs are converted to symbolic percepts, and symbolic actions are refined into specific motor commands. A bird’s eye view of the architecture is shown in Figure 4. Main components can be characterized as follows. Episodic learning and procedural learning are modeled in S-Learning, which is based on sequence representations in the hippocampus. Semantic memory is modeled using Context-Based Similarity (CBS). Perception (conversion of raw sensor data into symbolic percepts) is performed using X-trees, which are based on the function of the cortex. Action refinement is performed using S-trees, which are based on the function of the cerebellum.


Figure 4. A bird’s eye view of BECCA.

BECCA was originally introduced in [13]. Most recent representative publications include [14,87]. BECCA was implemented and tested / studied experimentally; Java code for robot implementations and MATLAB prototypes of individual components can be found at http://sites.google.com/site/brohrer/source.

4.2. Support for Common Components and Features

The framework of BECCA supports the following features and components that are common for many cognitive architectures: working memory (a fixed number of recent percepts and actions are stored for use in decision making in S-learning), semantic memory (percepts are clustered into more abstract percepts using Context-Based Similarity; percepts' membership in a cluster is usually partial, and a percept may be a member of many clusters), episodic memory, procedural memory (both episodic and procedural memories are represented in S-learning as sequences of percepts and actions), iconic memory (percepts identified by X-trees may be accessed by S-learning until they are “overwritten” by new experience), cognitive map (somewhat; the processing performed by X-trees results in related percepts being located “near” each other; for instance, it generates visual primitives similar to the direction-specific fields in V1), reward system (basic rewards, such as for finding an energy source or receiving approval, are an integral part of the system; more sophisticated goal-seeking behaviors are predicated on these).

Sensory, motor and other specific modalities: BECCA makes no assumptions about the nature of its inputs. It can handle inputs of any nature, including visual, auditory, tactile, proprioceptive, and chemical. It can equally well handle inputs from nonstandard modalities, including magnetic, radiation-detection, GPS, and LADAR. It can, in addition, handle symbolic inputs, such as ASCII text, days of the week, and other categorical data. It has been implemented with color vision, ultrasonic range finders, and ASCII text input.

4.3. Learning, Goal and Value Generation, and Cognitive Development

The following general paradigms and aspects of learning are supported by BECCA: unsupervised learning (X-trees form percepts from raw data with no teaching signal), supervised learning (at a high level: supervised learning can be observed when BECCA is interacting with a human coach that provides verbal commands and expresses approval). Learning algorithms used in BECCA include reinforcement learning, Bayesian update, Hebbian learning, and learning of new representations. Specifically, S-learning falls under the category of temporal-difference learning algorithms, which are one flavor of reinforcement learning. When evaluating the outcome of an action, past experience is recalled and summarized in Bayesian terms; the probabilities of outcomes are then used to make the decision. Although BECCA uses no neural networks, X-trees cluster inputs that “fire together”. X-trees learn percepts from raw data. Context-Based Similarity learns abstract concepts from sequences of percepts.
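
Since the text places S-learning in the temporal-difference family, the generic tabular TD(0) update below may help fix ideas; it is a standard textbook rule, not BECCA's actual S-learning code.

```python
# Generic tabular TD(0) value update: move the estimate for `state` toward
# the observed reward plus the discounted estimate of the successor state.
def td_update(values, state, reward, next_state, alpha=0.1, gamma=0.9):
    old = values.get(state, 0.0)
    target = reward + gamma * values.get(next_state, 0.0)
    values[state] = old + alpha * (target - old)
    return values[state]

values = {}
# An agent repeatedly reaches a rewarded state from 'A' via 'B':
for _ in range(100):
    td_update(values, "B", 1.0, "terminal")  # B immediately precedes reward
    td_update(values, "A", 0.0, "B")         # A's value is learned from B's
print(round(values["A"], 2), round(values["B"], 2))  # near 0.9 and 1.0
```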

4.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with BECCA include natural-world interaction, i.e. an embodied agent interacting with an unknown and unmodeled environment, and natural language processing. Specifically, the following paradigms were modeled: problem solving (occurs through trial and error and through application of similar past experiences), decision making (S-learning makes decisions: chooses actions based on past sequences of experiences that led to a rewarded state), analogical reasoning (in an emergent fashion), language processing, spatial exploration, learning and navigation, object/feature search in an environment.

5. biSoar

This section is based on the contribution of Balakrishnan Chandrasekaran and Unmesh Kurup to the Comparative Table of Cognitive Architectures.1

During the CogArch panel at BICA 2009, Chandrasekaran called attention to the lack of support in the current family of cognitive architectures for perceptual imagination, and cited his group’s DRS system that has been used to help Soar and ACT-R engage in diagrammatic imagination for problem solving.

5.1. Overview

Knowledge and experiences are represented in biSoar using the representational framework of Soar plus diagrams – the diagrammatic part can also be combined with any symbolic general architecture, such as ACT-R. Main components of the architecture include Diagrammatic Representation System (DRS) used for diagram representation; plus perceptual and action routines to get information from and create/modify diagrams.
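
As a toy illustration of what a diagrammatic component adds (hypothetical names; the real DRS is far richer), a spatial relation can be read directly off a diagram by a perceptual routine rather than deduced symbolically:

```python
# Illustrative sketch of a perceptual routine over a diagrammatic object
# (hypothetical names, not the actual DRS implementation).

# A diagram as a set of named points; a perceptual routine reads a spatial
# relation off the diagram instead of deriving it by symbolic inference.
diagram = {"A": (0.0, 0.0), "B": (3.0, 0.0), "C": (6.0, 0.0)}

def left_of(diagram, p, q):
    return diagram[p][0] < diagram[q][0]

# Transitivity of 'left of' is simply *seen* in the diagram:
print(left_of(diagram, "A", "B"), left_of(diagram, "B", "C"),
      left_of(diagram, "A", "C"))  # True True True
```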

BiSoar was originally introduced in [15]. Other key references include [16]. BiSoar was implemented and tested / studied experimentally using the Soar framework.

5.2. Support for Common Components and Features

The framework of biSoar supports the following features and components that are common for many cognitive architectures: working memory (plus diagrammatic working memory), procedural memory (extends Soar’s procedural memory to diagrammatic components), perceptual memory, cognitive map (emergent), attention control (Soar’s framework). Sensory and special modalities include visual input (diagrams), imagery (diagrammatic), spatial cognition, etc.

5.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in biSoar include reinforcement learning (extends Soar's chunking for diagrammatic components).

5.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

The main general paradigm of modeling studies with biSoar is problem solving. In addition, the following specific paradigms were modeled with this architecture: spatial exploration, learning and navigation.

6. CERA-CRANIUM

This section is based on the contribution of Raul Arrabales to the Comparative Table of Cognitive Architectures.1

CERA-CRANIUM is a cognitive architecture designed to control a wide variety of autonomous agents, from physical mobile robots [17] to computer game synthetic characters [18]. The main inspiration for CERA-CRANIUM is Global Workspace Theory [19]. However, the current design also takes inspiration from other cognitive theories of consciousness and emotions.

6.1. Overview

CERA-CRANIUM consists of two main components (see Figure 5): CERA, a control architecture structured in layers; and CRANIUM, a tool for the creation and management of large numbers of parallel processes in shared workspaces. CERA uses the services provided by CRANIUM with the aim of generating a highly dynamic and adaptive perception mechanism.

Knowledge and experiences are represented in CERA-CRANIUM using single percepts, complex percepts, and mission percepts. Actions are generated using single and complex behaviors. Main components of the architecture include: physical layer global workspace, mission-specific layer global workspace, core layer contextualization, attention, sensory prediction, status assessment, and goal management mechanisms.
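
Below is a minimal sketch of the shared-workspace pattern just described, assuming a salience competition among submitted percepts and an additive attention bias from the core layer. It illustrates the general Global Workspace idea, not the CERA-CRANIUM code; all names are hypothetical.

```python
# Workspace sketch: parallel processors submit percepts; the most salient
# one (after the core layer's attention bias) wins and is broadcast.
import heapq

class Workspace:
    def __init__(self):
        self.queue = []           # max-heap via negated salience
        self.attention_bias = {}  # topic -> additive bias from the core layer

    def submit(self, percept, salience, topic):
        biased = salience + self.attention_bias.get(topic, 0.0)
        heapq.heappush(self.queue, (-biased, percept))

    def broadcast(self):
        # The competition winner is made available to all processors.
        return heapq.heappop(self.queue)[1] if self.queue else None

ws = Workspace()
ws.attention_bias["threat"] = 0.5          # core layer biases threat percepts
ws.submit("wall ahead", 0.4, "navigation")
ws.submit("enemy spotted", 0.3, "threat")  # 0.3 + 0.5 = 0.8: wins
print(ws.broadcast())                      # -> 'enemy spotted'
```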

CERA-CRANIUM was originally introduced in [17]. Most recent representative publications include [20,18]. Currently, there exist two implementations of CERA-CRANIUM. One is oriented to the control of robots and is based on CCR (Concurrency and Coordination Runtime) and DSS (Decentralized Software Services), part of Robotics Developer Studio (http://www.conscious-robots.com/en/robotics-studio/2.html). The latest implementation is mostly written in Java and has been applied to the control of computer game bots. This CERA-CRANIUM implementation is the winner of the 2K BotPrize 2010 competition, a Turing test adapted to video games [21].


Figure 5. A bird’s eye view of CERA-CRANIUM.

6.2. Support for Common Components and Features

The framework of CERA-CRANIUM supports the following features and components that are common for many cognitive architectures: working memory, procedural memory (some processors generate single or complex behaviors), perceptual memory (only as preprocessing buffers in CERA sensor services), cognitive map (mission-specific processors build 2D maps), reward system (status assessment mechanism in the core layer), attention control and consciousness (attention is implemented as a bias signal induced from the core layer onto the lower-level global workspaces).

Sensory, motor and other specific modalities include: visual input (both real cam and synthetic images from the simulator), and special modalities: SONAR, Laser Range Finder.

6.3. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with CERA-CRANIUM include Global Workspace Theory; multimodal sensory binding. Specifically, the following paradigms were modeled: problem solving (implicit), decision making. In addition, the following specific paradigms were modeled with this architecture: spatial exploration, learning and navigation.

7. CHREST

This section is based on the contribution of Fernand Gobet and Peter Lane to the Comparative Table of Cognitive Architectures.1

7.1. Overview

Knowledge and experiences are represented in CHREST using chunks and productions. Main components of the architecture include (Figure 6): Attention, Sensory Memory, Short-Term Memory, and Long-Term Memory.

CHREST was originally introduced in [22]. Other key references include [23]. Most recent representative publications include [24]. CHREST was implemented and tested / studied experimentally using Lisp and Java.


Figure 6. A bird’s eye view of CHREST.

7.2. Support for Common Components and Features

The framework of CHREST supports the following features and components that are common for many cognitive architectures: working memory (called short-term memory; auditory short-term memory and visuo-spatial short-term memory are implemented), semantic memory (implemented by the network of chunks in long-term memory), episodic memory (“episodic” links are used in some simulations), procedural memory (implemented by productions), iconic memory, perceptual memory, attention control and consciousness. Attention plays an important role in the architecture, as it determines, for example, the next eye fixation and what will be learnt.

Sensory, motor and other specific modalities include visual input (coded as a list of structures or arrays), auditory input (coded as spoken text segmented either as words, phonemes, or syllables), other natural language communications, etc.

7.3. Learning, Goal and Value Generation, and Cognitive Development

The architecture supports learning of new representations. Chunks and templates (schemata) are automatically and autonomously created as a function of the interaction of the input and the previous state of knowledge.
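
As a rough illustration of this process (a simplified EPAM-style discrimination network, not the CHREST code), the sketch below grows a chunk by one element per exposure and retrieves the longest familiar prefix of a pattern:

```python
# Simplified discrimination-network sketch: patterns are sorted down a tree;
# the first unfamiliar element triggers a discrimination step that grows the
# net. Illustrative only; the real CHREST networks are much richer.

class Node:
    def __init__(self):
        self.children = {}  # element -> child Node

def learn(root, pattern):
    """Sort `pattern` down the net; on the first unfamiliar element,
    discriminate by adding one new node (one step per exposure)."""
    node = root
    for elem in pattern:
        if elem not in node.children:
            node.children[elem] = Node()   # discrimination step
            return
        node = node.children[elem]

def recognise(root, pattern):
    """Return the longest familiar prefix (the retrieved chunk)."""
    node, chunk = root, []
    for elem in pattern:
        if elem not in node.children:
            break
        chunk.append(elem)
        node = node.children[elem]
    return tuple(chunk)

root = Node()
for _ in range(4):
    learn(root, ("k", "i", "n", "g"))      # the chunk grows one step per pass
print(recognise(root, ("k", "i", "n", "g", "s")))  # -> ('k', 'i', 'n', 'g')
```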

7.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

The main general paradigm of modeling studies with CHREST is learning. This includes implicit learning, verbal learning, acquisition of a first language (syntax, vocabulary), development of expertise, memory formation, and concept formation. Specifically, the following paradigms were modeled: problem solving (CHREST solves problems mostly by pattern recognition), decision making, language processing (only acquisition of language), implicit memory tasks.

In addition, the following specific paradigms were modeled with this architecture: visual perception with comprehension, spatial exploration, learning and navigation, object/feature search in an environment.

CHREST can learn from arbitrarily large databases. E.g., simulations on the acquisition of language have used corpora larger than 350k utterances. Simulations with chess have used databases with more than 10k positions. Discrimination networks with up to 300k chunks have been created. CHREST can adaptively fuse information from multiple types of sensors and modalities.

8. CLARION

This section is based on the contribution of Ron Sun to the Comparative Table of Cognitive Architectures.1

CLARION is based on a four-way division of implicit versus explicit knowledge and procedural versus declarative knowledge. In addition, CLARION addresses both top-down learning (from explicit to implicit knowledge) and bottom-up learning (from implicit to explicit knowledge). CLARION also addresses motivational and metacognitive processes underlying human cognition.

8.1. Overview

Knowledge and experiences are represented in CLARION within an implicit-explicit dichotomy: using chunks and rules (for explicit knowledge) and neural networks (for implicit knowledge). Main components of the architecture include (Figure 7): the action-centered subsystem (for procedural knowledge) and the non-action-centered subsystem (for declarative knowledge), each in both implicit and explicit forms (represented by neural networks and rules, respectively). In addition, there are the motivational subsystem and the metacognitive subsystem.

CLARION was originally introduced by Sun et al. in 1996 [25]. For a general overview, see [26]. For more in-depth explanations, see [27-29]. Most recent representative publications can be found at: http://www.cogsci.rpi.edu/~rsun/clarion.html. CLARION was implemented and tested / studied experimentally using Java. Funding programs and projects in which the architecture was supported include Office of Naval Research programs, Army Research Institute programs, etc.

8.2. Support for Common Components and Features

The framework of CLARION supports the following features and components that are common for many cognitive architectures, but in different (more detailed) ways: semantic, episodic and procedural memory (each in both implicit and explicit forms, based on chunks, rules and neural networks), working memory (a separate structure), a reward system (in the form of a motivational subsystem and a metacognitive subsystem that determines rewards based on the motivational subsystem), as well as other motivational and metacognition modules.

Figure 7. A bird’s eye view of CLARION.

Supported cognitive functionality includes skill learning, reasoning, memory, both top-down and bottom-up learning, metacognition, motivation, motivation-cognition interaction, self-awareness, emotion, consciousness, personality types, etc.

8.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in CLARION for implicit learning include: reinforcement learning, Hebbian learning, gradient descent methods (e.g., Backpropagation). For explicit learning, CLARION uses hypothesis testing rule learning and bottom-up rule learning (from implicit to explicit knowledge). The architecture supports learning of new representations (new chunks, new rules, new neural network representations) and furthermore, autonomous learning.
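
Below is a minimal sketch of two-level action selection with bottom-up rule extraction, assuming a Q-table stands in for the implicit neural network and a rule is extracted once a rewarded action's value crosses a threshold. This illustrates the idea only; it is not the CLARION codebase, and all names and thresholds are hypothetical.

```python
# Two-level sketch: an implicit level proposes actions; consistently
# successful ones are extracted as explicit rules (bottom-up learning).
import random

q = {}        # implicit level: (state, action) -> value
rules = {}    # explicit level: state -> action

def choose(state, actions, w_explicit=0.5):
    if state in rules and random.random() < w_explicit:
        return rules[state]               # explicit level fires
    # implicit level: pick the highest-valued action
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def learn(state, action, reward, alpha=0.3, threshold=0.5):
    key = (state, action)
    q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))
    if reward > 0 and q[key] > threshold:
        rules[state] = action             # bottom-up rule extraction

random.seed(0)
for _ in range(20):
    a = choose("wet_floor", ["run", "walk"])
    learn("wet_floor", a, 1.0 if a == "walk" else -1.0)
print(rules)  # e.g. {'wet_floor': 'walk'} once the implicit value is high
```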


8.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

The following paradigms were modeled with CLARION: skill acquisition, implicit learning, reasoning, problem solving, decision making, working memory tasks, memory tasks, metacognitive tasks, social psychology tasks, personality psychology tasks, motivational dynamics, social simulation, etc.

In addition, the following specific paradigms were modeled with this architecture: Tower of Hanoi/London, dual tasks, spatial tasks, learning and navigation, learning from instructions, trial-and-error learning, task switching, etc.

9. CogPrime

This section is based on the contribution of Ben Goertzel to the Comparative Table of Cognitive Architectures.1

Figure 8. A bird’s eye view of CogPrime.

9.1. Overview

From the point of view of knowledge representation, CogPrime is a multi-representational system. The core representation consists of hypergraphs with uncertain logical relationships and associative relations operating together. Procedures are stored as functional programs; episodes are stored in part as “movies” in a simulation engine. There are other specialized methods as well.


Main components of the architecture include the following (Figure 8). The primary knowledge store is the AtomSpace, a neural-symbolic weighted labeled hypergraph with multiple cognitive processes acting on it (in a manner carefully designed to manifest cross-process cognitive synergy), and other specialized knowledge stores indexed by it. The cognitive processes are numerous and include an uncertain inference engine (PLN, Probabilistic Logic Networks), a probabilistic evolutionary program learning engine (MOSES, developed initially by Moshe Looks), an attention allocation algorithm (ECAN, Economic Attention Networks, which is neural-net-like), concept formation and blending heuristics, etc. (Figure 8). Work is under way to incorporate a variant of Itamar Arel’s DeSTIN system as a perception and action layer. Motivation and emotion are handled via a variant of Joscha Bach’s MicroPsi framework called CogPsi.
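
As a toy illustration of a weighted labeled hypergraph in this spirit (the class names and truth-value format below are assumptions for illustration; see http://opencog.org for the real AtomSpace):

```python
# Toy weighted labeled hypergraph: links are atoms themselves, so links can
# connect links as well as nodes, and every atom carries an uncertain truth
# value. Illustrative sketch only, not the OpenCog implementation.

class Atom:
    def __init__(self, kind, name=None, outgoing=(), strength=1.0, confidence=0.9):
        self.kind = kind                  # label, e.g. 'Concept' or 'Inheritance'
        self.name = name                  # for nodes
        self.outgoing = tuple(outgoing)   # for links: the atoms they connect
        self.tv = (strength, confidence)  # uncertain truth value

    def __repr__(self):
        inner = self.name if self.name else ", ".join(map(repr, self.outgoing))
        return f"{self.kind}({inner}) tv={self.tv}"

cat = Atom("Concept", "cat")
animal = Atom("Concept", "animal")
# A link is itself an atom, so links can point at links (a hypergraph).
isa = Atom("Inheritance", outgoing=(cat, animal), strength=0.95, confidence=0.8)
print(isa)
```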

Key references include [30,31] (see also http://opencog.org). The OpenCogPrime system implements CogPrime within the open-source OpenCog AI framework. The implementation is mostly in C++ for Linux; some components are written in Java. A Scheme shell is used for interfacing with the system.

9.2. Support for Common Components and Features

The framework of CogPrime supports the following features and components that are common for many cognitive architectures: working memory, semantic memory, episodic memory, procedural memory, perceptual memory (currently only for vision), cognitive map, reward system (Value Judgment processes compute cost, benefit, risk), attention control (can focus attention on regions and topics of interest) and consciousness (is aware of self in relation to the environment and other agents).

Sensory, motor and other specific modalities include: visual input (handled via interfacing with external vision processing tools), auditory input (speech, via an external speech-to-text engine), natural language communications, and special modalities (e.g., CogPrime can read text from the Internet).

9.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in CogPrime include: reinforcement learning, Bayesian update, Hebbian learning. The architecture supports learning of new representations (tested only in simple cases).

9.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with CogPrime include control of virtual-world agents and natural language processing. Specifically, the following paradigms were modeled: problem solving, decision making, analogical reasoning, and language processing: comprehension and generation.

10. CoJACK

This section is based on the contribution of Frank Ritter and Rick Evertsz to the Comparative Table of Cognitive Architectures.1


10.1. Overview

Knowledge and experiences are represented in CoJACK using a Beliefs-Desires-Intentions (BDI) architecture that handles events, plans and intentions (procedural memory), beliefsets (declarative memory), and activation levels. Main components of the architecture include (Figure 9): beliefsets for long-term memory, plans, intentions, events, and goals.
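
A minimal sketch of BDI plan selection gated by belief activation, in the spirit of the description above (hypothetical names and threshold; the real CoJACK overlays JACK and uses ACT-R-style activation equations):

```python
# BDI sketch: an event triggers candidate plans; a plan's guard can only
# succeed if the supporting belief is sufficiently activated.

beliefs = {"enemy_near": 0.9}          # belief -> activation level
THRESHOLD = 0.5

plans = [
    # (triggering event, guard over beliefs, plan name)
    ("threat", lambda b: b.get("enemy_near", 0.0) > THRESHOLD, "take_cover"),
    ("threat", lambda b: True, "continue_patrol"),
]

def handle(event):
    # Adopt the first applicable plan as the new intention.
    for trigger, guard, plan in plans:
        if trigger == event and guard(beliefs):
            return plan

print(handle("threat"))      # -> 'take_cover'
beliefs["enemy_near"] = 0.2  # activation decays below threshold
print(handle("threat"))      # -> 'continue_patrol'
```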

The most recent representative publications include [32]. CoJACK was implemented and tested / studied experimentally as an overlay to JACK®.

Figure 9. CoJACK architecture in the context of a synthetic environment.

10.2. Support for Common Components and Features

The framework of CoJACK supports the following features and components that are common for many cognitive architectures: working memory (active beliefs and intentions), semantic memory (encoded as weighted beliefs in beliefsets; uses ACT-R's declarative memory equations), episodic memory (not explicitly defined, but would be encoded as beliefs in beliefsets), procedural memory (plans and intentions with activation levels), perceptual memory (gets input from the world as events; these events are processed by plans), cognitive map, reward system (uses ACT-R memory equations, so memories and plans get strengthened), attention control and consciousness (represented by transient events, goal structure, and intentions/beliefs whose activation level is above the threshold).

Sensory, motor and other specific modalities include visual input (depends on the particular model).

10.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in CoJACK include reinforcement learning (resulting in strengthening or weakening of plans).

10.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with CoJACK include BDI. Specifically, the following paradigms were modeled: problem solving, decision making, working memory tests. In addition, the following specific paradigms were modeled with this architecture: task switching, Tower of Hanoi/London.

11. Disciple

This section is based on the contribution of George Tecuci to the Comparative Table of Cognitive Architectures.1

Disciple is a general agent shell for building cognitive assistants that can learn subject matter expertise from their users, can assist them in solving complex problems in uncertain and dynamic environments, and can tutor students in expert problem solving.

11.1. Overview

As shown in the bottom part of Figure 10, the Disciple shell includes general modules for user-agent interaction, ontology representation, problem solving, learning, and tutoring, as well as domain-independent knowledge (e.g., knowledge for evidence-based reasoning). The problem solving engine of a Disciple cognitive assistant (see the top part of Figure 10) employs a general divide-and-conquer strategy, where complex problems are successively reduced to simpler problems, and the solutions of the simpler problems are successively combined into the solutions of the corresponding complex problems. To exhibit this type of behavior, the knowledge base of the agent contains a hierarchy of ontologies, as well as problem reduction rules and solution synthesis rules which are expressed with the concepts from the ontologies.
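
A toy sketch of the problem-reduction / solution-synthesis strategy just described, with a hypothetical two-rule knowledge base (this illustrates the divide-and-conquer pattern only, not Disciple's actual rule language):

```python
# Problem reduction: complex problems are reduced to simpler subproblems,
# and subproblem solutions are synthesized back up into a solution.

# Reduction rules: a problem -> list of simpler subproblems.
reductions = {
    "assess_threat": ["assess_capability", "assess_intent"],
}
# Leaf solutions for problems that need no further reduction.
leaf_solutions = {"assess_capability": "high", "assess_intent": "hostile"}

def synthesize(problem, solutions):
    # Solution-synthesis rule: combine subproblem answers upward.
    return f"{problem}: {' and '.join(solutions)}"

def solve(problem):
    if problem in leaf_solutions:
        return leaf_solutions[problem]
    subsolutions = [solve(p) for p in reductions[problem]]   # reduce
    return synthesize(problem, subsolutions)                 # then combine

print(solve("assess_threat"))
# -> 'assess_threat: high and hostile'
```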

The most representative early paper on Disciple is [33]. Other key references include [34-37]. Most recent representative publications include [38,39]. Disciple was initially implemented in Lisp and is currently implemented in Java.

11.2. Support for Common Components and Features

The framework of Disciple supports features and components that are common for many cognitive architectures, including working memory (reasoning trees), semantic memory (ontologies), episodic memory (reasoning examples), and procedural memory (rules). Communication is based on natural language patterns learned from the user.

11.3. Learning, Goal and Value Generation, and Cognitive Development

An expert interacts directly with a Disciple cognitive assistant, to teach it to solve problems in a way that is similar to how the expert would teach a less experienced collaborator. This process is based on mixed-initiative problem solving (where the expert solves the more creative parts of a problem and the agent solves the more routine ones), integrated learning and teaching (where the expert helps the agent to learn by providing examples, hints and explanations, and the agent helps the expert to teach it by asking relevant questions), and multistrategy learning (where the agent integrates complementary learning strategies, such as learning from examples, learning from explanations, and learning by analogy, to learn general concepts and rules).

Figure 10. Disciple shell and Disciple cognitive assistants.

11.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Disciple agents have been developed for a wide variety of domains, including manufacturing [33], education [34], course of action critiquing [35], center of gravity determination [36,37], and intelligence analysis [38]. The most recent Disciple agents incorporate a significant amount of generic knowledge from the Science of Evidence, allowing them to teach and help their users in discovering and evaluating evidence and hypotheses, through the development of Wigmorean probabilistic inference networks that link evidence to hypotheses in argumentation structures that establish the relevance, believability and inferential force of evidence [39]. Disciple agents are used in courses at various institutions, including the US Army War College, Joint Forces Staff College, Air War College, and George Mason University.

12. EPIC

This section is based on the contribution of Shane Mueller and Andrea Stocco to the Comparative Table of Cognitive Architectures.1

Figure 11. A bird’s eye view of EPIC.

12.1. Overview

Knowledge and experiences are represented in EPIC using production rules and working memory entries. Main components of the architecture include (Figure 11): cognitive processor (including production rule interpreter and working memory), long term memory, production memory, detailed perceptual-motor interfaces (auditory processor, visual processor, ocular motor processor, vocal motor processor, manual motor processor, tactile processor).


EPIC was originally introduced in [40,41]. Other key references include [42]. EPIC (original version) was implemented and tested / studied experimentally using Common Lisp; EPIC-X was implemented in C++. Funding programs and projects in which the architecture was used were supported by the Office of Naval Research.

12.2. Support for Common Components and Features

The framework of EPIC supports the following features and components that are common for many cognitive architectures: working memory, procedural memory (production rules), iconic memory (part of visual perceptual processor), perceptual memory.

Sensory, motor and other specific modalities include visual input, auditory input, and motor output (including interaction of visual motor and visual perceptual processors, auditory and speech objects, and spatialized auditory information).

12.3. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with EPIC include simulation of human performance, multitasking, the psychological refractory period (PRP) procedure, air traffic control, verbal working memory tasks, and visual working memory tasks.

In addition, the following specific paradigms were modeled with this architecture: task switching, dual task, N-Back, and visual perception with comprehension.

13. FORR

This section is based on the contribution of Susan L. Epstein to the Comparative Table of Cognitive Architectures.1

FORR (FOr the Right Reasons) is highly modular. It includes a declarative memory for facts and a procedural memory represented as a hierarchy of decision-making rationales that propose and rate alternative actions. FORR matches perceptions and facts to heuristics, and processes action preferences through its hierarchical structure, along with its heuristics’ real-valued weights. Execution affects the environment or changes declarative memory. Learning in FORR creates new facts and new heuristics, adjusts the weights, and restructures the hierarchy based on facts and on metaheuristics for accuracy, utility, risk, and speed.

13.1. Overview

Knowledge and experiences are represented in FORR using Descriptives (shared knowledge resources computed on demand and refreshed only when necessary), Advisors (domain-dependent decision rationales for actions), and Measurements (synopses of problem-solving experiences). Main components of the architecture can be characterized as follows (Figure 12). Advisors are organized into three tiers. Tier-1 Advisors are fast and correct, recommend individual actions, and are consulted in a pre-specified order. Tier-2 Advisors trigger in the presence of a recognized situation, recommend (possibly partially ordered) sets of actions, and are consulted in a pre-specified order. Tier-3 Advisors are heuristics, recommend individual actions, and are consulted together. Tier-3 Advisors' opinions express preference strengths that are combined with weights during voting to select an action. A FORR-based system learns those weights from traces of its problem-solving behavior.
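
The tier-3 voting scheme just described can be illustrated with a minimal sketch: each Advisor returns preference strengths over candidate actions, and a weighted sum selects the winner. The Advisor names, strengths, and weights below are invented for illustration; real FORR Advisors are domain-dependent and their weights are learned.

```python
# Sketch of FORR-style tier-3 voting: Advisors return preference strengths
# over candidate actions, and weighted sums select the action to execute.

def advisor_speed(actions):
    return {a: s for a, s in zip(actions, (0.9, 0.2, 0.4))}

def advisor_safety(actions):
    return {a: s for a, s in zip(actions, (0.1, 0.8, 0.5))}

ADVISORS = [(advisor_speed, 0.6), (advisor_safety, 1.2)]  # (advisor, learned weight)

def vote(actions, advisors):
    totals = {a: 0.0 for a in actions}
    for advisor, weight in advisors:
        for action, strength in advisor(actions).items():
            totals[action] += weight * strength           # weight x preference
    return max(totals, key=totals.get)

print(vote(["turn-left", "turn-right", "go-straight"], ADVISORS))
```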

Figure 12. A bird’s eye view of FORR (reproduced with permission from [43]).

FORR was originally introduced in [44]. Other key references include [45]. Most recent representative publications include [43]. FORR was implemented and tested / studied experimentally using Common Lisp and Java. Funding programs, projects and environments in which the architecture was used were supported by the National Science Foundation.

13.2. Support for Common Components and Features

The framework of FORR supports the following features and components that are common for many cognitive architectures: working memory, semantic memory (in many Descriptives), episodic memory (in task history and summary measurements), procedural memory, iconic memory (in some Descriptives), perceptual memory (Advisors can be weighted by problem progress, and repeated sequences of actions can be learned and stored), reward system (Advisor weights are acquired during self-supervised learning), attention control and consciousness (some Advisors attend to specific problem features).

Sensory, motor and other specific modalities include: auditory input (FORRSooth is an extended version that conducts human-computer dialogues in real time), natural language communications, etc.

Supported cognitive functionality includes metacognition, emotional intelligence and elements of personality (personality and emotion can be modeled through Advisors).

13.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in FORR include: reinforcement learning, Bayesian update, Hebbian learning (with respect to groupings of Tier-3 Advisors). The architecture supports learning of new representations (can learn new Advisors).

13.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with FORR include constraint solving, game playing, robot navigation, and spoken dialogue. Specifically, the following paradigms were modeled: problem solving, decision making, analogical reasoning (via pattern matching), language processing, metacognitive tasks. In addition, the following specific paradigms were modeled with this architecture: spatial exploration, learning and navigation, learning from instructions.

14. GLAIR

This section is based on the contribution of Stuart C. Shapiro to the Comparative Table of Cognitive Architectures.1

Figure 13. A bird’s eye view of GLAIR.

14.1. Overview

Knowledge and experiences are represented in GLAIR using SNePS, simultaneously a logic-based, assertional frame-based, and propositional graph-based representation. Main components of the architecture include (Figure 13): (1) Knowledge Layer (KL) containing Semantic Memory, Episodic Memory, Quantified and conditional beliefs, Plans for non-primitive acts, Plans to achieve goals, Beliefs about preconditions and effects of acts, Policies (Conditions for performing acts), Self-knowledge, Meta-knowledge; (2) Perceptuo-Motor Layer (PML) containing implementations of primitive actions, perceptual structures that ground KL symbols, deictic and modality registers; (3) Sensori-Actuator Layer containing sensor and effector controllers.


GLAIR was introduced in [46]; most recent representative publications include [47]. GLAIR was implemented and tested / studied experimentally using Common Lisp.

14.2. Support for Common Components and Features

The framework of GLAIR supports the following features and components that are common for many cognitive architectures: semantic memory (in SNePS), episodic memory (temporally-related beliefs in SNePS), procedural memory (in PML implemented in Lisp, could be compiled from KL).

Sensory, motor and other specific modalities include: visual input (perceptual structures in PML), auditory input (has been done using off-the-shelf speech recognition), special modalities (agents have used speech and navigation).

14.3. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with GLAIR include reasoning, belief change, reasoning without a goal. Specifically, the following paradigms were modeled: analogical reasoning, language processing, learning from instructions.

15. GMU BICA

This section is based on the contribution of Alexei Samsonovich to the Comparative Table of Cognitive Architectures.1 GMU BICA is a Biologically Inspired Cognitive Architecture developed at George Mason University.

15.1. Overview

Knowledge and experiences are represented in GMU BICA using schemas and mental states. Main components of the architecture (Figure 14, left) include five memory systems: working, semantic, episodic, procedural, iconic (Input/Output); plus cognitive map, reward system, and driving engine.

The distinguishing feature of GMU BICA is the dynamic system of mental states (Figure 14, right: [48]) that naturally enables various forms of metacognition [49].

Figure 14. Left: A bird’s eye view of GMU BICA. Right: a typical snapshot of working memory. Shaded boxes represent mental states. Their number, labels, contents and relations are dynamical variables.


GMU BICA was originally introduced in [50]. Other key references include [48,49,51]. GMU BICA was implemented and tested / studied experimentally at GMU using Matlab, Python, Lisp, Java, and various visualization tools. Support for funding programs, projects and environments in which the architecture was used includes DARPA and GMU Center for Consciousness and Transformation.

15.2. Support for Common Components and Features

The framework of GMU BICA supports the following features and components that are common for many cognitive architectures: working memory (includes active mental states), semantic memory (includes schemas), episodic memory (includes inactive mental states aggregated into episodes), procedural memory (includes primitives), iconic memory (input-output buffer), cognitive map, reward system, attention control (the distribution of attention is described by a special attribute in instances of schemas) and consciousness (can be identified with the content of the mental state I-Now).

Sensory, motor and other specific modalities include: visual input (symbolic), auditory input (textual commands), motor output (high-level), imagery (mental states I-Imagined), spatial cognition (spatial learning and pathfinding), etc.

Supported cognitive functionality includes metacognition (e.g., via mental states I-Meta), self-awareness (a mental state can be self-reflective and can reflect / operate on other mental states), self-regulated learning [48], etc.

15.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in GMU BICA include reinforcement learning (specifically, a version of temporal difference learning). In addition, the architecture supports emergence and learning of new representations (schemas and mental states) from own experience, from imagery, from observation of examples, from instruction, from guided/scaffolded task execution, from an interactive dialogue with instructor.
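
For concreteness, the temporal difference update mentioned above has the standard TD(0) form sketched below. This is a textbook illustration under an invented toy state chain and parameters, not GMU BICA's actual code.

```python
# Generic TD(0) value update of the sort referenced above. The states,
# rewards, and learning parameters are invented for illustration.

ALPHA, GAMMA = 0.1, 0.9
V = {"start": 0.0, "mid": 0.0, "goal": 0.0}

def td_update(state, reward, next_state):
    """V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))"""
    V[state] += ALPHA * (reward + GAMMA * V[next_state] - V[state])

for _ in range(100):                  # replay one toy episode repeatedly
    td_update("start", 0.0, "mid")
    td_update("mid", 1.0, "goal")     # reward arrives on reaching the goal

print(V)                              # value propagates back from the goal
```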

15.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with GMU BICA include voluntary perception, cognition and action. Specifically, the following paradigms were modeled with this architecture: problem solving, decision making (in navigation), visual perception with comprehension, perceptual illusions (Necker cube), metacognitive tasks (in navigation), spatial exploration, learning and navigation, object/feature search in an environment. Self-regulated learning, learning from instructions and pretend-play paradigms were studied at a level of meta-simulation.

16. HTM

This section is based on the contribution of Jeff Hawkins (with the assistance of Donna Dubinsky) to the Comparative Table of Cognitive Architectures.1 HTM stands for Hierarchical Temporal Memory.


16.1. Overview

Knowledge and experiences are represented in HTM using sparse distributed representations. Main components: HTM is a biologically constrained model of neocortex and thalamus. HTM models the cortical regions related to sensory perception, learning to infer and predict from high-dimensional sensory data. The model starts with a hierarchy of memory nodes. Each node learns to pool spatial patterns using temporal contiguity (using variable-order sequences if appropriate) as the teacher. HTMs are inherently modality independent. Biologically, the model maps to cortical regions, layers of cells, columns of cells across the layers, inhibitory cells, and non-linear dendrite properties. All representations are large sparse distributions of cell activities.
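
The sparse distributed representations underlying HTM can be pictured as wide binary vectors with a small fraction of active bits, compared by overlap. The sketch below uses invented sizes and random encodings; it is not Numenta's implementation, only an illustration of why such codes are noise-tolerant.

```python
# Sketch of sparse distributed representations (SDRs): wide binary vectors
# with few active bits, compared by counting shared active bits.

import random

N, ACTIVE = 2048, 40                 # 2048 cells, about 2% active

def random_sdr(seed):
    rng = random.Random(seed)
    return frozenset(rng.sample(range(N), ACTIVE))

def overlap(a, b):
    """Shared active bits: the similarity measure between two SDRs."""
    return len(a & b)

cat, dog = random_sdr(1), random_sdr(2)
noisy_cat = frozenset(list(cat)[:30]) | random_sdr(3)   # corrupted copy of "cat"

print(overlap(cat, dog))             # near-zero overlap for unrelated patterns
print(overlap(cat, noisy_cat))       # high overlap survives heavy noise
```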

HTM was originally introduced in [52]. Other key references are listed at www.numenta.com. Most recent representative publications include [53] (see also http://en.wikipedia.org/wiki/Hierarchical_temporal_memory). HTM was implemented and tested / studied experimentally using NuPIC development environment available for PC and Mac. It is available under a free research license and a paid commercial license from Numenta.

16.2. Support for Common Components and Features

The framework of HTM supports the following features and components that are common for many cognitive architectures: semantic memory (semantic meaning can be encoded in sparse distributed representations), attention control and consciousness (modeled covert attentional mechanisms within HTMs).

HTMs have been applied to vision, audition, network sensors, power systems, and other tasks.

16.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in HTM include: Bayesian update (HTM hierarchies can be understood in a belief propagation/Bayesian framework), Hebbian learning, learning of new representations.

16.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Specifically, the paradigm of visual perception with comprehension was modeled with this architecture.

17. Leabra

This section is based on the contribution of David C. Noelle to the Comparative Table of Cognitive Architectures.1

17.1. Overview

Knowledge and experiences are represented in Leabra using patterns of neural firing rates and patterns of synaptic strengths. Sensory events drive patterns of neural activation, and such activation-based representations may drive further processing and the production of actions. Knowledge that is retained for long periods is encoded in patterns of synaptic connections, with synaptic strengths determining the activation patterns that arise when knowledge or previous experiences are to be employed.

Main components: at the level of gross functional anatomy, most Leabra models employ a tripartite view of brain organization. The brain is coarsely divided into prefrontal cortex, the hippocampus and associated medial-temporal areas, and the rest of cortex – "posterior" areas. Prefrontal cortex provides mechanisms for the flexible retention and manipulation of activation-based representations, playing an important role in working memory and cognitive control. The hippocampus supports the rapid weight-based learning of sparse conjunctive representations, providing central mechanisms for episodic memory. The posterior cortex mostly utilizes slow statistical learning to shape more automatized cognitive processes, including sensory-motor coordination, semantic memory, and the bulk of language processing.

At a finer level of detail, other components regularly appear in Leabra-based models. Activation-based processing depends on attractor dynamics utilizing bidirectional excitation between brain regions. Fast pooled lateral inhibition plays a critical role in shaping neural representations. Learning arises from an associational Hebbian component, a biologically plausible error-driven learning component, and a reinforcement learning mechanism dependent on the brain's dopamine system.

Leabra was originally introduced in [54]. Most recent representative publications include [55,56]. Leabra was implemented and tested / studied experimentally using Emergent, open-source software written largely in C++.

17.2. Support for Common Components and Features

The framework of Leabra supports the following features and components that are common for many cognitive architectures: working memory (many Leabra working memory models have been published, mostly focusing on the role of the prefrontal cortex in working memory), semantic memory (many Leabra models of the learning and use of semantic knowledge, abstracted from the statistical regularities over many experiences, have been published, including some language models), episodic memory (many Leabra models of episodic memory have been published, mostly focusing on the role of the hippocampus in episodic memory), and procedural memory: a fair number of Leabra models of automatized sequential action have been produced, with a smaller number specifically addressing issues of motor control. Most of these models explore the shaping of distributed patterns of synaptic strengths in posterior brain areas in order to produce appropriate action sequences in novel situations. Some work on motor skill automaticity has been done. A few models, integrating prefrontal and posterior areas, have focused on the application of explicitly provided rules.

Leabra supports cognitive map: Leabra contains the mechanisms necessary to self-organize topographic representations. These have been used to model map-like encodings in the visual system. At this time, it is not clear that these mechanisms have been successfully applied to spatial representation schemes in the hippocampus.

Leabra supports reward system: Leabra embraces a few alternative models of the reward-based learning systems dependent on the mesolimbic dopamine systems, including a neural implementation of temporal difference (TD) learning, and, more recently, the PVLV algorithm. Models have been published involving these mechanisms, as well as interactions between dopamine, the amygdala, and both lateral and orbital areas of prefrontal cortex.

Leabra supports iconic memory: in Leabra, iconic memory can result from activation-based attractor dynamics or from small, sometimes transient, changes in synaptic strength, including mechanisms of synaptic depression. Imagery naturally arises from patterns of bidirectional excitation, allowing for top-down influences on sensory areas. Little work has been done, however, in evaluating Leabra models of these phenomena against biological data.

Leabra supports perceptual memory: different aspects of perceptual memory can be supported by activation-based learning, small changes in synaptic strengths, frontally-mediated working memory processes, and rapid sparse coding in the hippocampus.

Leabra supports attention control and consciousness: in Leabra, attention largely follows a biased competition approach, with top-down activity modulating a process that involves lateral inhibition. Lateral inhibition is a core mechanism in Leabra, as is the bidirectional excitation needed for top-down modulation. Models of spatial attention have been published, including models that use both covert shifts in attention and eye movements in order to improve object recognition and localization. Published models of the role of prefrontal cortex in cognitive control generally involve an attention-like mechanism that allows frontally maintained rules to modulate posterior processing. Virtually no work has been done on consciousness in the Leabra framework, though there is some work currently being done on porting the Mathis and Mozer account of visual awareness into Leabra.

Sensory, motor and other specific modalities include: visual input (an advanced Leabra model of visual object recognition has been produced which receives photographic images as input), auditory input (while a few exploratory Leabra models have taken low-level acoustic features as input, this modality has not yet been extensively explored), spatial cognition, etc.

17.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in Leabra include reinforcement learning, Bayesian update, Hebbian learning, and learning of new representations.

Reinforcement learning: Leabra embraces a few alternative models of the reward-based learning systems dependent on the mesolimbic dopamine systems, including a neural implementation of temporal difference (TD) learning, and, more recently, the PVLV algorithm. Models have been published involving these mechanisms, as well as interactions between dopamine, the amygdala, and both lateral and orbital areas of prefrontal cortex.

Bayesian update: while Leabra does not include a mechanism for updating knowledge in a Bayes-optimal fashion based on singular experiences, its error-driven learning mechanism does approximate maximum a posteriori outputs given sufficient iterated learning.

Hebbian learning: an associational learning rule, similar to traditional Hebbian learning, is one of the core learning mechanisms in Leabra.

Gradient descent methods (e.g., backpropagation): a biologically plausible error-correction learning mechanism, similar in performance to the generalized delta rule but dependent upon bidirectional excitation to communicate error information, is one of the core learning mechanisms in Leabra.

Learning of new representations: all active representations in Leabra are, at their core, patterns of neural firing rates.
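
The combination of Hebbian and error-driven learning can be sketched as a single weight update that mixes a CPCA-style associational term with a CHL-style error term computed from "minus" (expectation) and "plus" (outcome) phases of activation. The constants and activations below are illustrative, and the soft weight bounding follows the published Leabra formulation only approximately.

```python
# Sketch of a classic Leabra-style weight update mixing a Hebbian (CPCA)
# term with an error-driven (CHL) term. Parameters are invented.

LRATE, K_HEBB = 0.01, 0.1

def leabra_dw(w, x_minus, y_minus, x_plus, y_plus):
    hebb = y_plus * (x_plus - w)                  # CPCA-style Hebbian term
    err = x_plus * y_plus - x_minus * y_minus     # CHL-style error term
    err *= (1 - w) if err > 0 else w              # soft weight bounding
    return LRATE * (K_HEBB * hebb + (1 - K_HEBB) * err)

w = 0.5
for _ in range(50):
    w += leabra_dw(w, x_minus=0.8, y_minus=0.2, x_plus=0.8, y_plus=0.9)
print(w)   # the weight strengthens for this co-active sender/receiver pair
```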

17.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with Leabra include learning. Specifically, the following paradigms were modeled: decision making (much work has been done on Leabra modeling of human decision making in cases of varying reward and probabilistic effects of actions, focusing on the roles of the dopamine system, the norepinephrine system, the amygdala, and orbito-frontal cortex), analogical reasoning (some preliminary work has been done on using dense distributed representations in Leabra to perform analogical mapping), language processing (many Leabra language models have been produced, focusing on both word level and sentence level effects), working memory tests (Leabra models of the prefrontal cortex have explored a variety of working memory phenomena).

In addition, the following specific paradigms were modeled with this architecture: Stroop task, task switching, N-Back, visual perception with comprehension (a powerful object recognition model has been constructed), spatial exploration, learning and navigation, object/feature search in an environment (object localization naturally arises in the object recognition model), learning from instructions (some preliminary work on instruction following, particularly in the domain of classification instructions, has been done in Leabra).

18. LIDA

This section is based on the contribution of Stan Franklin to the Comparative Table of Cognitive Architectures.1

18.1. Overview

Knowledge and experiences are represented in LIDA as follows: perceptual knowledge – using nodes and links in a Slipnet-like net with sensory data of various types attached to nodes; episodic knowledge – using Boolean vectors (Sparse Distributed Memory); procedural knowledge – using schemes a la Schema Mechanism. The cognitive cycle (action-perception cycle) in LIDA acts as a cognitive atom. Higher-level, multi-cyclic processes are implemented as behavior streams. Main components involved in the cognitive cycle (Figure 15) include sensory memory, perceptual associative memory, workspace, transient episodic memory, declarative memory, global workspace, procedural memory, action selection, and sensory motor memory.
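
Episodic retrieval from Boolean vectors can be illustrated with a much-simplified, Kanerva-style sketch: stored episodes are binary vectors, and a noisy cue recovers the closest stored vector in Hamming distance. Real Sparse Distributed Memory uses hard locations and counters; the nearest-neighbor readout below is a deliberate simplification.

```python
# Simplified sketch of Boolean-vector episodic retrieval: a noisy cue
# recovers the nearest stored episode. Stored vectors are invented.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

MEMORY = [                                   # toy stored episodes
    (1, 0, 1, 1, 0, 0, 1, 0),
    (0, 1, 0, 0, 1, 1, 0, 1),
]

def recall(cue):
    """Return the stored vector closest to the cue in Hamming distance."""
    return min(MEMORY, key=lambda v: hamming(v, cue))

print(recall((1, 0, 1, 0, 0, 0, 1, 0)))      # noisy cue recovers episode 1
```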

LIDA was originally introduced in [57]. Other key references include [58]. Most recent representative publications include [59].

18.2. Support for Common Components and Features

The framework of LIDA supports the following features and components that are common for many cognitive architectures: working memory (explicitly included as a workspace with significant internal structure including a current situational model with both real and virtual windows, and a conscious contents queue), semantic memory (implemented automatically as part of declarative memory via sparse distributed memory), episodic memory (both declarative memory and transient episodic memory encoded via sparse distributed memory), procedural memory (schemas a la Drescher), iconic memory (defined using various processed pixel matrices), perceptual memory (semantic net with activation passing; nodes may have sensory data of various sorts attached), reward system (feeling and emotion nodes in perceptual associative memory), attention control and consciousness (implemented a la Global Workspace Theory [19] with global broadcasts recruiting possible actions in response to the current contents and, also, modulating the various forms of learning).

Figure 15. Main architectural components and the cognitive cycle of LIDA.

18.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in LIDA include reinforcement learning and learning of new representations (for perception, episodic and procedural memories via base-level activation).

18.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with LIDA include Global Workspace Theory paradigms [19]. Specifically, the following paradigms were modeled: problem solving, decision making, language processing, working memory tests.

19. NARS

This section is based on the contribution of Pei Wang to the Comparative Table of Cognitive Architectures.1

Though NARS can be considered a cognitive architecture in a broad sense, it is very different from the other systems. Theoretically, NARS is a normative theory and model of intelligence and cognition as adaptation with insufficient knowledge and resources, rather than a direct simulation of human cognitive behaviors, capabilities, or functions; technically, NARS uses a unified reasoning mechanism on a unified memory for learning, problem solving, etc., rather than integrating different techniques in an architecture. Strictly speaking, therefore, it does not pursue the same goal as many other cognitive architectures, though it is still related to them in various aspects.

19.1. Overview

Knowledge and experiences are represented in NARS using beliefs, tasks, and concepts. Main components of the architecture include an inference engine and an integrated memory and control mechanism (Figure 16). The cognitive cycle can be described as follows (see labels in Figure 16). 1: Input tasks are added into the task buffer. 2: Selected tasks are inserted into the memory. 3: Inserted tasks in memory may also produce beliefs and concepts, as well as change existing ones. 4: In each working cycle, a task and a belief are selected from a concept and fed to the inference engine as premises. 5: The conclusions derived from the premises by applicable rules are added into the buffer as derived tasks. 6: Selected derived tasks are reported as output tasks.
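
The six-step cycle can be rendered schematically as the loop below. The data structures and the inference step are placeholders: NARS actually uses prioritized "bag" structures and formal Narsese inference rules, neither of which is reproduced here.

```python
# Schematic rendering of the six-step NARS working cycle described above.
# All structures and contents are illustrative placeholders.

from collections import deque

task_buffer = deque(["<bird --> animal>?"])   # 1: input tasks enter the buffer
memory = {"<bird --> animal>.": "belief"}     # concepts holding beliefs

def working_cycle():
    if not task_buffer:
        return None
    task = task_buffer.popleft()              # 2: a selected task enters memory
    # 3: insertion may produce or revise beliefs and concepts (omitted here)
    belief = next(iter(memory))               # 4: select a task/belief pair as premises
    derived = f"derived({task}, {belief})"    # 5: apply inference rules to the premises
    task_buffer.append(derived)               #    derived tasks go back into the buffer
    return derived                            # 6: selected derived tasks become output

print(working_cycle())
```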

Figure 16. Main components and the cognitive cycle of NARS.

For key references, see http://sites.google.com/site/narswang/home. Most recent representative publications include [60]. NARS was implemented as open-source software in Java and Prolog.


19.2. Support for Common Components and Features

The framework of NARS supports the following features and components that are common for many cognitive architectures: working memory (the active part of the memory), semantic memory (the whole memory is semantic), episodic memory (the part of memory that contains temporal information), procedural memory (the part of memory that is directly related to executable operations), cognitive map (as part of the memory), reward system (experience-based and context-sensitive evaluation).

20. NEXTING

This section is based on the contribution of Akshay Vashist and Shoshana Loeb to the Comparative Table of Cognitive Architectures.1

20.1. Overview

Knowledge and experiences are represented in NEXTING using Facts, Rules, frames (learned/declared), symbolic representations of raw sensory inputs, and expectation generation and matching. Main components of the architecture include (Figure 17): Learning, Reasoning, Imagining, Attention Focus, Time awareness, Expectation generation and matching. Most recent representative publications include [61]. NEXTING was implemented using C++, Java, Perl, and Prolog.
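
The core loop of expectation generation and matching can be sketched as follows: predict the next observation from stored rules, compare the prediction with what actually arrives, and flag mismatches for attention. The rule table and observations below are invented for illustration and are not NEXTING's actual representation.

```python
# Illustrative sketch of expectation generation and matching: predict what
# comes next, compare with reality, and attend to surprises.

RULES = {"door-opens": "person-enters"}      # learned "what comes next" rules

def next_step(current, observed):
    expected = RULES.get(current)            # expectation generation
    if expected is None:
        return "no-expectation"
    return "match" if observed == expected else "surprise: attend!"

print(next_step("door-opens", "person-enters"))   # match
print(next_step("door-opens", "alarm-sounds"))    # surprise: attend!
```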

Figure 17. A bird’s eye view of NEXTING.

20.2. Support for Common Components and Features

The framework of NEXTING supports the following features and components that are common for many cognitive architectures: working memory, semantic memory (frames), episodic memory (implicit), procedural memory (implicit), attention control (via expectation generation and matching).


20.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in NEXTING include Bayesian update, learning of new representations (capable of representing newly learned knowledge).

20.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with NEXTING include expectation generation and matching via learning and reasoning on stored knowledge and sensory inputs. Specifically, the following paradigms were modeled: decision making (using inference: both statistical and logical), analogical reasoning (in a limited sense), and language processing. In addition, the following specific paradigms were modeled with this architecture: task switching (a necessary feature for nexting), visual perception with comprehension, object/feature search in an environment, learning from instructions.

21. Pogamut

This section is based on the contribution of Cyril Brom to the Comparative Table of Cognitive Architectures.1

21.1. Overview

Knowledge and experiences are represented in Pogamut using rules and operators. Main components of the architecture include (Figure 18): procedural knowledge encoded as reactive rules; episodic and spatial memory encoded within a set of graph-based structures.

Key references to Pogamut include [62,63]. Most recent representative publication is [64]. Pogamut was implemented and tested / studied experimentally using Java and Python. Implementations featured ACT-R binding and Emergent binding.

21.2. Support for Common Components and Features

The framework of Pogamut supports the following features and components that are common for many cognitive architectures: working memory (declarative), episodic memory (declarative or a spreading activation network), procedural memory (rules), perceptual memory (represents a couple of objects recently perceived by the agent), cognitive map (graph-based and Bayesian).

Sensory, motor and other specific modalities include visual input: both symbolic and subsymbolic (tailored for special purposes).

21.3. Learning, Goal and Value Generation, and Cognitive Development

The episodic memory uses Hebbian learning. In general, the architecture supports unsupervised learning and can learn in real time, with respect to spatial and episodic memory.

Figure 18. A: A bird’s eye view of Pogamut (see [62] for details). B: A snapshot of Pogamut.

21.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with Pogamut include 3D virtual worlds. Specific paradigms include problem solving (with Prolog binding), decision making, object/feature search in an environment, spatial exploration, learning and navigation.

22. Polyscheme

This section is based on the contribution of Nick Cassimatis to the Comparative Table of Cognitive Architectures.1

22.1. Overview

Knowledge and experiences are represented in Polyscheme using relational constraints, constraint graphs, first-order literals, taxonomies, and weight matrices. Main components of the architecture include modules using data structures and algorithms specialized for specific concepts (Figure 19). A focus of attention is used for exchanging information among these modules. A focus manager is used for guiding the flow of attention and thus inference. Specialized modules are used for representing and making inferences about specific concepts.
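
The focus-of-attention exchange can be sketched as a focus manager that selects a proposition and polls every specialist module for its opinion, which is then shared with all modules. The specialist internals below are invented stand-ins for the concept-specific data structures and algorithms described above.

```python
# Sketch of a Polyscheme-style focus-of-attention exchange among specialist
# modules. Specialist behaviors and propositions are illustrative.

class SpatialSpecialist:
    def opinion(self, prop):
        return "likely" if "left-of" in prop else "unknown"

class TemporalSpecialist:
    def opinion(self, prop):
        return "likely" if "before" in prop else "unknown"

SPECIALISTS = [SpatialSpecialist(), TemporalSpecialist()]
FOCUS_QUEUE = ["left-of(a, b)", "before(e1, e2)"]

for prop in FOCUS_QUEUE:                     # the focus manager guides inference
    opinions = [s.opinion(prop) for s in SPECIALISTS]
    print(prop, "->", opinions)              # opinions are shared with all modules
```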

Polyscheme was originally introduced in [65]. Most recent representative publications include [66]. Polyscheme was implemented and tested / studied experimentally using Java.

Figure 19. A bird’s eye view of Polyscheme.

22.2. Support for Common Components and Features

The framework of Polyscheme supports the following features and components that are common for many cognitive architectures: working memory (each specialist implements its own), semantic memory (each specialist implements its own), episodic memory (each specialist implements its own), procedural memory (each specialist implements its own), iconic memory (there is a spatial specialist that has some of this functionality), perceptual memory (there is a spatial specialist that has some of this functionality), cognitive map (there is a spatial specialist that has this functionality), reward system, and attention control.

22.3. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with Polyscheme include reasoning and model finding. Specifically, the following paradigms were modeled: problem solving, language processing, and pretend-play (in a limited sense).


23. Recommendation Architecture (RA)

This section is based on the contribution of L. Andrew Coward to the Comparative Table of Cognitive Architectures.1

Theoretical arguments indicate that any system which must learn to perform a large number of different behaviors will be constrained into this recommendation architecture form by a combination of practical requirements including the need to limit information handling resources, the need to learn without interference with past learning, the need to recover from component failures and damage, and the need to construct the system efficiently.

Figure 20. A bird’s eye view of Recommendation Architecture.

23.1. Overview

Knowledge and experiences are represented in RA using a large set of heuristically defined similarity circumstances, each of which is a group of information conditions that are similar and have tended to occur at the same time in past experience. One similarity circumstance does not correlate unambiguously with any one cognitive category, but each similarity circumstance is associated with a range of recommendation weights in favor of different behaviors (such as identifying categories in current experience). The behavior with the predominant weight across all currently detected similarity circumstances is accepted. Main components of the architecture include (Figure 20): Condition definition and detection (cortex); Selection of similarity circumstances to be changed in each experience (hippocampus); Selection of sensory and other information to be used for current similarity circumstance detection (thalamus); Assignment and comparison of recommendation weights to determine current behavior (basal ganglia); Reward management to change recommendation weights (nucleus accumbens etc.); Management of relative priority of different types of behavior (amygdala and hypothalamus); Recording and implementation of frequently required behavior sequences (cerebellum).
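
The selection rule described above can be sketched directly: each currently detected similarity circumstance contributes recommendation weights to candidate behaviors, and the behavior with the predominant total weight is performed. The circumstances and weights below are invented for illustration.

```python
# Sketch of RA-style behavior selection: sum recommendation weights over
# all detected similarity circumstances and accept the predominant behavior.

WEIGHTS = {   # similarity circumstance -> {behavior: recommendation weight}
    "circ-17": {"name-cat": 0.7, "name-dog": 0.2},
    "circ-42": {"name-cat": 0.4, "flee": 0.1},
}

def select_behavior(detected):
    totals = {}
    for circ in detected:
        for behavior, w in WEIGHTS.get(circ, {}).items():
            totals[behavior] = totals.get(behavior, 0.0) + w
    return max(totals, key=totals.get)

print(select_behavior({"circ-17", "circ-42"}))   # -> "name-cat"
```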

RA was originally introduced in [67]. Other key references include [68,69] (see also http://cs.anu.edu.au/~Andrew.Coward/References.html). Most recent representative publications include [70,71]. RA was implemented and tested / studied experimentally using Smalltalk.

23.2. Support for Common Components and Features

The framework of RA supports the following features and components that are common for many cognitive architectures: working memory (frequency modulation placed on neuron spike outputs, with different modulation phase for different objects in working memory), semantic memory (similarity circumstances, or cortical column receptive fields, that are often detected at the same time acquire ability to indirectly activate each other), episodic memory (similarity circumstances, or cortical column receptive fields, that change at the same time acquire ability to indirectly activate each other; an episodic memory is indirect activation of a group of columns that all changed at the same time at some point in the past; because the hippocampal system manages the selection of cortical columns that will change in response to each sensory experience, information used for this management is applicable to construction of episodic memories), procedural memory (recommendation weights associated in the basal ganglia with cortical column receptive field detections instantiate procedural memory), iconic memory (maintained on the basis of indirect activation of cortical columns recently active at the same time), perceptual memory (based on indirect activation of cortical columns on the basis of recent simultaneous activity), reward system (some receptive field detections are associated with recommendations to increase or decrease recently used behavioral recommendation weights), attention control and consciousness.

In this architecture, cortical columns have receptive fields defined by groups of similar conditions that often occurred at the same time in past sensory experiences, and are activated if the receptive field occurs in current sensory inputs. Attention is selection of a subset of currently detected columns to be allowed to communicate their detections to other cortical areas. The selection is on the basis of recommendation strengths of active cortical columns, interpreted through the thalamus, and is implemented by placing a frequency modulation on the action potential sequences generated by the selected columns. Consciousness is a range of phenomena involving pseudosensory experiences. A column can also be indirectly activated if it was recently active, or often active in the past, or if it expanded its receptive field at the same time as a number of currently active columns. Indirect activations lead to “conscious experiences”.

Sensory, motor and other specific modalities in RA include visual input (implemented by emulation of action potential outputs of populations of simulated sensory neurons), auditory input (implemented by emulation of action potential outputs of populations of simulated sensory neurons).

23.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in RA include: reinforcement learning (increases in recommendation weights associated in the basal ganglia with cortical column receptive field detections, on the basis of rewards), Hebbian learning (with an overlay management that determines whether Hebbian learning will occur at any point in time), learning of new representations (a new representation is a new subset of receptive field detections, with slight changes to some receptive fields).

23.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms: all cognitive processes are implemented through sequences of receptive field activations, including both direct detections and indirect activations. At each point in the sequence the behavior with the predominant recommendation weight across the currently activated receptive field population is performed. This behavior may be to focus attention on a particular subset of current sensory inputs or to implement a particular type of indirect activation (prolong current activity, or indirectly activate on the basis of recent simultaneous activity, past frequent simultaneous activity, or past simultaneous receptive field change). Recommendation weights are acquired through rewards that result in effective sequences for cognitive processing. Frequently used sequences are recorded in the cerebellum for rapid and accurate implementation.

Specifically, the following paradigms were modeled: problem solving (a simple example is fitting together two objects; first step is activating receptive fields often active in the past shortly after fields directly activated by one object were active; because objects have often been seen in the past in several different orientations, this indirect activation is effectively a “mental rotation”; receptive fields combining information from the indirect activation derived from one object and the direct activation from the other object recommend movements to fit the objects together; a bias is placed upon acceptance of such behaviors by taking on the task), decision making (there can be extensive indirect activation steps, with slight changes to receptive fields at each step; eventually, one behavior has a predominant recommendation strength in the basal ganglia, and this behavior is the decision), language processing, working memory tests, perceptual illusions, implicit memory tasks (depend on indirect receptive field activations on the basis of recent simultaneous activity).

In addition, the following specific paradigms were modeled with this architecture: task switching, dual task, visual perception with comprehension, spatial exploration, learning and navigation, object/feature search in an environment, learning from instructions.

24. REM

This section is based on the contribution of Ashok K. Goel, J. William Murdock, and Spencer Rugaber to the Comparative Table of Cognitive Architectures.1


24.1. Overview

Knowledge and experiences are represented in REM using tasks, methods, assertions, and traces. Main components of the architecture (Figure 21) are Task-Method-Knowledge (TMK) models, which provide functional models of what agents know and how they operate. They describe components of a reasoning process in terms of intended effects, incidental effects, and decomposition into lower-level components. Tasks include the requirements and intended effects of some computation. Methods implement a task and include a state-transition machine in which transitions are accomplished by subtasks.
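
A TMK method can be pictured as a small state-transition machine whose transitions are accomplished by subtasks, each with an intended effect that can be checked against the state of the world. The task names and effect check below are hypothetical, not part of REM.

```python
# Toy rendering of a TMK-style method: states, subtask transitions, and an
# intended-effect check whose outcome could feed learning.

METHOD = {                       # state -> (subtask, next state)
    "start":   ("pick-up-block", "holding"),
    "holding": ("stack-block",   "done"),
}

def intended_effect(subtask, world):
    """Check the subtask's intended effect; success/failure drives adaptation."""
    world.add(subtask + ":done")
    return True

def run(method, world):
    state = "start"
    while state != "done":
        subtask, nxt = method[state]
        if not intended_effect(subtask, world):
            return "failed at " + subtask
        state = nxt
    return "done"

print(run(METHOD, set()))
```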

REM was originally introduced in [72]. Other key references include [73,74]. Most recent representative publications include [75]. REM was implemented and tested / studied experimentally using Common Lisp, with Loom as the underlying knowledge-engine; a Java version is in development. Funding programs, projects and environments in which the architecture was used include the NSF Science of Design program, the Self Adaptive Agents project, and Turn-based strategy games.

Figure 21. A bird’s eye view of REM.

24.2. Support for Common Components and Features

The framework of REM supports the following features and components that are common for many cognitive architectures: working memory (no commitment to a specific theory of working memory), semantic memory (uses PowerLoom as the underlying knowledge engine; OWL-based ontological representation), episodic memory (defined as traces through the procedural memory), procedural memory (defined as tasks, which are functional elements, and methods, which are behavioral elements), iconic memory (spatial relationships are encoded using the underlying knowledge engine), reward system (functional descriptions of tasks allow agents to determine success or failure of those tasks by observing the state of the world; success or failure can then be used to reinforce decisions made during execution), attention control and consciousness (implicit in the fact that methods constrain how and when subtasks may be executed, via a state machine; each method can be in only one state at a time, corresponding to the attended-to portion of a reasoning task, and directly linked to the attended-to knowledge via the task's requirements and effects).

24.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in REM include: reinforcement learning (learns criteria for selecting alternative methods for accomplishing tasks, and also alternative transitions within a method's state-transition machine), learning of new representations (learns refinements of methods for existing tasks and can respond to a specification of some new task by adapting the methods for some similar task).

24.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with REM include reflection (adaptation in response to new functional requirements), planning, and reinforcement learning. Specifically, the following paradigms were modeled: problem solving, decision making (specifically in selecting methods to perform a task and selecting transitions within a method), analogical reasoning (using the functional specification, i.e., the requirements and effects, of tasks), and metacognitive tasks.

In addition, the following specific paradigms were modeled with this architecture: Tower of Hanoi/London, spatial exploration, learning and navigation, learning from instructions (from new task specifications).

25. SOAR

This section is based on the contribution of John Laird and Andrea Stocco to the Comparative Table of Cognitive Architectures.1

25.1. Overview

Knowledge and experiences are represented in SOAR using rules (procedural knowledge), relational graph structure (semantic knowledge), and episodes of relational graph structures (episodic memory). Main components (Figure 22) are: working memory (encoded as a graph structure; knowledge is represented as rules organized as operators); semantic memory; episodic memory; mental imagery; reinforcement learning.

SOAR was originally introduced in [76,77]. Most recent representative publications include [78]. SOAR was implemented in C (with interfaces to almost any language) and Java. As its general framework, Soar uses the Problem Space Computational Model.

25.2. Support for Common Components and Features

The framework of SOAR supports the following features and components that are common for many cognitive architectures: working memory (relational graph structure), semantic memory (relational graph structure), episodic memory (encoded as graph structures: snapshots of working memory), procedural memory (rules), iconic memory (explicitly defined), reward system (appraisal-based reward as well as user-defined internal/external reward), attention control and consciousness.

Sensory, motor and other specific modalities include: visual input (propositional or relational), auditory input (support for text-based communication).

Figure 22. Extended SOAR (see details in [78]).

25.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in SOAR include: reinforcement learning (for operators: SARSA/Q-learning), and learning of new representations: chunking (forms new rules); also mechanisms to create new episodes, new semantic memories.
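
The operator-level reinforcement learning mentioned above (SARSA/Q-learning) can be illustrated with a toy SARSA update over numeric preferences for (state, operator) pairs. The states, operators, and rewards below are invented; Soar's actual numeric-preference machinery is considerably richer.

```python
# Sketch of SARSA-style learning over (state, operator) preferences, in the
# spirit of operator-level reinforcement learning. Scenario is invented.

ALPHA, GAMMA = 0.2, 0.9
Q = {}   # (state, operator) -> numeric preference

def sarsa(s, op, reward, s2, op2):
    key, key2 = (s, op), (s2, op2)
    q, q2 = Q.get(key, 0.0), Q.get(key2, 0.0)
    Q[key] = q + ALPHA * (reward + GAMMA * q2 - q)

for _ in range(100):
    sarsa("at-door", "open-door", 0.0, "in-room", "grab-key")
    sarsa("in-room", "grab-key", 1.0, "done", "halt")

print(Q[("at-door", "open-door")])   # credit propagates back to the first operator
```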

25.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with SOAR include problem solving, decision making, analogical reasoning (limited), language processing, and working memory tasks.

The following specific paradigms were modeled with this architecture: Tower of Hanoi/London, dual task, spatial exploration, learning and navigation (implemented but not compared to human behavior), learning from instructions (implemented [79] but not compared to human behavior).


Recent extensions of Soar [78] (Figure 22) support episodic memory, visual imagery (as a special modality) and emotions, among other forms of higher cognition.

26. Ymir

This section is related to the contribution of Kristinn Thórisson to the Comparative Table of Cognitive Architectures.1

26.1. Overview

Knowledge and experiences are represented in Ymir using distributed modules with traditional Frames. Main components of the architecture (Figure 23) include a network of distributed heterogeneous modules interacting via blackboards.
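
The blackboard-mediated module network can be illustrated minimally: modules read messages from a shared board and post new ones, so perception can feed interpretation, and interpretation can feed action, without direct coupling between modules. The module behaviors and message types below are invented for illustration.

```python
# Minimal blackboard sketch in the spirit of Ymir's module network: modules
# communicate only by reading and posting messages on a shared board.

blackboard = [("percept", "user-waves")]

def perception_module(board):
    if ("percept", "user-waves") in board:
        board.append(("interpretation", "greeting"))

def action_module(board):
    if ("interpretation", "greeting") in board:
        board.append(("motor", "wave-back"))

for module in (perception_module, action_module):  # one pass over the network
    module(blackboard)

print(blackboard)   # percept -> interpretation -> motor command
```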

Figure 23. A bird’s eye view of Ymir.

Ymir was originally introduced in [80,81]. Other key references include [82-85].

Most recent representative publications include [86]. Further details can be found at http://alumni.media.mit.edu/~kris/ymir.html. Ymir was implemented and tested / studied experimentally using Common Lisp, C, C++, 8 networked computers, and sensing hardware.

26.2. Support for Common Components and Features

The framework of Ymir supports the following features and components that are common for many cognitive architectures: working memory (including Functional Sketchboard, Content Blackboard, Motor Feedback Blackboard, and Frames), semantic memory (Frames), episodic memory (Functional Sketchboard, Content Blackboard, Motor Feedback Blackboard), procedural memory (Frames, Action Modules: limited implementation), cognitive map (limited body-centric spatial layout of selected objects), perceptual memory: perceptual representations at multiple levels (complexity) and timescales (see "visual input" and "auditory input" below). Higher cognitive features include a controlled within-utterance attention span and a situated spatial model of embodied self (but no semantic representation of self that could be reasoned over is used in Ymir).

Sensory, motor and other specific modalities include: visual input (a temporally and spatially accurate vector model of the upper human body, including hands, fingers, and one eye, captured via a body-tracking suit, gloves, and an eyetracker), auditory input (speech recognition: BBN Hark; custom real-time prosody tracker with H* and L* detection), special modalities (multimodal integration and real-time multimodal communicative act interpretation).

26.3. Learning, Goal and Value Generation, and Cognitive Development

Learning algorithms used in Ymir include reinforcement learning (in a recent implementation of RadioShowHost).

26.4. Cognitive Modeling and Application Paradigms, Scalability and Limitations

Main general paradigms of modeling studies with Ymir include integrated behavior-based and classical AI and blackboards. Specifically, the following paradigms were modeled: decision making (using hierarchical decision modules as well as traditional planning methods), language processing, working memory tasks, task switching, visual perception with comprehension.

The architecture can function autonomously, addresses the brittleness problem, can pay attention to valued goals, can flexibly switch attention between unexpected challenges and valued goals, and can adaptively fuse information from multiple types of sensors and modalities.

27. Concluding Remarks

The material included here is far from complete. This review is a first step that will have follow-ups. Neither the list of architectures presented here nor the description of each individual architecture can be considered comprehensive or close to complete at this point. The present publication does not aim to explain the ideas behind each design (this will be done elsewhere); it merely documents the fact of co-existence of a large variety of cognitive architectures, models, frameworks, etc. that have many features in common and need to be studied in connection to each other.

Many cognitive architectures are not included here, because their data are missing in the online Comparative Table.1 Their names include: Icarus, SAL, Tosca, 4CAPS, AIS, Apex, Atlantis, CogNet, Copycat, DUAL, Emotion Machine, ERE, Gat, Guardian, H-Cogaff, Homer, Imprint, MAX, Omar, PRODIGY, PRS, Psi-Theory, R-CAST, RALPH-MEA, Society of Mind, Subsumption Architecture, Teton, Theo, and many more (e.g., see http://Cogarch.org, http://en.wikipedia.org/wiki/Cognitive_architectures, http://www.cogsci.rpi.edu/~rsun/arch.html, http://bicasymposium.com/cogarch/).

It is tempting to make generalizing observations and conclusions based on the presented material; however, this will be done elsewhere, based on a more complete dataset. It is amazing to see how many cognitive architectures (to be more precise, cognitive architecture families) co-exist at present: 54 are mentioned here, of which 26 are described, and there are maybe hundreds of them in the literature. It is also amazing to see how many of them (virtually all) take their origin from biological inspirations, and how similar different approaches are in their basic foundations: as if they were copied from each other. It is also remarkable how many modern frameworks tend to be advanced and up to the state of the art, incorporating higher cognitive features and functions such as episodic memory, emotions, metacognition, imagery, etc., and (hopefully, in the near future) a human-like self capable of cognitive growth.

27.1. Acknowledgments

This publication is based on the publicly available Comparative Table of Cognitive Architectures (the version that was available at http://bicasymposium.com/cogarch on September 20, 2010). This continuously updated online resource is the collective product of many independent contributors: researchers and developers of cognitive architectures. All contributors gave me permission to use their contributions in this publication, and I am grateful to them for their help during the preparation of this review. Their names include, alphabetically: James S. Albus, Raul Arrabales, Cyril Brom, Nick Cassimatis, Balakrishnan Chandrasekaran, Andrew Coward, Susan L. Epstein, Rick Evertsz, Stan Franklin, Fernand Gobet, Ashok Goel, Ben Goertzel, Stephen Grossberg, Jeff Hawkins, Unmesh Kurup, John Laird, Peter Lane, Christian Lebiere, Shoshana Loeb, Shane Mueller, William Murdock, David C. Noelle, Frank Ritter, Brandon Rohrer, Spencer Rugaber, Alexei Samsonovich, Stuart C. Shapiro, Andrea Stocco, Ron Sun, George Tecuci, Kristinn Thórisson, Akshay Vashist, and Pei Wang.

References

[1] Azevedo, R., Bench-Capon, T., Biswas, G., Carmichael, T., Green, N., Hadzikadic, M., Koyejo, O., Kurup, U., Parsons, S., Pirrone, R., Prakken, H., Samsonovich, A., Scott, D., and Souvenir, R. (2010). Reports on the AAAI 2009 Fall Symposia. AI Magazine 31 (1): 88-94.

[2] Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

[3] SIGArt (1991). Special section on integrated cognitive architectures. SIGART Bulletin 2 (4).

[4] Pew, R. W., and Mavor, A. S. (Eds.) (1998). Modeling Human and Organizational Behavior: Application to Military Simulations. Washington, DC: National Academy Press. books.nap.edu/catalog/6173.html.

[5] Ritter, F. E., Shadbolt, N. R., Elliman, D., Young, R. M., Gobet, F., and Baxter, G. D. (2003). Techniques for Modeling Human Performance in Synthetic Environments: A Supplementary Review. Wright-Patterson Air Force Base, OH: Human Systems Information Analysis Center (HSIAC).

[6] Gluck, K. A., and Pew, R. W. (Eds.). (2005). Modeling Human Behavior with Integrated Cognitive Architectures: Comparison, Evaluation, and Validation. Mahwah, NJ: Erlbaum.

[7] Gray, W. D. (Ed.) (2007). Integrated Models of Cognitive Systems. Series on Cognitive Models and Architectures. Oxford, UK: Oxford University Press.

[8] Albus, J. S. and Barbera, A. J. (2005). RCS: A cognitive architecture for intelligent multi-agent systems. Annual Reviews in Control 29 (1): 87-99.

[9] Anderson, J. (1976). Language, Memory and Thought. Hillsdale, NJ: Erlbaum Associates.

[10] Anderson, J. R. and Lebiere, C. (1998). The Atomic Components of Thought. Mahwah: Lawrence Erlbaum Associates.

[11] Anderson, J. R. (2007). How Can the Human Mind Occur in the Physical Universe? New York: Oxford University Press.

[12] Grossberg, S. (1987). Competitive learning: From interactive activation to adaptive resonance. Cognitive Science 11: 23-63.

[13] Rohrer, B. and Hulet, S. (2008). BECCA: A brain emulating cognition and control architecture. In De Jong, D. A. (Ed.). Progress in Biological Cybernetics Research, pp. 1-38. Nova Publishers.

[14] Rohrer, B., Bernard, M., Morrow, J. D., Rothganger, F., and Xavier, P. (2009). Model-free learning and control in a mobile robot. In 5th International Conference on Natural Computation, Tianjin, China, Aug 14-16, 2009.

[15] Kurup, U. and Chandrasekaran, B. (2007). Modeling memories of large-scale space using a bimodal cognitive architecture. In Proceedings of the International Conference on Cognitive Modeling, July 27-29, 2007, Ann Arbor, MI, 6 pages (CD-ROM).

[16] Chandrasekaran, B. (2006). Multimodal cognitive architecture: Making perception more central to intelligent behavior. Proceedings of the AAAI National Conference on Artificial Intelligence, pp. 1508-1512. Menlo Park, CA: AAAI Press.

[17] Arrabales, R., Ledezma, A., and Sanchis, A. (2009). A cognitive approach to multimodal attention. Journal of Physical Agents 3 (1): 53-64.

[18] Arrabales, R., Ledezma, A., and Sanchis, A. (2009). Towards conscious-like behavior in computer game characters. In Proceedings of the IEEE Symposium on Computational Intelligence and Games 2009 (CIG-2009), pp. 217-224.

[19] Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.

[20] Arrabales, R., Ledezma, A., and Sanchis, A. (2009). CERA-CRANIUM: A test bed for machine consciousness research. Proceedings of the International Workshop on Machine Consciousness, Hong Kong.

[21] Hingston, P. (2009). A Turing test for computer game bots. IEEE Transactions on Computational Intelligence and AI in Games 1 (3): 169-186.

[22] Gobet, F., Lane, P. C. R., Croker, S., Cheng, P. C-H., Jones, G., Oliver, I. and Pine, J. M. (2001). Chunking mechanisms in human learning. TRENDS in Cognitive Sciences 5: 236-243.

[23] Gobet, F. and Lane, P. (2005). The CHREST architecture of cognition: Listening to empirical data. In D. Davis (Ed.). Visions of Mind: Architectures for Cognition and Affect (pp. 204-224). Hershey, PA: Information Science Publishing.

[24] Gobet, F., and Lane, P. C. R. (2010). The CHREST architecture of cognition: The role of perception in general intelligence. The Third Conference on Artificial General Intelligence. Lugano, Switzerland.

[25] Sun, R., Peterson, T., and Merrill, E. (1996). Bottom-up skill learning in reactive sequential decision tasks. Proceedings of the 18th Cognitive Science Society Conference, pp. 684-690. Hillsdale, NJ: Lawrence Erlbaum Associates.

[26] Sun, R. (2004). The CLARION cognitive architecture: Extending cognitive modeling to social simulation. In: Sun, R. (Ed.). Cognition and Multi-Agent Interaction. New York: Cambridge University Press.

[27] Sun, R. (2002). Duality of the Mind. Mahwah, NJ: Lawrence Erlbaum Associates.

[28] Helie, S. and Sun, R. (2010). Incubation, insight, and creative problem solving: A unified theory and a connectionist model. Psychological Review 117 (3): 994-1024.

[29] Sun, R., Slusarz, P., and Terry, C. (2005). The interaction of the explicit and the implicit in skill learning: A dual-process approach. Psychological Review 112 (1): 159-192.

[30] Goertzel, B. (2009). OpenCogPrime: A cognitive synergy based architecture for embodied general intelligence. In Proceedings of ICCI-2009.

[31] Goertzel, B. et al. (2010). OpenCogBot: Achieving generally intelligent virtual agent control and humanoid robotics via cognitive synergy. In Proceedings of ICAI-10, Beijing.

[32] Evertsz, R., Pedrotti, M., Busetta, P., Acar, H., and Ritter, F. E. (2009). Populating VBS2 with realistic virtual actors. In Proceedings of the 18th Conference on Behavior Representation in Modeling and Simulation, pp. 1-8. 09-BRIMS-04.

[33] Tecuci, G. (1988). Disciple: A Theory, Methodology and System for Learning Expert Knowledge, Thèse de Docteur en Science, University of Paris-South.

[34] Tecuci, G. (1998). Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies. San Diego, CA: Academic Press.

[35] Tecuci, G., Boicu, M., Bowman, M., Marcu, D., with a commentary by Burke, M. (2001). An innovative application from the DARPA knowledge bases programs: Rapid development of a course of action critique. AI Magazine, 22 (2): 43-61.

[36] Tecuci, G., Boicu, M., Boicu, C., Marcu, D., Stanescu, B., and Barbulescu, M. (2005). The Disciple-RKF learning and reasoning agent. Computational Intelligence, 21 (4): 462-479.

[37] Tecuci, G., Boicu, M., and Comello, J. (2008). Agent-Assisted Center of Gravity Analysis, CD with Disciple-COG and Lecture Notes used in courses at the US Army War College and Air War College, GMU Press.

[38] Tecuci, G., Boicu, M., Marcu, D., Boicu, C., and Barbulescu, M. (2008). Disciple-LTA: Learning, tutoring and analytic assistance. Journal of Intelligence Community Research and Development.

[39] Tecuci, G., Schum, D.A., Boicu, M., Marcu, D., and Hamilton, B. (2010). Intelligence analysis as agent-assisted discovery of evidence, hypotheses and arguments. In: Phillips-Wren, G., Jain, L.C., Nakamatsu, K., and Howlett, R.J. (Eds.) Advances in Intelligent Decision Technologies, SIST 4, pp. 1-10. Springer-Verlag, Berlin Heidelberg.

[40] Kieras, D. and Meyer, D. E. (1997). An overview of the EPIC architecture for cognition and performance with application to human-computer interaction. Human-Computer Interaction, 12: 391-438.

[41] Meyer, D. E. and Kieras, D. E. (1997). A computational theory of executive cognitive processes and multiple-task performance: Part 1. Basic mechanisms. Psychological Review 104 (1): 3-65.

[42] Kieras, D. EPIC Architecture Principles of Operation (ftp://www.eecs.umich.edu/people/kieras/EPICtutorial/EPICPrinOp.pdf).

[43] Epstein, S. L. (2011). Learning expertise with bounded rationality and self-awareness. In Cox, M. T. and Raja, A. (Eds.). Metareasoning: Thinking about Thinking. Cambridge, MA: The MIT Press (forthcoming).

[44] Epstein, S. L. (1994). For the right reasons: The FORR architecture for learning in a skill domain. Cognitive Science 18 (3): 479-511.

[45] Epstein, S. L. and Petrovic, S. (2010). Learning expertise with bounded rationality and self-awareness. In Hamadi, Y., Monfroy, E., and Saubion, F. (Eds.). Autonomous Search. Springer.

[46] Shapiro, S. C. and Bona, J. P. (2009). The GLAIR cognitive architecture. In Samsonovich, A. V. (Ed.). (2009). Biologically Inspired Cognitive Architectures II: Papers from the AAAI Fall Symposium. AAAI Technical Report FS-09-01. Menlo Park, CA: AAAI Press.

[47] Shapiro, S. C. and Bona, J. P. (2010). The GLAIR cognitive architecture. International Journal of Machine Consciousness 2 (2): 307-332.

[48] Samsonovich, A. V., De Jong, K. A., and Kitsantas, A. (2009). The mental state formalism of GMU-BICA. International Journal of Machine Consciousness 1 (1): 111-130.

[49] Kalish, M. Q., Samsonovich, A. V., Coletti, M. A., and De Jong, K. A. (2010). Assessing the role of metacognition in GMU BICA. In Samsonovich, A. V., Johannsdottir, K. R., Chella, A., and Goertzel, B. (Eds.). Biologically Inspired Cognitive Architectures 2010: Proceedings of the First Annual Meeting of the BICA Society. Amsterdam, The Netherlands: IOS Press (this volume).

[50] Samsonovich, A. V. and De Jong, K. A. (2005). Designing a self-aware neuromorphic hybrid. In K.R. Thórisson, H. Vilhjalmsson, and S. Marsela (Eds.). AAAI-05 Workshop on Modular Construction of Human-Like Intelligence: AAAI Technical Report WS-05-08, pp. 71–78. Menlo Park, CA: AAAI Press (http://ai.ru.is/events/2005/AAAI05ModularWorkshop/papers/WS1105Samsonovich.pdf).

[51] Samsonovich, A. V., Ascoli, G. A., De Jong, K. A., and Coletti, M. A. (2006). Integrated hybrid cognitive architecture for a virtual roboscout. In M. Beetz, K. Rajan, M. Thielscher, and R.B. Rusu (Eds.). Cognitive Robotics: Papers from the AAAI Workshop, AAAI Technical Reports WS-06-03, pp. 129–134. Menlo Park, CA: AAAI Press.

[52] Hawkins, J. and Blakeslee, S. (2005). On Intelligence. New York: Times Books.

[53] George, D. and Hawkins, J. (2009). Towards a mathematical theory of cortical micro-circuits. PLoS Computational Biology 5 (10).

[54] O’Reilly, R.C. (1996). Biologically plausible error-driven learning using local activation differences: The generalized recirculation algorithm. Neural Computation 8: 895–938.

[55] O’Reilly, R.C. and Munakata, Y. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press.

[56] Jilk, D.J., Lebiere, C., O’Reilly, R.C., and Anderson, J.R. (2008). SAL: An explicitly pluralistic cognitive architecture. Journal of Experimental and Theoretical Artificial Intelligence 20: 197-218.

[57] Franklin, S. and Patterson, F. G., Jr. (2006). The LIDA architecture: Adding new modes of learning to an intelligent, autonomous software agent. Integrated Design and Process Technology, IDPT-2006, San Diego, CA: Society for Design and Process Science.

[58] Franklin, S., and Ferkin, M. H. (2008). Using broad cognitive models and cognitive robotics to apply computational intelligence to animal cognition. In Smolinski, T. G., Milanova, M. M., and Hassanien, A.-E. (Eds.). Applications of Computational Intelligence in Biology: Current Trends and Open Problems, pp. 363-394. Berlin: Springer-Verlag.

[59] Franklin, S. (2007). A foundational architecture for artificial general intelligence. In Goertzel, B. and Wang, P. (Eds.). Advances In Artificial General Intelligence: Concepts, Architectures and Algorithms, Proceedings of the AGI Workshop 2006, pp. 36-54. Amsterdam, The Netherlands: IOS Press.

[60] Wang, P. (2006). Rigid Flexibility: The Logic of Intelligence. Berlin: Springer.

[61] Vashist, A. and Loeb, S. (2010). Attention focusing model for nexting based on learning and reasoning. In Samsonovich, A. V., Johannsdottir, K. R., Chella, A., and Goertzel, B. (Eds.). Biologically Inspired Cognitive Architectures 2010: Proceedings of the First Annual Meeting of the BICA Society. Amsterdam, The Netherlands: IOS Press (this volume).

[62] Brom, C., Pešková, K., and Lukavský, J. (2007). What does your actor remember? Towards characters with a full episodic memory. In Proceedings of the 4th International Conference on Virtual Storytelling, LNCS. Springer.

[63] Gemrot, J., Kadlec, R., Bida, M., Burkert, O., Pibil, R., Havlicek, J., Zemcak, L., Simlovic, J., Vansa, R., Stolba, M., Plch, T., and Brom, C. (2009). Pogamut 3 can assist developers in building AI (not only) for their videogame agents. In: Agents for Games and Simulations, LNCS 5920, pp. 1-15. Springer.

[64] Brom, C., Lukavský, J., Kadlec, R. (2010). Episodic memory for human-like agents and human-like agents for episodic memory. International Journal of Machine Consciousness 2 (2): 227-244.

[65] Cassimatis, N.L., Trafton, J.G., Bugajska, M.D., and Schultz, A.C. (2004). Integrating cognition, perception and action through mental simulation in robots. Journal of Robotics and Autonomous Systems 49 (1-2): 13-23.

[66] Cassimatis, N. L., Bignoli, P., Bugajska, M., Dugas, S., Kurup, U., Murugesan, A., and Bello, P. (in press). An Architecture for Adaptive Algorithmic Hybrids. IEEE Transactions on Systems, Man, and Cybernetics. Part B.

[67] Coward, L. A. (1990). Pattern Thinking. New York: Praeger.

[68] Coward, L. A. (2001). The Recommendation Architecture: Lessons from the design of large scale electronic systems for cognitive science. Journal of Cognitive Systems Research 2: 111-156.

[69] Coward, L. A. (2005). A System Architecture Approach to the Brain: From Neurons to Consciousness. New York: Nova Science Publishers.

[70] Coward, L. A. and Gedeon, T. O. (2009). Implications of resource limitations for a conscious machine. Neurocomputing 72: 767-788.

[71] Coward, L. A. (2010). The hippocampal system as the cortical resource manager: A model connecting psychology, anatomy and physiology. Advances in Experimental Medicine and Biology 657: 315-364.

[72] Murdock, J.W. and Goel, A.K. (2001). Meta-case-based reasoning: Using functional models to adapt case-based agents. Proceedings of the 4th International Conference on Case-Based Reasoning (ICCBR'01). Vancouver, Canada, July 30 - August 2, 2001.

[73] Murdock, J.W. and Goel, A.K. (2003). Localizing planning with functional process models. Proceedings of the Thirteenth International Conference on Automated Planning and Scheduling (ICAPS'03). Trento, Italy.

[74] Ulam, P., Goel, A.K., Jones, J., and Murdock, J.W. (2005). Using model-based reflection to guide reinforcement learning. Proceedings of the IJCAI 2005 Workshop on Reasoning, Representation and Learning in Computer Games. Edinburgh, UK.

[75] Murdock, J.W. and Goel, A.K. (2008). Meta-case-based reasoning: self-improvement through self-understanding. Journal of Experimental and Theoretical Artificial Intelligence, 20(1):1-36.

[76] Laird, J.E., Rosenbloom, P.S., and Newell, A. (1986). Universal Subgoaling and Chunking: The Automatic Generation and Learning of Goal Hierarchies. Boston: Kluwer.

[77] Laird, J.E., Newell, A., and Rosenbloom, P.S., (1987). SOAR: An architecture for general intelligence. Artificial Intelligence 33: 1-64.

[78] Laird, J. E. (2008). Extending the Soar cognitive architecture. In Wang, P., Goertzel, B., and Franklin, S. (Eds.). Artificial General Intelligence 2008: Proceedings of the First AGI Conference, pp. 224-235. Amsterdam, The Netherlands: IOS Press.

[79] Huffman, S.B., and Laird, J.E. (1995). Flexibly instructable agents. Journal of Artificial Intelligence Research 3: 271-324.

[80] Thórisson, K. R. (1996). Communicative Humanoids: A Computational Model of Psycho-Social Dialogue Skills. Ph.D. Thesis, Media Laboratory, Massachusetts Institute of Technology.

[81] Thórisson, K. R. (1999). A mind model for multimodal communicative creatures and humanoids. International Journal of Applied Artificial Intelligence, 13(4-5): 449-486.

[82] Thórisson, K. R. (2002). Machine perception of multimodal natural dialogue. In P. McKevitt (Ed.). Language, Vision and Music. Amsterdam: John Benjamins.

[83] Thórisson, K. R. (1998). Real-time decision making in face to face communication. Second ACM International Conference on Autonomous Agents, Minneapolis, Minnesota, May 11-13, pp. 16-23.

[84] Thórisson, K. R. (1997). Layered modular action control for communicative humanoids. Computer Animation '97, Geneva, Switzerland, June 5-6, pp. 134-143.

[85] Thórisson, K. R. (2002). Natural turn-taking needs no manual: a computational theory and model, from perception to action. In B. Granström (Ed.). Multimodality in Language and Speech Systems. Heidelberg: Springer-Verlag.

[86] Ng-Thow-Hing, V., Thórisson, K. R., Sarvadevabhatla, R. K., Wormer, J., and List, T. (2009). Cognitive map architecture: Facilitation of human-robot interaction in humanoid robots. IEEE Robotics and Automation Magazine 16 (1): 55-66.

[87] Rohrer, B., Morrow, J. D., Rothganger, F., and Xavier, P. G. (2009). Concepts from data. In Samsonovich, A. V. (Ed.). Biologically Inspired Cognitive Architectures II: Papers from the AAAI Fall Symposium. AAAI Technical Report FS-09-01, pp. 116-122. Menlo Park, CA: AAAI Press.