Achieving Advanced Machine Consciousness via Artificial General Intelligence in Virtual Worlds

Ben Goertzel, PhD


Contents

1. The Nature of Consciousness
2. Artificial General Intelligence versus Narrow AI
3. The Novamente and OpenCog AGI Projects
4. The Marriage of AGI and Virtual Worlds
5. Initial Application: Virtual Pet Brain

A Useful Philosophical Perspective on Consciousness

Metaphysical Foundation:

Peircean/Jungian Categories

First: raw, unprocessed being … e.g. qualia

Second: reaction … e.g. pure physical reaction

Third: relationship

(beyond Peirce … “Fourth: synergy”, etc.)

Archetypal Perspectives

First person: firstness of X … the world as directly experienced … the stream of qualia …

Third person: thirdness of X … the world as an objective relational structure, a network of patterns

Fourth person (normally called “second person”): fourthness of X … the synergy of relationships … the Buber-ian I-You

The real second person: secondness of X … experiencing the world as an automaton?

Inter-perspective correlations

Example of a hypothesis spanning perspectives:

The more intense the qualia experienced by a system, the more informationally significant the corresponding patterns detectable in that system by an intelligent, well-informed observer
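As a hedged aside, one rough way to make "informationally significant pattern" quantitative, loosely following the pattern-theoretic definitions in The Hidden Pattern (the symbols below are my own shorthand, not taken from these slides): call a process P a pattern in an entity X if P produces X while being simpler than X, and measure its intensity by the relative simplification achieved,

\[
\iota(P, X) \;=\; \max\!\left(0,\; 1 - \frac{c(P)}{c(X)}\right),
\]

where c(·) is some complexity measure such as description length. Read this way, the hypothesis says that the intensity of a system's qualia tracks ι(P, X) for the strongest patterns P an intelligent, well-informed observer can detect in that system.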

Reflective consciousness and other emergent constructs

Hypothesis:

Among the more informationally significant patterns in generally intelligent systems are:

• The phenomenal self
• Reflective consciousness
• The illusion of will

Modeling Reflective Consciousness, Self and Will Using Hypersets

Hypothesis:

The qualia we humans describe as “reflective awareness”, “self” and “will” correspond to patterns in our brains that are conveniently expressible in terms of hypersets (non-well-founded sets)


“S is conscious of X" is defined as: The declarative content that {"S is conscious of X" correlates with "X is a pattern in S"}, where S is an intelligent system’s phenomenal self

"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X”}, where S is an intelligent system’s phenomenal self

"X is part of S's self" is defined as: The declarative content that {"X is a part of S's self" correlates with "X is a persistent pattern in S over time"}

Evaluating Hypersets as Patterns in Dynamical Systems

The hyperset defined by X = F(X) may be evaluated as a pattern in a system by comparing the iterates

A, F(A), F(F(A)), …

to the system’s trajectory at various times, for various initial values A
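A minimal computational sketch of this evaluation, assuming some similarity measure between states (all function names here are illustrative, not taken from the Novamente/OpenCog codebase):

```python
# Hedged sketch: score how well the fixed-point iterates A, F(A), F(F(A)), ...
# of a candidate hyperset X = F(X) track an observed system trajectory.

from typing import Callable, Sequence

def iterate(F: Callable, A, n: int):
    """Return the first n iterates A, F(A), F(F(A)), ..."""
    xs, x = [], A
    for _ in range(n):
        xs.append(x)
        x = F(x)
    return xs

def pattern_score(F: Callable, A, trajectory: Sequence,
                  similarity: Callable[[object, object], float]) -> float:
    """Average similarity between the iterates of F started at A and the
    observed trajectory; higher means the hyperset X = F(X) is a better
    pattern in the system's dynamics."""
    iterates = iterate(F, A, len(trajectory))
    sims = [similarity(x, y) for x, y in zip(iterates, trajectory)]
    return sum(sims) / len(sims)

# Example usage: try several starting points A and keep the best score.
# best = max(pattern_score(F, A, observed, sim) for A in candidate_As)
```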

Summary

• There are multiple archetypal perspectives: First, Second, Third, Fourth person, …
• There are correlations between the different perspectives (e.g. intense qualia correspond to informational patterns)
• There are specific emergent structures (self, will, reflection) that correlate with intense patterns/qualia in generally intelligent systems
• It may be interesting to model these emergent structures using hypersets

Artificial General Intelligence versus Narrow AI


Artificial General Intelligence (AGI)

“The ability to achieve complex goals in complex environments using limited computational resources”

• Autonomy
• Practical understanding of self and others
• Understanding “what the problem is” as opposed to just solving problems posed explicitly by programmers
• Solving problems that were not known to the programmers
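As a hedged formal reading of the definition above (the weighting function w is my own illustrative device, not from the slides), one can score a system S by its expected degree of goal-achievement,

\[
\Pi(S) \;=\; \sum_{g,\, e} w(g, e)\; \mathbb{E}\big[\,\mathrm{achieve}(S, g, e)\,\big],
\]

with weights w(g, e) emphasizing complex goals g and complex environments e, and the whole quantity evaluated subject to a bound on S's computational resources.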

Narrow AI

The vast majority of AI research practiced in academia and industry today fits into the “Narrow AI” category

Each “Narrow AI” program is (in the ideal case) highly competent at carrying out certain complex goals in certain environments

• Chess-playing, medical diagnosis, car-driving, etc.

Today, Narrow AI Dominates the AI Field (in both academia and applications)

Deep Blue - whips us pesky humans at chess - but can’t learn to play a new game based on a description of the game rules

DARPA Grand Challenge - a great leap forward - but the winning system can’t learn to drive different types of vehicles besides cars (trucks, boats, motorcycles)

Google - a fantastic service, but it can’t answer complex questions. Whatever happened to AskJeeves?


Artificial General Intelligence (AGI)

Hypothesis: Human-level general intelligence naturally comes along with the emergence of

• Phenomenal self
• Reflective consciousness
• Illusion of free will

A Pragmatic, Integrative Approach to Advanced AGI

Novamente Cognition Engine

The Novamente Cognition Engine (NCE) represents a serious scientific/engineering effort to create powerful artificial general intelligence, via an integrative, computer science based approach

While the NCE may be applied in many different domains, the most natural way to develop and apply it, at the current stage, is in the context of controlling physically and/or virtually embodied intelligent agents

For more detail on the NCE, see novamente.net/papers

Open Cognition Framework

The OpenCog project (opencog.org) is an open-source offshoot of the Novamente project, which was seeded in 2008 with significant AGI code donated by Novamente LLC

It includes the RelEx NL comprehension system, built on the CMU Link Grammar parser plus additional rule-based and statistical NLP methods

The essential dynamics of these AGI systems follows the basic logic of animal behavior:

Enact a procedure so that

Context & Procedure ==> Goals

i.e.

at each moment, based on its observations and memories, the system chooses to enact procedures that it estimates (based on the properties of the current context) will enable it to achieve its goals, over the time-scales these goals refer to
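A minimal sketch of this control loop, assuming the system can estimate how likely a procedure is to achieve a goal in the current context (class and function names are illustrative, not the actual Novamente/OpenCog API):

```python
# Hedged sketch of the "Context & Procedure ==> Goals" control loop described
# above: at each cognitive cycle, pick the procedure whose estimated
# contribution to the system's goals, given the current context, is highest.

from typing import Callable, Sequence

def choose_procedure(context,
                     procedures: Sequence[Callable],
                     goals: Sequence,
                     estimate: Callable[[object, Callable, object], float],
                     urgency: Callable[[object], float]) -> Callable:
    """Select the procedure maximizing estimated, urgency-weighted goal
    achievement in the current context."""
    def expected_goal_achievement(proc):
        # estimate(context, proc, goal) ~ P(goal is achieved | context, proc),
        # weighted by how much the system currently cares about that goal.
        return sum(urgency(g) * estimate(context, proc, g) for g in goals)
    return max(procedures, key=expected_goal_achievement)

# Each cognitive cycle: perceive, choose, act.
# proc = choose_procedure(current_context, known_procedures, active_goals,
#                         estimate=inference_based_estimator,
#                         urgency=goal_importance)
# proc()
```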

There is an important distinction between explicit goals and implicit goals

Explicit goals: the objective-functions the system explicitly chooses actions in order to maximize

Implicit goals: the objective-functions the system actually does habitually maximize, in practice

For a system that is both rational, and capable with respect to its goals in its environment, these will be basically the same. But in many real cases, they may be radically different

Goal Dynamics

A sufficiently intelligent system is continually creating new subgoals of its current goals

Some intelligent systems may be able to replace their top-level supergoals with new ones, based on various dynamics

Goals may operate on radically different time-scales

Humans habitually experience “subgoal alienation”: what was once a subgoal of some other goal becomes a top-level goal in itself. AIs need not be so prone to this phenomenon

Five key aspects of AGI design:

1. Knowledge Representation
2. Cognitive Architecture
3. Knowledge Creation
4. Environment / Education (incl. physical & virtual robotics)
5. Emergent Structures and Dynamics

There is no single, mechanism-level “magic trick” at the heart of general intelligence … rather, intelligence arises in appropriately-constructed complex systems as an emergent phenomenon.

The trick is to figure out what sorts of complex systems will give rise to general intelligence as an emergent property.

There is unlikely to be “one correct answer” to this question … but all we need to build the first thinking machine is one of the many correct answers.


The Novamente/OpenCog high-level cognitive architecture is based on the state of the art in cognitive psychology and cognitive neuroscience. Most cognitive functions are distributed across the whole system, yet principally guided by some particular module.

Perception Action& Feeling Nodes

Abstract Concepts(some corresponding to

named concepts, some not)Specific Objects,

Composit Actions,Complex Feelings

joint_53_actuatoris ON at 2:42:01,May 1, 2008

pixel at (100,50)is RED at 1:42:01,May 1, 2008

raise_arm_55

table

food

raiselegs

tabletable_754

raise_arm

Unique hypergraph knowledge representation bridges the gap between subsymbolic (neural net) and symbolic (logic / semantic net) representations, achieving the advantages of both, and synergies resulting from their combination.

Each cognitive processing machine, within each unit, contains an “Atom Space” full of nodes and links representing knowledge, plus a set of cognitive processes acting on this Atom Space, encapsulated in software objects called MindAgents and scheduled by a Scheduler object.

[Diagram: a Novamente machine, consisting of a central Atom Space surrounded by multiple Mind Agents.]

Each box in the cognitive architecture diagram corresponds at the software level to a cluster of machines called a “unit”, containing a local persistent DB plus one or more cognitive processing machines.
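A minimal sketch of the AtomSpace / MindAgent / Scheduler structure just described, with the details filled in by assumption rather than taken from the actual OpenCog code:

```python
# Hedged sketch of the structure described above: an AtomSpace of nodes/links,
# MindAgents acting on it, and a Scheduler that runs the agents in turn.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Atom:
    """A node or link in the knowledge hypergraph."""
    name: str
    targets: List["Atom"] = field(default_factory=list)  # empty for nodes
    strength: float = 0.5        # probabilistic truth value (simplified)
    confidence: float = 0.0
    sti: float = 0.0             # Short-Term Importance
    lti: float = 0.0             # Long-Term Importance

class AtomSpace:
    def __init__(self):
        self.atoms: List[Atom] = []
    def add(self, atom: Atom) -> Atom:
        self.atoms.append(atom)
        return atom

class MindAgent:
    """A cognitive process acting on the AtomSpace (e.g. inference,
    attention allocation, pattern mining)."""
    def run(self, atomspace: AtomSpace) -> None:
        raise NotImplementedError

class Scheduler:
    """Repeatedly gives each MindAgent a slice of processing time."""
    def __init__(self, atomspace: AtomSpace, agents: List[MindAgent]):
        self.atomspace, self.agents = atomspace, agents
    def run_cycles(self, n: int) -> None:
        for _ in range(n):
            for agent in self.agents:
                agent.run(self.atomspace)
```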

MOSES Probabilistic Evolutionary Learning (for gaining procedural knowledge directly)

Combines the power of two leading AI paradigms: evolutionary and probabilistic learning

Extremely broad applicability. Successful track record in bioinformatics, text and data mining, and virtual agent control.
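For flavor, here is a generic probabilistic-evolutionary (estimation-of-distribution) loop in the spirit of MOSES; it is a simplified sketch over bit strings, not the actual MOSES algorithm, which evolves program trees using representation-building and deme management:

```python
# Hedged sketch: combine evolutionary search with probabilistic modeling by
# repeatedly sampling candidates from a model and re-fitting the model to the
# fittest ones.

import random
from typing import Callable, List

def probabilistic_evolution(fitness: Callable[[List[int]], float],
                            length: int = 20,
                            pop_size: int = 100,
                            generations: int = 50,
                            elite_frac: float = 0.2) -> List[int]:
    # Start with an unbiased model: each bit is 1 with probability 0.5.
    probs = [0.5] * length
    best = None
    for _ in range(generations):
        # Sample a population of candidate "programs" from the model.
        pop = [[1 if random.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[:max(1, int(elite_frac * pop_size))]
        if best is None or fitness(elite[0]) > fitness(best):
            best = elite[0]
        # Re-estimate the model from the fittest candidates.
        probs = [sum(ind[i] for ind in elite) / len(elite)
                 for i in range(length)]
    return best

# Example: maximize the number of 1-bits.
# best = probabilistic_evolution(fitness=sum)
```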

Probabilistic Logic Networks (for gaining declarative knowledge directly)

The first general, practical integration of probability theory and symbolic logic.

Extremely broad applicability. Successful track record in bio text mining and virtual agent control.

Based on mathematics described in Probabilistic Logic Networks, published by Springer in 2008
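As a hedged illustration of PLN's flavor, the independence-based deduction rule from the PLN book has (as I recall it) roughly the following form: given links A→B and B→C with strengths s_AB and s_BC, and term probabilities s_B and s_C, the deduced strength of A→C is

\[
s_{AC} \;=\; s_{AB}\, s_{BC} \;+\; \frac{(1 - s_{AB})\,(s_{C} - s_{B}\, s_{BC})}{1 - s_{B}} .
\]

Formulas of this kind operate directly on the probabilistic truth values attached to Atoms, described below.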

Algorithms for Procedural and Declarative Knowledge Creation

Economic Attention Allocation

Each node or link in the knowledge network is tagged with a probabilistic truth value, and also with an “attention value”, containing Short-Term Importance and Long-Term Importance components.

An artificial-economics-based process is used to update these attention values dynamically -- a complex, adaptive nonlinear process.
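A toy sketch of one such artificial-economics update, loosely in the spirit of the description above; the rent and stimulus terms and all constants are illustrative assumptions, not the actual OpenCog economic attention allocation rules:

```python
# Hedged toy sketch of an artificial-economics attention update: atoms pay
# "rent" each cycle, recently useful atoms earn "stimulus", and the total
# currency in circulation is renormalized to a fixed budget.

from typing import Dict

def update_attention(sti: Dict[str, float],
                     stimulus: Dict[str, float],
                     rent: float = 1.0,
                     budget: float = 100.0) -> Dict[str, float]:
    """One cycle of Short-Term Importance updating."""
    new_sti = {a: v - rent + stimulus.get(a, 0.0) for a, v in sti.items()}
    total = sum(max(v, 0.0) for v in new_sti.values()) or 1.0
    return {a: budget * max(v, 0.0) / total for a, v in new_sti.items()}

# sti = {"cat": 10.0, "dog": 5.0, "table_754": 1.0}
# sti = update_attention(sti, stimulus={"cat": 3.0})
```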

The system contains multiple heuristics for Atom creation, including “blending” of existing Atoms

Atoms associated in a dynamic “map” may be grouped to form new Atoms; the AtomSpace thus explicitly represents patterns in itself

Hypothesis: Integrative Design Can Allow Multiple AI Algorithms to Quell Each Others’ Combinatorial Explosions

[Diagram: mutual interactions among Probabilistic Evolutionary Program Learning, Probabilistic Logical Inference, Economic Attention Allocation, and Pattern Mining.]

Overall Philosophy

Algorithms for declarative and procedural knowledge creation and attention allocation …

integrated with appropriate synergy and acting on an appropriately powerful knowledge representation …

used to control a system pursuing complex goals …

may lead to the emergence of system structures characteristic of general intelligence

Why Do I Believe I Can Succeed When So Many Others Have Failed?

• Approach is based on a well-reasoned, comprehensive theory of mind, which dictates a unified approach to the five key aspects mentioned above

- Knowledge representation
- Learning/reasoning
- Cognitive architecture
- Embodiment / interaction
- Emergent structures / dynamics

Cognitive theory summarized in The Hidden Pattern (Ben Goertzel, Brown Walker Press, 2006)

• The specific algorithms and data structures chosen to implement this theory of mind are efficient, robust and scalable, as is the software implementation

The Marriage of AGI and Virtual Worlds

How Important Is Embodiment?

Some AI theorists believe that robotic embodiment is necessary for the achievement of powerful AGI

Others believe embodiment is entirely unnecessary

We believe embodiment is extremely convenient for AGI though perhaps not strictly necessary; and that virtual-world embodiment is an important, pragmatic and scalable approach to pursue alongside physical-robot embodiment

Public virtual worlds provide a wonderful opportunity for teaching baby AIs: not only the experience of embodiment, but the massive plus of having hundreds of thousands or millions of teachers helping the AI to learn

Current virtual world platforms have some fairly severe limitations, which fortunately are fairly easily remedied

Object-object interactions are oversimplified, making tool use difficult

Agent control relies on animations and other simplified mechanisms, rather than having virtual servomotors associated with each joint of an agent’s skeleton

Example solution: Integration of a robot simulator with a virtual world engine

Player / Gazebo: 3D robot control + simulation framework

RealXTend/OpenSim: open-source virtual world

It seems feasible to replace OpenSim’s physics engine with appropriate components of Player/Gazebo, and make coordinated OpenSim client modifications


[Figure: Cognitive control of agents in current virtual worlds, e.g. Second Life, Multiverse, HiPiHi. The Cognition Engine receives high-level perceptual data (coordinates of objects, labeled with type) and emits non-parametrized behavior signals (e.g. “take one step forward”).]
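A minimal sketch of the interface in the figure above, with data shapes and names chosen by assumption rather than taken from any actual virtual-world API:

```python
# Hedged sketch of the control interface described above: high-level percepts
# (typed, positioned objects) in, non-parametrized behavior signals out.

from dataclasses import dataclass
from typing import List

@dataclass
class PerceivedObject:
    """High-level perceptual datum: an object's type label and coordinates."""
    object_type: str           # e.g. "avatar", "ball", "tree"
    x: float
    y: float
    z: float

class CognitionEngine:
    """Maps high-level percepts to non-parametrized behavior signals."""
    def step(self, percepts: List[PerceivedObject]) -> str:
        # Toy policy: walk toward the nearest ball, otherwise look around.
        balls = [p for p in percepts if p.object_type == "ball"]
        if balls:
            nearest = min(balls, key=lambda p: p.x**2 + p.y**2 + p.z**2)
            return "take one step forward" if nearest.x > 0 else "turn left"
        return "look around"

# engine = CognitionEngine()
# signal = engine.step([PerceivedObject("ball", 2.0, 0.5, 0.0)])
```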

[Figure: Hybrid Generally-Intelligent Robot Brain Architecture, version 1. Raw perceptions (e.g. video output of camera eyes) pass through a Perceptual preprocessor, yielding mid-level perceptual data (e.g. a 3D polygonal mesh marked up with limited object-identification information) for the Cognition Engine. The engine’s behavior signals (e.g. “take one step forward, using gait parameter vector V”) pass through a Behavioral postprocessor that produces low-level action signals (e.g. “force F exerted by servomotor M in direction D”). Supporting components include a neural net module evolver, behavioral modules, and object classification modules.]

Application: Novamente Pet Brain

Novamente Pet Brain

The Pet Brain incorporates MOSES learning to allow pets to learn tricks, while Probabilistic Logic Networks (PLN) inference regulates emotion-behavior interactions and allows generalization based on experience.

The Pet Brain utilizes a specialized version of the Novamente Cognition Engine to provide unprecedentedly intelligent virtual pets with individual personalities, and the ability to learn spontaneously and through training.

Pets understand simple English; future versions will include language generation

Demo Screenshots: Training

Novamente-powered smart pets can be taught to do simple or complex tricks - from sitting to playing soccer or learning a dance - by learning from a combination of encouragement, reinforcement and demonstration.

[Screenshot captions: give “sit” command…; show example…; reinforce and/or correct…; successful sit, great… Labels: Teach, Reinforce, Correct, Imitate.]

Teaching with a Partner

In partner-based teaching, the pet understands that one avatar is the teacher and the other is the student, whose interactions with the teacher the pet is supposed to understand, abstract, and imitate
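A minimal sketch of how such training episodes might be collected and handed to a procedure learner; all names are illustrative, and the real Pet Brain uses MOSES for the learning step, which is not reproduced here:

```python
# Hedged sketch of the trick-training loop: record (command, context,
# behavior, reward) examples from encouragement, correction and demonstration,
# then fit a procedure per command from its accumulated examples.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TrainingExample:
    command: str          # e.g. "sit"
    context: dict         # what the pet perceived at the time
    behavior: str         # what was demonstrated or attempted
    reward: float         # +1 encouragement, -1 correction

class TrickTrainer:
    def __init__(self):
        self.examples: List[TrainingExample] = []

    def record(self, example: TrainingExample) -> None:
        self.examples.append(example)

    def learn(self, command: str, fit_procedure: Callable) -> Callable:
        """Fit a procedure for one command from its accumulated examples
        (in the real system this is where MOSES-style learning would run)."""
        relevant = [e for e in self.examples if e.command == command]
        return fit_procedure(relevant)

# trainer = TrickTrainer()
# trainer.record(TrainingExample("sit", {"near": "owner"}, "sit_down", +1.0))
# sit_procedure = trainer.learn("sit", fit_procedure=moses_like_fitter)
```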

Next Step: Language Learning

Our initial virtual pets have robust but simplistic language understanding, sufficient to learn an unlimited variety of commands

In the next version, integration of Novamente’s RelEx language processing system with the Novamente Pet Brain will provide a more powerful approach to embodied language learning

With human-controlled avatars as language teachers, Novamente-controlled virtual agents will be able to rapidly improve their language comprehension and generation via adaptive learning

Next-Gen Pet/Baby Brain Architecture

The next generation of the Avatar Brain will incorporate additional modules allowing language processing and more advanced inference -- the next step on the path from virtual dogs to human-level virtually-embodied AGIs

Stages of Development of an AGI

• Infantile: Making sense of and achieving simple goals in sensorimotor reality. No self yet.
• Concrete: Rich variety of learned mental representations and operations thereon. Emergence of phenomenal self.
• Formal: Abstract reasoning and hypothesizing. Objective detachment from phenomenal self.
• Reflexive: Deep understanding and control of self structures and dynamics. Full self-modification.

(Intelligence increases with each successive stage.)

The Coming Technological Singularity, Vernor Vinge (1993)

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended…

When greater-than-human intelligence drives progress, that progress will be much more rapid”

“I set the date for the Singularity, representing a profound and disruptive transformation in human capability, as 2045.

The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today."

The Singularity Is Near: When Humans Transcend Biology, Ray Kurzweil (2005)
