Abstract  Nowadays, for robots, the notion of behavior is reduced to a simple factual concept at the level of movements. Consciousness, on the other hand, is a deeply cultural concept, founding what human beings regard as their defining property. We propose to develop a computable transposition of consciousness concepts into artificial brains able to express emotions and consciousness facts. The production of such artificial brains allows intentional and genuinely adaptive behavior for autonomous robots. The system managing the robot's behavior is made of two parts: the first computes and generates, in a constructivist manner, a representation of the robot moving in its environment, using symbols and concepts. The second apprehends the representation produced by the first through morphologies, in a dynamic geometrical way. The robot's body is thereby seen, for the system itself, as the morphologic apprehension of its material substratum. The model rests strictly on the notion of massive multi-agent organizations with a morphologic control.

Keywords  Artificial consciousness · Intentionality · Representation · Organizational memory · Artificial emotions · Embodiment · Multi-agent systems · Morphology

Introduction

We deal with the notion of artificial consciousness facts, i.e., the facts that living beings endowed with a brain are able to generate in their minds about the things of the world they can conceive. The problem is neither the performance nor the quality of the faculty of thinking about things; it is the fundamental ability to make, here and now, a representation of the things we manipulate in an abstract but intelligible manner in the mind.

By analogy with the notion of consciousness, long invested by philosophers, psychologists and neurobiologists, we pose the question of artificial consciousness strictly in a constructivist way: how can one transpose the fact of ‘‘thinking about something’’ into the computable field, so that an artificial system founded on computer processes would be able to generate consciousness facts in a manner viewable by us, the interested observers? The problem is to find the right approach and the right level for the production of a realistic model. In real life we have, on one hand, a neural network made of very numerous cells; we have, on the other hand, our mind and the impression we can have of this component of ourselves as generating sophisticated representations about the things of the world. How can this be transposed into the computable field? Given that it is possible for brains to think about things, in what way can an artificial system generate an intelligible representation of things and facts that would be the state of a system considered as having intentions, emotions, and ideas about things and events concerning itself? This system must necessarily be linked to a body that it directs, but whose constraints it must respect. But it must also have

Communicated by Gérard Sabah.

A. Cardon (✉)
LIP6, Laboratoire d'informatique de Paris VI, UPMC, Case 169, 4 Place Jussieu, 75252 Paris Cedex 05, France
e-mail: [email protected]

Cogn Process (2006) 7:245–267

DOI 10.1007/s10339-006-0154-7


RESEARCH REPORT

Artificial consciousness, artificial emotions, and autonomous robots

Alain Cardon

Received: 21 January 2006 / Revised: 24 August 2006 / Accepted: 8 September 2006 / Published online: 3 October 2006
© Marta Olivetti Belardinelli and Springer-Verlag 2006


‘‘artificial’’ real-life experiences, intentions to act and to think; it must have knowledge, notably knowledge using the words of a language; it must have emotions and intentions; and finally it must be conscious of itself. We shall call such a system, by analogy with the brain, an artificial brain, but we shall see that its architecture departs strongly from that of real brains: it transposes effects and movements, and does not replicate brain constituents such as neurons or glial cells.

We will principally keep in mind one characteristic of the process of thinking unfolding in the brain: there is a complex neural, biochemical, and electrical activation movement. This movement is coupled, though with a different period, with a similar one in the body's nervous system. This very complex movement generates, by selective emergence reaching a particular configuration, what we call a thought about something. This thought rapidly leads to actuator or language activity and then descends into the next thought, which can be similar or different. This is the very complex phenomenon that we have to transpose into the computable domain.

Hence, we will approach the sudden appearance of thoughts in brains at the level of the complex dynamics of a system building and reconfiguring recurrent and temporized fluxes. We transpose this into architectures of computer processes carrying symbolic meaning, and we will make them geometrically self-controlled. Two reasonable hypotheses are made for this transposition:

•   Analogy between the geometrical dynamics of the real brain and that of the artificial brain: in the first case, the flows are complex continuous forms; in the other, they are dynamic graphs whose deformations are evaluated in a topological manner;
•   Reduction of the combinatorial complexity of the real brain in the computable domain, by using a symbolic and pre-language level for this approach. The basic elements are completely different; they are not of the same scale.

However, once these hypotheses are made, one should not immediately start developing an architecture that operates its own control from the aspects of its changing geometry. One first needs to ask the proper question about the generation of consciousness facts. This question was asked decades ago by a philosopher, M. Heidegger (Heidegger 1959): what brings us to think about this thing right here, right now? The answer, quite elaborate, leads to a system-architecture choice that takes us away from reactive or deductive systems. The system will generate its consciousness facts intentionally, as P. Ricœur understood it (Ricœur 1990). There is no generation of consciousness facts without the intention to think about something. This settles the question, considered a formidable one, of the freedom to think (Ricœur 1990). One indeed thinks about everything according to one's memory and intuition of the moment, but only if it can be expressed as a thought by the thought-producing system.

In this article we develop the main points of a software architecture allowing a robot to produce facts of consciousness, and we show the results of a prototype on an autonomous robot. The system presented is generic and can be applied to any computerized process over continuous data streams, without a material body.

Concepts and general architecture

We consider an autonomous robot whose material body is viewed as a physical substratum, and we consider a software system generating possible representations of behaviors, which anticipates these behaviors before the actual actions are performed (Brooks 1991). We want the robot to be able to develop an adaptive and purposeful behavior in the environment in which it is running. The substratum, the material part (the control-command part with its physical components), contains the sensors and their memories, representing information only at the numerical level. The robot's behavior depends on the capabilities of its actuators. We will say that its behavior is adaptive if it is not irrational, not chaotic, and not strictly determined, but is necessarily, for the robot itself, adequate to its current environment (Mataric 1995). The question is how to define a system that produces the reasons and the deep incentives for such a behavior.

Functionality of the system generating artificial representations

We consider the robot as having the capacity to gather information in real time from its sensors, taking into account the things of its environment, and to access a specific internal memory about things and facts. This general information is interpreted and continuously generates the current states of a specific system (Fig. 1). Let us specify the general functionalities this system must have, with three functional subsystems:

1. The system has a subsystem taking in external and internal information. The external capture can be achieved using multiple specific neural networks for form recognition.
2. It has a subsystem of action that allows the manipulation of its different motor organs in reactive or originally planned ways.



3. It must possess a specific representation subsystem, with an architecture made of numerous linked evolving parts, allowing it to continuously and deliberately construct a state of the robot's situation in its environment, taking into account the capacities of its body and its security.

We make the systemic hypothesis that these three subsystems are dependent and co-active: none is activated as a strict consequence of the activation of another, but they reinforce and compensate one another. They are therefore at times competing and at times cooperating among themselves. In addition, we state that these three subsystems are always in activity, defining the running, continuous artificial life of the robot. The process (the term is taken here in the common sense of mechanism) that activates and coordinates the three subsystems is always on. The subsystems therefore exist through their conjoined and corresponding activations: the central principle that drives their global behavior is a linking process that is able to analyze itself. We call this process the central process (Fig. 1). The fact that the system is able to coordinate its three subsystems expresses its dynamic organization.

According to the capacities of the two input and activity subsystems, the robot is immersed in an environment that it will be able to discern as very rich or relatively poor. It is clear that the system generating representations will be conditioned by the quality of the production of these two functional subsystems. But why does the representation system run in an intentional rather than a reactive way?

The central process and the two main questions

The three subsystems (input, action, and generation of the current representation) are always in co-activity within a continuous process called the central process, which coordinates their activation and expresses the system's global organizational activity, according to its embodiment. This is a distributed process. The main questions are why the system is running at all, and why it stabilizes into a current state expressing something pertinent.

Without elements of answers to these two main questions, there is nothing constructivist to say about artificial consciousness.

The notion of generated representation

We are interested in a system representing, on its own, its situation in the environment, in order to achieve actions there. This is the so-called ‘‘representation system’’.

The representation system

This is strictly a software system allowing the production, from information picked up at the substratum level, of a purposeful internal representation leading to a behavioral action or to the generation of another internal representation. This system is based on an unending process achieving a strong link between the substratum, which can feel and act physically on the environment, and an internal memory allowing the generation of the pertinent current representation. This software system will be the computable transposition of a working brain.

We can propose a constructivist hypothesis about the architecture of the representation system.

The basic components

For the representation system, all the basic components used will be specific proactive processes that behave like oscillators in an organization managing its control in a distributed way.

This system, in accordance with a real mind, must have several characteristics and numerous parts, like an episodic memory, a deep memory, a meso-limbic sub-

Fig. 1  The three subsystems for the generation of representations in a current situation (blocks: Input System, Physical Action System, System of Representation / Generation of the current situation, Central Process)



morphological organization. The morphology is expressed in an abstract organizational space, a geometric space of shapes (Cardon 1999), which we describe later.

Let us specify what a state of this system is: the configuration of the representation system that produces something of significance. This notion of state brings together two notions: a set of local semantic notions coming from the basic aspectual entities, and a global notion coming from the morphology of the set of active entities:

The system aspectual state

An aspectual state is a construct composed of proactive elements adapted to the things conceivable by the robot, in which the organization of the proactive basic elements is stabilized for one short moment under the morphological control. It is the equivalent of an intelligible idea about something. The state is constructed as a specific dynamic set of entities, each of them representing very local characteristic features, the whole active set forming a coherent global structure, taking into account some tendencies, some memorized facts and some specific configuration.

The representation system and the generation of meaning

The problem of generating a representation with significance can be expressed in the following way. The robot is in an environment where it can distinguish many things, item by item. Its sensors are active, its organizational memory is available, and it must decide its behavior here and now. This behavior will be the result of the currently generated representation, which permits it to take a meaningful action, that is:

•   To represent something that is going to have a significance, rationally and emotionally generated in itself, consistent with its previous states so as to respect continuity;
•   To feel this representation with artificial emotions;
•   To act through its actuators;
•   To know that the representation is its own, in the sense that it is possible to use it later to affect the next ones, that is, to force the next generated states.

So what are the architecture and the reorganization capacity of a representation system that would permit these properties: to destabilize its current organization from initial internal and external tensions, to change the dynamic order of its related parts, to stabilize into a temporary state that the morphological system is able to feel as such, and to take an immediate decision to act? Could such a state be seen as the production of meaning by means of the movements of the objects the system feels, and how can such a state be generated?

The solution to these problems requires answering the following questions:

1. What degree of complexity is necessary for such a system?
2. What architecture and control characteristics would permit the temporary stabilization into a state expressing significance, according to something meaningful in the environment or in the memory?
3. How can one generate a reason to destabilize the system that is not an a priori predefined reason?
4. How can one define the system's stabilization process, which must be only temporary?
5. How can one define the way the system represents what it produces as a steady state at one instant, so that it can use it thereafter?

The first hypothesis concerns the very general characteristics of a system able to generate meaning. By reference to the typically complex structure of the brain, we make the hypothesis that only systems that are complex in an organizational way have the capacity to generate meaning (Cardon 2005). We will therefore consider a system qualified as organizationally complex, that is, one organized as large sets of interactive elements with a ‘‘dynamic order of parts and processes in mutual interactions’’ (Bertalanffy 1973; Clergue 1997). Such a system is formed of a very large number of elements, each of them having behavioral autonomy. These will be software agents. The inter-relations between these elements produce the current state as the alteration of some form. The general behavior of the system is then essentially characterized by the reorganization of the local behaviors of these basic elements, by the modification of their couplings and by their internal modifications themselves.

Basic elements: the aspectual agents and the notion of morphology

The basic entities used in the construction of the current representation are dynamic, proactive, rational and ‘‘social’’. They are software agents called aspectual agents (Cardon 1999). The general aspectual agent's



structure will be expressed with four parts, more or less developed according to the agent's specificity (Fig. 3):

•   A module for knowledge,
•   A module for communication,
•   A module for behavior,
•   A module for action.
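The four-module decomposition above can be sketched as a small class. Only the decomposition itself comes from the paper (Fig. 3); the module contents, the message rule and the two possible actions are invented placeholders, in keeping with the weak-agent idea that follows in the text.

```python
class AspectualAgent:
    """Toy aspectual agent: four modules, invented contents."""
    def __init__(self, knowledge):
        self.knowledge = set(knowledge)  # module for knowledge: local features
        self.inbox = []                  # module for communication: messages heard

    def receive(self, message):
        self.inbox.append(message)

    def behave(self):
        # Module for behavior: a weak agent, it only weighs whether what it
        # heard from the others matches what it knows.
        return any(m in self.knowledge for m in self.inbox)

    def act(self):
        # Module for action: propose something to the organization, or stay idle.
        return "propose" if self.behave() else "wait"

a = AspectualAgent({"obstacle", "close"})
a.receive("obstacle")
```

The deliberate weakness of the agent, acting only on the result of its communications, is what pushes the interesting behavior up to the organizational level, as the next paragraph describes.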

We have specified an incremental construction method for a massive multi-agent organization (Cardon 2004). Every agent is considered a simple enough software entity: a weak agent that communicates with the others and whose behavior takes the results of these communications into account.

Aspectual agents represent many categories, derived from an ontology, that refer to:

•   Space and its different possible description modes (the permanent and regular shapes)...
•   Time and its modes (the notion of time passing)...
•   The designation of well-identified things (the detachment of something from a set of shapes)...
•   The definition of a concept, a relation, a word, a form, a thing...
•   The situation of an object (appreciation, utility, worry...)...
•   The possibility of managing the organization itself (the proto-self component elements)...

All these general and abstract characteristics are to be turned into different groups of aspectual agents that are active in certain cases and whose group characteristics are variable. This process is activated according to the information provided by the linked sensor aspectual agents.

One kind of aspectual agent is bound to the robot's input and output. Preserving robotic terminology, these agents are of two types:

•   The sensor aspectual agents, which interpret the information coming from the environment;
•   The actuator aspectual agents, which propose an actual action of the robot's organs.

All the numerous other aspectual agents are strictly internal agents. They define concepts and very local actions and plans: the numeric synthesis of their past actions, the proposed current action, the possible values of their future actions. Note that this notion of plan is strictly local to the agent and defined only by the values of internal variables, allowing the details of the local actions to be memorized.

The agent organization is therefore a system whose behavior is based on the variable interactions between the agents (Fig. 4). We will at times choose a finer granularity with regard to functionalities, and typically ‘‘plurivocal’’ ones, that is, based on redundancy and on the plurality of characteristics. We state that the system will have its functionalities distributed over unsteady and non-consistent agents: with every precise functionality we will associate groups of agents whose cooperative actions should permit achieving the function in question, but also its opposite, inverse, contrary, near and similar functions... This agentification method leads to a certain redundancy and a large diversity in the faculties of the agents. That will be necessary to permit organizationally complex behavior and also to make the system operate strictly by emergence. Groups of agents will not be functional in any way, but rather versatile and ambiguous, and they will be able to create emerging groups through communications between them, reifying some specific roles.

A generated aspectual state expresses some signs in the semiotic sense of the term (Peirce 1984), the signs being correlated to characteristics of real-world objects. These basic elements come from an ontology representing the robot's knowledge of the world, and are located in its organizational memory. But the basic dynamic elements associate themselves to form the new current emergent state: this is not a static ontology but a dynamic one. It is therefore necessary to proceed to an in-line control of their activity, to force the set of active agents to reach a coherent and adapted global state. It is not a matter of forcing the organization to reach a predefined state, but of forcing it locally, in some movements of its dynamic entities, so that it reaches an admissible state according to what it contains and also to its geometrical form. The control will be achieved according to the morphological characteristics of the aspectual organization, according to the shapes this organization takes while activating its entities. It amounts to defining an artificial mental map (Fig. 4) from the movements of the aspectual organization, controlling its expansion.

Fig. 3  The general structure of an aspectual agent (modules: Knowledge, Communications, Behavior, Actions)



In artificial intelligence it is common to select concepts using mechanisms with meta-rules. That is not the case in the system we develop. There are available agents, named and qualified with their semantic roles, coming from the organizational memory by way of ontologies of many domains involving language and the psychology of the robot's behaviors. Some of these agents attempt to spread out into a group, and there is an emergence of structured, coherent groups, meaningful of a mental and physical state of the global system, which is going to constitute the current point of attention in the representation. Not all of the system's agents will be simultaneously active; most of them remain fixed, but may be solicited through the activation of others by way of their acquaintances (Ferber 1995).

Each generated current representation thus describes itself in two ways:

•   At the level of the aspectual agents, which clearly express, by way of their local semantics expressed in their states and behaviors, the things to which they are going to allude;
•   With some shapes, which are the geometric conformations of the aspectual agents' activities in a specific space that we describe further on.

The concept of shape is strictly geometric: a shape will be seen as a geometric figure (Figs. 5, 6), such as a graph or a polyhedron having specific geometrical characteristics (Cardon 2004). Our notion of representation corresponds to the notion of history for the autobiographic agents of K. Dautenhahn

Fig. 4  Communications between aspectual agents in the prototype: the mental map at the organizational level

Fig. 5  The morphology expressing the form of the aspectual agents' organization (panels: Aspectual organization; Morphologic representation)



(1997), but where the concept of agent denotes a robot.

The characteristics of a system generating the current representations

Following the general presentation above, we now focus on the characteristics of the representation system.

The organizational memory

The generated representation is indeed a construct, formed each time by an organization of aspectual agents whose behavior expresses a set tailored to the currently aimed-at situation, the situation wanted by organizational tendencies:

•   There are interface agents interpreting the information coming from the sensors, transforming numeric indications into semantic features;
•   There is a current memory of the previous generations, in order to respect the continuity of the produced states, the artificial thoughts produced.

This construction is allowed by the existence of two fundamental and original structures:

•   An organizational memory,
•   A system of specific in-line control of the representation, acting during its generation.

The robot's artificial brain must have a so-called ‘‘artificial real-life experience’’, i.e., the ‘‘vécu’’ concept according to P. Ricœur (1990). It has an event-driven memory, a memory of events, introduced element by element with numerous links at the construction stage of the system, but dynamically enlarged by its actions and behavior. This memory retraces facts, situations, events, knowledge, cases, doctrines... It is based on an ontology giving the system the elements of a specific knowledge and culture. It is not a factual memory but an event-driven one, held under a very particular shape allowing the soft alteration of each extracted and used element, putting it into the specific current context and thereby adapting it.

That is a memory delivering some past facts into the current context at each recall. This memory is structured to be modifiable as a consequence of each present state. Especially, its structure must permit that each extracted fact is systematically cast into a form corresponding to the context of the call, and must also inflect the memorized structure: each call of a fact is a modification of it. A memorized fact is only a sign that spreads out and recomposes itself in the new context of generation of the representation, with some specific shape. This memory will evidently be based on the aspectual agents organization, but will be structured at another level. Its original architecture is the subject of a patent filed in the USA (US Patent and Trademark Office n2059, 2005).
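The recall behavior described above, where each extracted fact is both re-cast into the current context and modified by the very act of recall, can be sketched minimally in Python. The class name, the feature-weight encoding and the plasticity parameter are illustrative assumptions, not the patented architecture:

```python
# Minimal sketch of an event-driven organizational memory: each recall
# re-casts the stored trace into the current context and, as a side effect,
# inflects the memorized structure itself. All names are hypothetical.

class OrganizationalMemory:
    def __init__(self, plasticity=0.2):
        self.traces = {}              # event name -> feature weights
        self.plasticity = plasticity  # how strongly recall reshapes a trace

    def memorize(self, name, features):
        self.traces[name] = dict(features)

    def recall(self, name, context):
        """Deliver a past fact shaped by the current context, and inflect
        the stored trace toward that context: recall modifies memory."""
        trace = self.traces[name]
        keys = set(trace) | set(context)
        recalled = {k: (1 - self.plasticity) * trace.get(k, 0.0)
                       + self.plasticity * context.get(k, 0.0)
                    for k in keys}
        self.traces[name] = recalled   # the call itself is a modification
        return recalled

memory = OrganizationalMemory()
memory.memorize("obstacle", {"danger": 1.0, "novelty": 0.5})
first = memory.recall("obstacle", {"danger": 0.2, "urgency": 1.0})
```

After the recall, the stored trace for "obstacle" is no longer the originally memorized one: it has drifted toward the context of the call, which is the point of the mechanism.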

Fig. 6   The emergence of aspectual groups of aspectual agents, linked as graphs, on the screen of the prototype


The representation system is therefore going to call upon, or to undergo by necessity, the memorized events and facts of the organizational memory, whose ultimate components will be proactive aspectual agents. These awakened agents will then tend, because that is their nature as agents, to join groups forming the core of the current representation and inflecting it according to their roles. This mechanism we develop is a kind of work of processes by tendencies (Campagne 2005).

This mechanism of aggregation of aspectual agent activations, operating socially according to their roles, cannot be left without any control. A miraculous emergence does not exist. There will be, in the conduct of the representation system, in the semi-liberty left to the aspectual agents, a strong morphological control on the agents' aggregations. This control will be carried out according to semantic agents' indications with their roles, using organizational patterns of control (Fig. 7). These patterns control the morphologic development of the aspectual aggregations, using the co-activity of components and specific semantic notions of values, truth, rights, relevance.... The robot's artificial brain will be constructed to generate some artificial classes of concepts, according to its organizational memory but with limitations introduced at the construction stage. This specific system of control is also the subject of a patent filed in the USA (2005).
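A minimal sketch of this controlled aggregation, assuming a toy role/tendency encoding (all names are hypothetical): agents first aggregate by tendency, then an organizational pattern of control accepts or rejects the aggregation according to the semantic roles it covers.

```python
# Hedged sketch of morphological control: aspectual agents aggregate by
# tendency, and an organizational pattern of control accepts or bounds the
# aggregation according to semantic role indications. Illustrative names.

def aggregate_by_tendency(agents, tendency):
    """Group the agents whose roles answer the active tendency."""
    return [a for a in agents if tendency in a["tendencies"]]

def pattern_control(aggregation, required_roles, max_size):
    """An organizational pattern: the aggregation is kept only if it covers
    the required semantic roles and stays within a morphological bound."""
    roles = {a["role"] for a in aggregation}
    if not required_roles <= roles:
        return []                      # no miraculous emergence: reject
    return aggregation[:max_size]      # bound the morphologic development

agents = [
    {"role": "recognition", "tendencies": {"explore"}},
    {"role": "evaluation",  "tendencies": {"explore", "avoid"}},
    {"role": "memory",      "tendencies": {"avoid"}},
]
core = pattern_control(aggregate_by_tendency(agents, "explore"),
                       required_roles={"recognition", "evaluation"},
                       max_size=10)
```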

But the robot, in accordance with its body, must have an equivalent of the mammalian meso-limbic system: it will systematically feel emotions corresponding to releases of impulses, thereby solving the embodiment problem. It will always generate artificial thought in accordance with emotions.

Fundamental tendencies and aspectual state

The current representation generation is not an automated reaction to some input. To address the incentive of the current aspectual state construction, it is necessary to specify the notion of fundamental tendency. A fundamental tendency will be the constructivist transposition of the impulse concept in the sense of S. Freud (Freud 1966). We will then have to represent such a concept in a computable way.

Fundamental tendency

Adaptive systems are designed so that they must satisfy the general needs that preside, in a decisive manner, over their behaviors and their representations of things. These general, multiple and contradictory needs will be called the fundamental tendencies of the robot. These tendencies reside in the representation system as major modifier elements, operating at the level of the morphology of the representation system.

These tendencies are the reasons driving the behavior of the robot, first by reorganizing its current representation and second by allowing action in an adapted manner in its environment. With the necessity to act imposed by these fundamental tendencies, the robot will be led to various types of problems and will solve them in an interested manner.

Fundamental tendency, organizational definition

A fundamental tendency is an organizational slant inflecting the construction of the aspectual state, giving it some specific characteristics about the appreciation of the real context. This slant will operate, via the morphology, on the set of entities constituting the aspectual state. It is therefore a promoter tendency of the generated representation.

We consider the fundamental tendencies as numerous and contradictory. In the case where the system would only have one fundamental tendency at a time, or where all the active tendencies would be strongly in agreement, as a hierarchy with a permanent dominant one, the system would look like a reactive one with an explicit goal operating by multi-criteria choice.

The system must adjust its tendencies while conforming the plastic structure producing its aspectual state, with some constraints on the degrees of organizational liberty allowed to its basic elements. The behavioral consequence of the generation of such a representation will itself be qualified as  adaptive  (Meyer and Wilson 1990).
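The interplay of numerous, contradictory tendencies can be sketched as follows; the candidate states, the scoring functions and the weights are invented for illustration:

```python
# Sketch of fundamental tendencies as contradictory organizational slants:
# several tendencies weight the candidate aspectual states at once, with no
# permanent dominant one. Tendencies and weights are illustrative.

def select_aspectual_state(candidates, tendencies):
    """Each tendency scores every candidate state; the contradictory scores
    are summed, and the state best satisfying the ensemble is kept."""
    def slant(state):
        return sum(weight * score(state) for score, weight in tendencies)
    return max(candidates, key=slant)

# Candidate aspectual states described by coarse features.
candidates = [
    {"name": "explore", "risk": 0.8, "gain": 0.9},
    {"name": "rest",    "risk": 0.1, "gain": 0.2},
]
tendencies = [
    (lambda s: s["gain"], 1.0),        # curiosity pushes toward gain
    (lambda s: -s["risk"], 0.5),       # safety pushes against risk
]
chosen = select_aspectual_state(candidates, tendencies)
```

With a single tendency, the `max` reduces to a reactive multi-criteria choice; it is the coexistence of contradictory slants that gives the selection its adaptive character, as the text argues.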

Fig. 7   The patterns of control of the representation system: patterns for emotions, for logical deduction, for asking questions, for recognition, for management of memory, and other patterns, all acting on the current representation


The fundamental tendency notion will be seen as a bias in the generation of each current aspectual state. It will be necessary to move away from functional specifications for the emergence of the current state, in order to grasp the notions of leeway and quantifiable subjectivity.

Architecture of the representation system

We are going to specify the architecture of the system

generating the representations.

The co-active components

This system is composed of specific subsystems organically bound to one another, achieving their strong coupling (Fig. 8). All these subsystems have a multi-agent architecture.

1. The organizational memory. It is an event-driven, structured, dynamic memory, representing the robot's ‘‘artificial real lived experience’’. This system must organize facts, events, phenomena, cases... that it can use for the current generation, which is its current artificial thought. It will modify its memorized traces while using them, and it is clear that the extracted knowledge will always be presented according to the context and is never merely symbolic. Yet, knowledge about facts and events will be entered in the system in an objective shape, but in an adapted structure allowing dynamicity,

2. The system of construction of the current state, composed of the aspectual agent organizations, expressing the aspects of the different things of the represented world. These agents are rational and proactive entities that alter themselves while functioning. Each of them produces results that are inputs to others. This system is therefore a network of processes in re-conformation, endowed with semantics, which tampers with itself while functioning, in the sense of Atlan (1995). It builds features about the perceived current situation, structuring these agents towards some specific shapes during their activation,

3. The system of morphological control of the construction of the current state, viewed as a geometric aggregation of processes, expressing in a strictly geometric manner the way the results of computations, and the computations themselves, are produced by the agents and what precisely they form as a whole, in the organizational sense. This system expresses the conformation produced by computations and communications: it represents activities in a space of dynamic graphs (Campagne 2005). This system will be able, according to the conformation, to act really globally on the actions of the aspectual agents, to control the behavior of the agents' organizations,

4. The central process, linking in an organic relation the system of construction of the aspectual state with the morphological system, dragging the two systems into a continuous dialogic action we called the  mirror action  (Cardon 2004), where a modification in one of the systems immediately entails a modification in the other one, and so forth, until reaching a stable state in the co-activity of the two systems,

5. The system of commitment, leading to the generation of a new aspectual state, able to activate the aspectual organization and the morphological system without a predefined goal, unlike reactive systems. This system expresses intentionality and therefore the liberty to act organizationally; it essentially operates at the morphological level.

6. An assessment of the spatiotemporal change of the central process, which allows, according to the rhythm of the entities' activity, to detect and to achieve modifications, either brutal, as organizational ruptures, or slow and periodic, as pulsations; that will be the  mental shape  of the artificial emotions leading to representations and subjective behaviors.

7. The emotional system, external to the representation system, operating like the meso-limbic one, altering the construction of the current representation in the representation system through the introduction of impulses. It strongly uses the morphologic and commitment systems, disrupting them.

Fig. 8   The representation system and its components linked by the central process: organizational memory, morphologic control for description, anticipation morphology, emotional system, and aspectual organization
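A minimal sketch of the organic coupling that the central process establishes between these subsystems, assuming a toy publish-and-echo scheme (the class and the callbacks are hypothetical simplifications of the mirror action): a modification posted by one subsystem is immediately propagated to all the others.

```python
# Hedged sketch of the coupling of subsystems around the central process:
# a change in one subsystem immediately entails an echo in every other one,
# in the spirit of the "mirror action". All names are illustrative.

class CentralProcess:
    def __init__(self):
        self.subsystems = {}           # subsystem name -> change handler

    def register(self, name, on_change):
        self.subsystems[name] = on_change

    def notify(self, source, change):
        """Propagate a change from one subsystem to all the other ones."""
        echoes = {}
        for name, on_change in self.subsystems.items():
            if name != source:
                echoes[name] = on_change(change)
        return echoes

central = CentralProcess()
central.register("aspectual",  lambda c: f"aspectual reshaped by {c}")
central.register("morphology", lambda c: f"morphology redrawn by {c}")
central.register("memory",     lambda c: f"trace inflected by {c}")
echo = central.notify("aspectual", "new aggregation")
```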


Such a system, which evolves to represent and to analyze the conformations of all the active basic entities, can have three types of functioning:

1. The morphological control system only selects the most active and most applicable entities, according to metrics it can use (Campagne 2005). We are then in the case of a system producing some states adapted to the current situation, using a selector operating at the meta level.

2. The morphological control system radically drives the active basic entities as a whole; their global structure is fixed, but the control allows the basic entities the maximum of liberty. The system is then similar to a neural network operating by reflex on a substratum of autonomous entities.

3. The morphological control system is co-active with the aspectual entity organization, and the organization of basic entities continuously evolves. The system of commitment allows a free will, because the representation can develop its action as it wishes. We are in the very delicate case of co-activity, which is the foundation of the adaptive systems generating artificial thoughts. That is the case of our system.

The different organizations of the agent system

In the representation system, the basic entities are strictly rational weak agents: the  aspectual agents. These agents have to compose, by their activities and their communications, in fact by their aggregations, the characteristics of the current state. The choice to use agents is reasonable, leaving behind the rather reactive level of the formal neuron architectures.

We set, as a realistic hypothesis, that the consideration of symbolic elementary entities, endowed with some significance at the knowledge level, will permit the acquisition of a constructible and intelligible current representation. We state ‘‘entities endowed with their own tendencies to significance’’ and not entities reifying structural concepts and managed with meta-rules. The existence of such a basic proactive entity having minimal significance characteristics was the central hypothesis of L.S. Vygotski about the emergence of thought in the brain (Vygotski 1985). We follow this hypothesis.

The system will therefore be composed of six agents' organizations (Fig. 9):

•   The interface agents, bound to sensors and actuators,

•   The agents of the representation system:

•   The aspectual agents, for the construction of the current state,

•   The morphology agents,

•   The analysis agents, analyzing the morphology and intervening on the aspectual agents about the construction of the current aspectual state,

•   The organizational memory agents,

•   The emotional system agents.

The analysis agents  are going to provide a cognitive view of what has been expressed by the geometric and semantic information coming from the morphology agents, above the aspectual agent landscape; that is indeed an interpretation of dynamic graphs.

Let's note that this agent level will have a complex structure, because it will be a co-active set of agent organizations (aspectual, morphology, analysis), as

Fig. 9   The five organizations of agents of the system: the interface system (aspectual sensor agents, aspectual effector agents), the system of representation (aspectual agents, morphology agents, analysis agents), the agents of the organizational memory, and the emotional agent system


indicated in Fig. 9. In fact, it will not be sufficient to represent the generation of a representation under emotional constraint using only these rational agents, and we shall use other, very sophisticated techniques of modification of the construction of the emergent form in the aspectual organization, by the diffusion of ‘‘hecklers’’, i.e., non-deterministic messengers, among the aspectual agents. The stability of the system will be more difficult to maintain.

Vectorial expression of the agent behavior: the morphology

To control the aspectual organization, we represent actions, the aspectual agents' movements, with a type of active forms that are essentially the geometric characteristics representing the dynamic significance of their activations. This interpretation brings us closer to the morphogenetic spaces of R. Thom (1972), with their characteristics of regular or hard modification. The specific type of forms we have to use on the dynamic graphs is the key to the problem of significant emergence, linking geometric forms to semantics, and is the subject of an important patent filed in the USA (2005).

The main problem in the manipulation of a large set of agents is the representation and the control of the behavior of its organization. We have proposed a solution for the control that will be the foundation of our morphological hypothesis. One considers that the global significance of activities achieved in the aspectual organization can also be determined in a geometric way, by shapes spreading out in a dynamic space whose measures represent the results of computations, the speed of computations, the interactions between computations, the possible collisions of competing computations, the modifications of the functions, their relative importance and their cooperation.

We therefore represent the behavior of a set of agents independently of the problems they solve, outside of the specific semantics. We are going to associate the notion of geometric shape with the notion of behavior of an agents' group.

Shape

A shape is an element of  Rn with a Euclidean metric, associated with a semantic space such as a space of words.

Let's notice that the aspectual agents being rational entities, it is possible to associate with them a precise notion of organizational state.

Organizational state

An aspectual agent's organizational state is the set of meaningful general characteristics allowing the description and interpretation of its current situation at each moment in time and permitting the prediction of its behavior.

It is clear that one should always bring back each of these meaningful characteristics to an element of  R. So, an agent's state will be a point of  Rm if there are m characteristics defining the agent's state. The problem is to determine these characteristics.

 Activity map (Lesage 2000)

The activity map of an aspectual agent organization is a representation of the complete set of meaningful characteristics of the agents' behavior over a time.

The activity map corresponds to the mental map of a cerebral activity. To use the notion of shape, i.e., to represent an activity map by geometric shapes, it is necessary to first represent each agent by a vector of state. The characteristics of the agent's movement can be defined from the three following categories:

1. The agent's appearance, viewed from the outside, i.e., by the other agents. This category characterizes the agent's situation in relation to its environment,

2. The agent's internal appearance with regard to the goal it has to reach. It is about its state in relation to the goals that it must imperatively reach and in relation to the roles that it must necessarily assure,

3. The agent's state at the level of its own working, i.e., its internal dynamics. That is the measure of the quality of its own organization and of its variation in time.

We are then going to define the measure of an organizational space with eight characteristics deduced from these three categories:

(1) According to the agent's appearance in its immediate environment, that is according to its own situation in relation to its acquaintances and its environment, one keeps the three following notions:

•   Supremacy: that is the measure of whether the agent is located in a position of force in relation to its acquaintances, whether it has many allies, and whether its enemies are powerful.

256 Cogn Process (2006) 7:245–267

 1 3

Page 13: Artificial Consciousness, Artificial Emotions, And Autonomous Robots, Alan Cardon

8/13/2019 Artificial Consciousness, Artificial Emotions, And Autonomous Robots, Alan Cardon

http://slidepdf.com/reader/full/artificial-consciousness-artificial-emotions-and-autonomous-robots-alan 13/23

•   Independence: that is the measure of the agent's autonomy of action, specifying whether it is necessary for it to find allies to reach its goals.

•   Persistence: that is the measure of the agent's longevity, its life that can be brief or long.

(2) According to the appearance of the agent's internal structure, that is its state in relation to its assigned goals, one keeps the notions of:

•   Easiness: it is the measure of whether the agent reached its current state with ease or not. For example, if an automaton expresses the agent's behavior, this characteristic measures the ease of transition, the possibility to go back. This indicator also specifies the support or the resistance met by the agent's team from other agents' teams in reaching its goals.

•   Speed: it is the speed at which the agent reaches its assigned goals. It is for example the speed of getting over the states of its behavioral automaton, when the agent possesses one.

(3) According to the appearance of its internal organization, that is its working viewed as a process of communication between its modules and according to its structure, one keeps the following notions:

•   Intensity of the internal activation flux: it is the measure of the quantity of information exchanged between its internal components, leading to a visible activation,

•   Complexification: it is the measure of its structural transformation generated by some dysfunctions and having required the transformation of some elements of its structure. This measure determines whether the evolution is a simplification or a complexification of the agent's structure.

•   Organizational gap: it is an assessment taking into account the ability of the agent's structure to achieve some actions, leading the agent to an appreciation of the distance between its internal state and the global state that it distinguishes in its environment. It is the appreciation of the adequacy of the agent's situation in its world.

These eight characteristics can therefore be represented in R8 and permit to associate with every agent what one calls its  vector of aspect  as an element of  R8, which is an  organizational shape  (Fig. 10). These aspects are represented, while regrouping the agents according to their activities and communications using some appropriate metrics, by dynamic graphs on which we study the conformation and the change of conformation (Campagne 2005).
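A minimal sketch of the vector of aspect and of the metric-based regrouping into a dynamic graph; the coordinate values and the distance threshold are invented for illustration:

```python
# Sketch of the vector of aspect: each aspectual agent is mapped to a point
# of R^8 (supremacy, independence, persistence, easiness, speed, intensity
# of the internal activation flux, complexification, organizational gap),
# and agents are regrouped into a dynamic graph by thresholding the
# Euclidean metric. All numeric values below are invented.

import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def dynamic_graph(aspect_vectors, threshold):
    """Link the agents whose organizational shapes are close in R^8."""
    names = sorted(aspect_vectors)
    return {(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if euclidean(aspect_vectors[a], aspect_vectors[b]) < threshold}

aspects = {
    "agent1": (0.9, 0.2, 0.5, 0.7, 0.6, 0.8, 0.1, 0.3),
    "agent2": (0.8, 0.3, 0.5, 0.6, 0.6, 0.7, 0.2, 0.3),
    "agent3": (0.1, 0.9, 0.2, 0.1, 0.2, 0.1, 0.8, 0.9),
}
edges = dynamic_graph(aspects, threshold=0.5)
```

Here agent1 and agent2 have close organizational shapes and end up linked, while agent3 stays apart; studying how such edges appear and disappear over time is what the text calls the change of conformation.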

With the clear notion of specific organizational shapes, we can interpret them by matching in a semantic space, finally linking geometry and semantics:

Semantic interpretation of the shapes

According to the specific characteristics of the organizational shapes, we can match the geometric specificities of the forms in a semantic space, using the semantics contained in the agents as roles and knowledge rules.

The notion of commitment: why must the current representation emerge?

At this stage, we have an agent representation for semantic and geometric forms, this agentification coming from an ontology. We have a dynamic system with a morphological control. It is then necessary to specify what triggers the central process and what makes this process stop at one moment, in a specific configuration of the aspectual organization rather than in another one. Taking these constraints into account will noticeably complicate the schema of the representation system. It will be necessary to define some structural and organizational extensions.

The difficult questions about the construction of the system are the following: what leads the system to start a new representation? What is the reason getting the system into action and allowing the stabilization of its conformation at one moment? What is this quasi-stable reached state?

It is not a matter of defining a mechanism of automaton type, which would get the system into action upon reception of a well-known stimulus and would produce a compliant answer in a good manner. Nor is it to define a system that would create action by chance, based on a stochastic mode. One must remember the deep answer to the main question ‘‘why do we think?’’, given by M. Heidegger in a philosophical setting a long time ago (Heidegger 1959).

Fig. 10   Organizational characteristics of the aspectual agent's behavior: supremacy, independence, persistence, easiness, speed, intensity of the internal activation flux, complexification, organizational gap


What leads the representation system to engage its central process will first be  an internal necessity  at the morphologic level. The system will not be stable, in the sense that it must constantly activate its central process to reach a state of consistency. The notion of state will be a generalized one, unifying calculation and the representation of this calculation. The state of quasi-stability, the stability far from the equilibrium state, is only temporary, and it immediately leads to the production of another state, quasi-stable for a short moment, and so forth. The consistency will be ensured by the introduction of a rational structure constraining the aspectual activity, according to its shape and its semantics. There will therefore be components operating a control at the semantic level: projection of the morphology of the aspectual organization on semantic spaces, with in-line analysis. With these characteristics, the system will be continually active, constantly producing states of ephemeral stability with different rhythms (awake and sleep periods, for example).

We do not choose to represent the reason to activate

the central process with an ad hoc structure, of which we would not know what or why it would be activated. We choose to place the reason for the run into a specific activity in the morphological representation system itself. What is going to commit the system to run its central process towards some current state it does not know at this instant is the fact that this commitment is a latent indication present in the morphology, in the ‘‘presence of the present’’ of the system, using a phenomenological formulation (Heidegger), and therefore a reason that will be outside of the semantics at this time.

Very concretely, it means that the ‘‘shutter release’’

activating the central process in a direction of activity will be a specific  anticipatory system  situated in the morphological one. The morphological system describes the current representation of the aspectual organization and must also generate, at a time, the commitment to operate towards a new representation. The commitment towards an activity is expressed as a general form the representation system can take. This anticipatory system constrains the aspectual organization; it has ‘‘a temporal step’’ of advance on the global aspectual activity, which the central process is going to reduce. Then the central process can run its systemic loop, driving a commitment towards its efficient realization, putting into concordance the aspectual organization, whose form is destabilized by this commitment coming from the anticipation, with the morphological system.

The representation system therefore contains, besides the morphological representation, a system expressing a commitment as a shape that orients the aspectual organization and therefore modifies its active organization in some direction given by the tendency of the commitment. That will be the  morphological anticipation system. The most significant elements in the representation system and in the organizational memory, the most important among the inputs and among its latent geometric traces, those most in tension in the organization, bring about the representation of what will actually be computed by the aspectual organization, in the dialogic loop of the central process. And to be coherent with a non-reactive architecture, the anticipation system will be complex, always containing several commitments that can serve as initiators for the central process. These commitments will be produced by the fundamental tendencies evoked before, according to the organizational memory.

The architecture of the morphological system is therefore the following:

1. A morphological system of description of the aspectual representation, which geometrically expresses the current state of the aspectual organization,

2. A morphological anticipatory system, generating some shapes allowing the anticipation of the aspectual organization with specific geometrical conformations.

These two morphologic systems, which are multi-agent systems, oppose, alter and coordinate themselves until they constitute only one coherent and compliant geometrical representation of the aspectual organization. The anticipatory system, as its name indicates, engages the central process in a direction, according to a ‘‘certain aim’’ as P. Ricoeur (Ricoeur 1990) would say, and this process, in its organizational backward and forward motion between the aspectual and morphological systems, reinforces the commitment, alters it and, maybe, transforms it.

Now that we know that the representation system must ineluctably destabilize itself to function, according to internal latent tendencies, what makes it stabilize?

The answer will be clear. When there is conformity between the morphological space destabilized by its commitment and the aspectual organization, when these two systems are coherent from the point of view of the central process, when the central process leads to an organizational stationary point, the central process stabilizes the organizations for one instant.

But this state of relative stability must be memorized into the organizational memory. This memory is a complex system in the sense that it is a system ‘‘sensitive to its initial conditions’’, and the introduction of a new object is itself a destabilization. This insertion immediately


leads to the explicit construction of a new current state, since the reduction of this destabilization is the main principle of the system's running. The fact that the system generates a quasi-stable state, which will be a new internal object in the active organizational memory, ineluctably leads to the destabilization of the representation system, because its central process is always in activity. No state of equilibrium exists; the fundamental tendencies are multiple and are immediately active on each observable state. The anticipatory morphology leads the representation system into a new reorganization, according to the insertion of a new object into its memory, and the system continues its successive organizational transformations. The generation of artificial thoughts is a process like a fall without end.
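This endless cycle of destabilization and quasi-stabilization can be sketched as follows, assuming a toy one-dimensional state and a fixed convergence rate (both hypothetical simplifications of the mirror action):

```python
# Sketch of the endless generation cycle: an anticipated commitment
# destabilizes the system, the central process reduces the gap between the
# aspectual and morphological states until a quasi-stable point, and the
# memorization of that point seeds the next destabilization. Illustrative.

def central_process_step(aspectual, morphology, rate=0.5, epsilon=0.01):
    """Mirror action: the aspectual and morphological states pull each
    other until their gap falls under epsilon (the quasi-stable state)."""
    iterations = 0
    while abs(aspectual - morphology) > epsilon:
        midpoint = (aspectual + morphology) / 2
        aspectual += rate * (midpoint - aspectual)
        morphology += rate * (midpoint - morphology)
        iterations += 1
    return aspectual, iterations

def generate_thoughts(commitments):
    """Each memorized quasi-stable state starts the next reorganization."""
    memory, state = [], 0.0
    for anticipated in commitments:    # commitments from the tendencies
        state, _ = central_process_step(state, anticipated)
        memory.append(round(state, 3)) # memorizing destabilizes anew
    return memory

trace = generate_thoughts([1.0, -0.5, 0.25])
```

No equilibrium is ever final here: each quasi-stable state becomes the starting point of the next commitment, which is the "fall without end" of the text.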

So, we can set a precise definition of a system generating artificial thoughts:

Definition of a system generating artificial thoughts

•   Such a system is a software system binding an

unceasing informational stream produced by a body

or by many bodies.

•   The system is composed of a lot of computable

entities, each of them having a cognitive foundation

upon a semantic level, having the property of 

proactivity, of autonomy and structural evolution

and each entity can be systematically recombined

with some others.

•   These entities form a very structured organizationlike a set of artificial real life experiences, i.e., a

‘‘ve cu’’, that is an organizational memory com-

posed of strongly linked forms that can continu-

ously increase themselves by re-combination,

•   The system has a distributed control level on its

organizational memory, expressing multiple funda-

mental tendencies that are impulse controls, at the

rational and emotional meaning. These controls

have the form of morphological constraints on the

aggregations of the basic entities.

•   The system, at the level of the morphology of its entities, can construct, on its own and according to its current global state and tendencies, emergences of internal forms.

•   The generated emergences are geometrical objects composed of many proactive structured entities, linked to the objects of the real world through a relation ‘‘type of form–type of significance’’. An emerging form always triggers the activation of other objects related to it, allowing the continuity of artificial thought generation.

•   The system always maintains relations between each emerging object it generates and the other semantically or emotionally close objects of its organizational memory. All the generated and manipulated objects are linked.

•   The system can observe, manipulate and indeed play with the internal objects it has produced at any time and, through this on-line process on its own inner objects, it can have the sensation of thinking about these objects itself, in a morphologic and very dynamic space.

•   The system has a complex dynamic organization whose process of generation of emergent forms has the complexity of the set of parts of a set, i.e., exponential in the number of its entities.

The artificial emotions

We are going to define the architecture of a specific agent organization managing the production of emotions like pleasure or pain. The fundamental tendencies, in order to finely modify the current representation, will be expressed as the characteristics of some organizations of specific agents acting on the representation system and modifying the geometrical shape of this organization.

Self-adaptive component

That is a software component whose actions aim at its own satisfaction and whose results modify its internal organization: the component evolves under its own tendencies.
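As an illustration, this definition can be sketched in a few lines of code. This is only a toy reading of it (the class name, the weight table and the decay factor are our own assumptions, not part of the model): the component acts towards its own satisfaction, and the result of each action reorganizes its internal preferences.

```python
class SelfAdaptiveComponent:
    """Toy self-adaptive component: it acts in the direction of its own
    satisfaction, and each action's result modifies its internal organization."""

    def __init__(self, actions):
        # Internal organization: a preference weight for each possible action.
        self.weights = {a: 1.0 for a in actions}
        self.satisfaction = 0.0

    def act(self, environment):
        # Choose the currently preferred action (the component's own tendency).
        action = max(self.weights, key=self.weights.get)
        reward = environment(action)          # result of the action
        # The result reorganizes the component: reinforce, then let all decay.
        self.weights[action] += reward
        for a in self.weights:
            self.weights[a] *= 0.95
        self.satisfaction += reward
        return action
```

Repeated calls to `act` make the component drift towards whatever the environment rewards, which is the sense of ‘‘evolving under its own tendencies’’.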

Artificial emotion

An artificial emotion will essentially be seen as the rhythm of some self-adaptive components functioning in an emergent way in the aspectual agent organization, operating through synchronized loops while the system undertakes some typical external action. In this way, the system and the robot adopt a specific behavior.

Emotion and dynamics of the agent groups

The fundamental tendencies will be global characteristics leading to the re-organization of the aspectual agent organization. It is a strong organizational


hypothesis, since we will represent tendencies by types of shapes in the movements of the agent organizations.

The geometrical hypothesis about tendencies

The geometrical hypothesis, with regard to the expression of the fundamental tendencies, states that the tendencies can be represented by certain kinds of deformations of geometric shapes in the dynamic space expressing the movements of the aspectual agents’ organization.

In fact, we indeed have to deform the aspectual organization, giving a specific rhythm to this re-conformation. From this point of view, an emotion is a geometric alteration of the aspectual activity pertaining to a specific behavior of the robot.

The architecture of a system generating emotions is radically different from an input–output one running through predefined steps. We have to define the specific notion of ‘‘control of its own’’, i.e., a control managed by the continuous feedback of the control action itself. We use such a control for a system producing fuzzy states like the emotional ones, composed of proactive agents generating internal cycles of activity with specific rhythms, in accordance with the actually observed process of emotions in the brain and corresponding to different types of actual actions or behaviors. This architecture will be founded on the aggregation and breaking of aspectual agent groups.

Expression of an agent group

Because they are proactive, organizations of aspectual agents can be represented in an organizational way, where the form of the activities and the links between agents directly lead to an actual activity of the system, in continuously adaptive action and reaction with the environment.

Basic computable component producing the emotion: the computable oscillator

The biological description of emotional activity reports the existence of neuron domains accelerating in loops and interacting with each other to propagate flows of activation. We must build a system where the form of activity is made of emergent feedback loops: positive, negative and additive. We specify the basic architectural elements of the system with the following component: the computable oscillator, a computable interpretation of the well-known Wilson–Cowan oscillator (Wilson and Cowan 1972).

Computable oscillator

A computable oscillator is an organization of aspectual agents whose activation quickly forms, by its own functioning, many cycles of activity with specific intensity and speed. Its trigger takes the form of specific emotional agents.

Such an oscillator is an organization of weak software agents coordinating themselves, modifying their links, synchronizing with some others and acting on the aspectual organization. The oscillator leads to the emergence of an aspectual agent structure, distinct from the other aspectual agents of the organization. Such a group must emerge to control itself and the other groups. The organization passes from one state to another in which a looped process transforms a group of agents into an oscillator. Mathematically, this group is an emergent sub-graph in the coupled activation graph of the agent organization. Such an emergent oscillator leads to a specific adaptive activity of the robot and must control other attempts at emergent loops. Its trigger is composed of specific emotional agents.

The aspectual agent organization will be formed of a structured set of such computable oscillators, allowing global and local reinforcement as well as the inhibition or stopping of processes. There is no central controller in this system: control is distributed among the emerging components and in their synchronization through negotiations. After the action of the triggers, the system will function by a control, and also a regulation, of its own oscillators, with local limit cycles and a more or less conservative faculty.
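The Wilson–Cowan oscillator underlying this component can be reproduced numerically. The sketch below integrates a simplified (refractory-free) Wilson–Cowan pair with a plain Euler step; the coupling constants are the classic values reported for the oscillatory regime, and the function and parameter names are our own choices.

```python
import math

def wilson_cowan(steps=4000, dt=0.05, P=1.25, Q=0.0):
    """Euler integration of a simplified Wilson-Cowan pair: an excitatory
    population E and an inhibitory population I coupled in feedback loops."""
    def S(x, a, theta):
        # Sigmoid response function of a neural population.
        return 1.0 / (1.0 + math.exp(-a * (x - theta)))

    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0   # coupling strengths
    E, I = 0.0, 0.0
    trace = []
    for _ in range(steps):
        dE = -E + S(c1 * E - c2 * I + P, a=1.3, theta=4.0)
        dI = -I + S(c3 * E - c4 * I + Q, a=2.0, theta=3.7)
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)
    return trace
```

With the external inputs P and Q in a suitable range, E and I chase each other in cycles of specific intensity and speed, which is exactly the property the computable oscillator transposes to agent groups.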

The system state allowing artificial emotion

To raise an emotional process, i.e., a particular activity of the agent organization with a typical corresponding behavior of the body, it is necessary to start from a state that is clearly a neutral one. We will call ‘‘low state’’ the state without any emotion, or achieving an automatic behavior. In this state, only some specific aspectual agents are operational.


Low state

It is the state where the aspectual organization is in

an automatic activity or achieves a preprogrammed

automatic reactive action.

In the low state, the inputs will not trigger anything emotional and will amount only to reflex actions. We have to give autonomy to the groups of aspectual agents, in the sense of a general functioning with internal loops. The system must adjust to consolidate its structure for a while: some groups of agents are in progress and will supplant some others according to a rhythm, launching external actions in a specific fashion. Other agent groups can work in the background, without orders sent to the interface aspectual agents, but they may later reach an emergent state (phenomenon of domination). The adequacy between the external flux of information and the complex activation of the aspectual agent organization will allow the emergence of some self-adaptive components.

The development of the emotion: from the signal to the incentive

We set the hypothesis that each sensor that is not a strict alarm sensor corresponds, in a way, to an artificial sense in the system. So we have:

•   Sensors or routines of vision, temperature, touch or motion tracking...

The system feels its environment by different means that we call ‘‘artificial senses’’. Each sense, whatever it is, will be associated with a state of activity:

•   Sense active or not active, with a notion of intensity if the sense is active.
•   Sense active in an admissible way (towards pleasure) or in an irregular way (towards pain).
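The two state dimensions just listed can be carried by a small record attached to each artificial sense; the field names below are illustrative, not taken from the implementation.

```python
from dataclasses import dataclass

@dataclass
class ArtificialSense:
    """One artificial sense (vision, temperature, touch, motion tracking...)
    with its activity state: active or not, intensity, and valence."""
    name: str
    active: bool = False
    intensity: float = 0.0     # meaningful only when the sense is active
    admissible: bool = True    # True: towards pleasure; False: towards pain

    def valence(self):
        # Signed contribution pushed towards the aspectual interface agents.
        if not self.active:
            return 0.0
        return self.intensity if self.admissible else -self.intensity
```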

These sensory characteristics can easily be represented in the aspectual agents bound to some actuator agents, the interface agents. From some signal, considered as a starting shape putting the inputs of the system into action, an incentive is generated: a general tendency towards action. This incentive will be the trigger on the aspectual organization. It makes this aspectual organization focus towards some goal, with a plan of action defined in the analysis agents: a global plan, but with local indicators of sub-plans on different aspectual agents. The system has to maintain this plan for a time without tampering with it too much, and especially while evaluating it. The incentive is therefore a meaningful modifying activation of the current low state, with direct effect in the different organizations of agents, leading the representation system towards a typical behavioral activity aiming at some goal. For example, this activity could be, for the robot, ‘‘to take a pen to make a fuss of it’’, or ‘‘to move quickly to the right of the window’’ for its satisfaction and pleasure. There is a commitment to an actual physical activity corresponding to the modification of the aspectual organization. This signal will be triggered:

•   According to the state of the system,

•   According to the environment characteristics,

•   According to the complexity of the emotional

system,

•   In an irrepressible manner, and not randomly.

The signal is generated by a specific organization of agents having no direct contact with the aspectual ones, but taking their morphologies into account. In each case, the signal will be transmitted to the aspectual and analysis agents to lead the system as a whole towards a goal. We will represent the setting up of this signal using the fundamental organic tendency paradigm, as an emergence in a specific agent organization taking its information only from the morphology agents (Fig. 11). The origin of the signal is then considered as virtual.

Once this signal is activated, the representation system has to generate a global behavior. The signal is therefore sent to the analysis agent organization, activating a pattern of requisite typical behavior. This pattern of behavior, in fact a plan (flight, approach, seizure, restraint, gesture of a limb...), is managed by the analysis agents, in context, according to the system’s current possibilities and taking information from the aspectual agents. That pattern leads to the

Fig. 11   The agent architecture of the system generating the emotions


generation of a new specific behavior taking into account the represented situation. This plan is initially not very precise, but it becomes clearer as it progresses. It specifies:

1. The contexts of the past activities of the robot and the histories referring to the current case,
2. The immediately undertaken action,
3. The possibilities for going forward if the plan either succeeds or fails.

This global plan of action is transmitted to the aspectual agents and then generates many specific plans (with their local parts) for the aspectual actuator agents. This generation is negotiated quickly between them and corresponds to a behavioral scheduling. There are therefore behavioral aims defined in a global manner, with indications driving the local injunctions to the aspectual agents linked to actuators. One will say that there is the setting up of an incentive: an objective split into operational sub-objectives, into a specific aspectual agent activity. This incentive will be expressed in the incentive agent organization and will lead to a specific form in the system morphology (Fig. 12).

The artificial incentive

The artificial incentive is a global tendency expressed by a specific agent organization, the incentive agents, activated from the observation of certain characteristics in the system morphology. This tendency exerts a constraint on the aspectual organization and leads to a general plan of action distributed among different groups of aspectual agents. This plan brings about specific plans, with strong intensity coefficients, in all the aspectual agents. The incentive is expressed as a morphological form, which is the ‘‘form one wants to reach here and now’’.

The agents that generate this incentive, and thus cause it, are the incentive agents, strongly linked to the morphology and aspectual agents of the representation system. These agents accelerate upon a particular recognition sign in the current morphological organization. In return, they force some self-adaptive components to steer the system towards some kind of functioning, while first soliciting the analysis agents. They create a wanted form, i.e., the defined current goal, in the morphology, which the aspectual organization will have to reach.

This organization of incentive agents is always active in the representation system, but with more or less intensity. It constantly observes the state of the morphology of the aspectual agent organization and emits, at the right time, a specific signal launching the ‘‘incentive–emotion’’ process.

General algorithm

Begin
•   Continuous activation of the incentive agents
•   Morphological survey of the aspectual agents
•   Emergence of an incentive signal in the organization of incentive agents, and of a specific form (the wanted form) in the aspectual morphology
•   Activation of the analysis organization from this signal and from the wanted form, and generation of a behavioral pattern in the analysis agents
•   Development of the incentive
•   Generation of a typical behavior (form, goals and sub-goals)
•   Injunction of activation to the aspectual agents
•   Corresponding activation of the aspectual agents
End
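A toy rendering of this algorithm can make the flow concrete. Every data shape here is a guess for illustration (the actual agents negotiate rather than read dictionaries): the morphology is surveyed, an incentive signal emerges past a salience level, the analysis organization turns the wanted form into a plan, and the plan is split into sub-goals injected into the aspectual agents.

```python
def incentive_cycle(morphology, analysis, aspectual_agents, threshold=0.5):
    """One pass of the general algorithm (all names hypothetical):
    survey the aspectual morphology, emerge an incentive signal and a
    wanted form, derive a behavioral pattern, then activate the agents."""
    # Morphological survey: an incentive emerges only past a salience level.
    if morphology["salience"] < threshold:
        return None                      # stay in the low state
    wanted_form = morphology["dominant_form"]
    # Analysis organization: the wanted form becomes a behavioral pattern,
    # a global goal split into operational sub-goals.
    pattern = {"goal": analysis[wanted_form],
               "sub_goals": [(agent, analysis[wanted_form])
                             for agent in aspectual_agents]}
    # Injunction: every aspectual agent receives its local part of the plan.
    for agent, sub_goal in pattern["sub_goals"]:
        agent.setdefault("active_goals", []).append(sub_goal)
    return pattern
```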

We have to define in what manner the system continuously maintains the incentive during that time, notably by defining a ‘‘center of artificial pleasure’’ (Fig. 12). That will be the realization of the emotions.

From incentive to satisfaction: the artificial pleasure center example

We focus on the emotion of pleasure as an example of emotion. From a signal produced by the incentive agent organization, the system generates an incentive altering the analysis agent organization. That leads the system towards some typical behavior. We have to define a control system allowing a very flexible behavior, allowing its adaptation while holding a general organizational

Fig. 12   The general architecture of the system generating the emotions (blocks: aspectual agent organizations; agent organization of morphology; agent analysis organization; agent organization of satisfaction; incentive; physical expressed situation)


antagonistic action of impulses towards quiescent

states (Freud 1966).

Algorithm of organizational emergence of the representation system

The general operation of the system is the competition between numerous processes aggregated into groups, expressing specific actions and leading to the emergence of specific aspectual agent groups, i.e., the current artificial thought. The following algorithm (Fig. 14) describes the generation of a new state in the representation system, using competition between processes and using the concept of emotion as a modification of the current emergence.

The production of the representation system managed by these competing processes is the global state of the aspectual agent organization, where the morphological system and the aspectual organization are in conformity, under an expressed commitment that allows putting them into conformity. Such a state, steady for a short time, will be called an organizational emergence. That is precisely the temporary fixed point

Fig. 13   The form of the representation system and its interpretation as an emotion of pleasure (labels: inputs; phenomena expressed by aspectual agents; general morphology; current morphology; morphology of incentive)

General Algorithm

[To activate the emotional system]
[To activate the central process]
General infinite loop
Begin
[If the current emotion is adapted to the situation]
  to preserve this emotion as much as possible
  to distort the aspectual organization activity according to this emotion
[If a latent tension exists in the anticipatory system]
  to destabilize the representation system from this commitment
  to distribute the structural modification across the entire representation system
  to modify the current emotion
[If the system of morphology is active]
  to activate and to modify the aspectual organization using the central process organically linked to the morphological system
[If the aspectual organization is active]
  to activate the description morphology system using the central process
[If the central process is again in action of synchronization]
  to activate the aspectual organization and the morphologic system in a co-active way
  to reinforce the anticipation residing in the anticipatory morphology system
  to modify the emotional state as much as possible
[If the central process doesn't distort the aspectual organization anymore]
  to express the emergent current state that is the best for the current artificial thought
  to memorize the current state as an internal mental object that the system can use later
  to destabilize the morphology system while allowing the development of a new commitment from the current state
  to look for a new current emotion
End

Fig. 14   Algorithm of generation of meaning in the representation system
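The infinite loop of Fig. 14 can be caricatured numerically (the one-dimensional ‘‘organizations’’ and the coupling constant are our simplification, not the model): the central process co-activates the aspectual and morphological values until they agree, which is the temporary fixed point, the emergent state is memorized, and a perturbation restarts the fall without end.

```python
import random

def generate_emergences(n_thoughts=3, coupling=0.5, eps=1e-3):
    """Caricature of the Fig. 14 loop: converge to a fixed point (the
    organizational emergence), memorize it, then destabilize and repeat."""
    random.seed(0)
    memory = []                               # organizational memory
    aspectual, morphology = random.random(), random.random()
    for _ in range(n_thoughts):
        # Central process: co-activation until the two organizations conform.
        while abs(aspectual - morphology) > eps:
            aspectual += coupling * (morphology - aspectual)
            morphology += coupling * (aspectual - morphology)
        memory.append(round(aspectual, 3))    # memorize the emergent state
        # Destabilization: a fundamental tendency perturbs the morphology.
        morphology += random.uniform(-1.0, 1.0)
    return memory
```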


of the central process, the emergence of the meaning in

the representation system, the concept of meaning

being understood and interpreted in a strictly organi-

zational way.

Emergence of the meaning

In the representation system, based on a central process coupling the aspectual organization and the morphological system, the emergence of the meaning is the organizational fixed point of the central process leading towards a specific conformation. This state is memorized as an internal object the system is able to manage later, ensuring its continuous learning.

It is obvious that the structure and the characteristics of the aspectual and morphologic systems are more complex in the actual implementation than in the description we have presented. In particular, the existence of latencies generating commitments in the representation system is not so simple, and led us to introduce the concept of local attractors (Thom 1972), seen as fields that create the emergence of local agent aggregations in the different agent systems. The morphological system is not concretely represented as totally distinct from the aspectual one: the two systems, aspectual and morphological, are intimately mingled in one vast agent organization allowing evolution, the creation of new morphologic shapes and of new aspectual agents, without external intervention. The system will be composed of different types of evolving agents; the model presented distinguishes between them in order to understand and study them. The aspectual organization is an organization expressed by its morphology, and the morphology is another, rather virtual, organization allowing the aspectual one to have a specific behavior according to its activity. The representation system is therefore a strictly self-adaptive system, a sensitive observer of its own activity, as J. Pitrat specifies about the consciousness property (Pitrat 1993).

Such a system allows the representation of the notion of artificial self-awareness. Its activity, very adaptive to its own conformations, sensitive to constantly changing initial conditions coming from internal, changing, manageable objects, and accelerated by the commitments caused by the structure of its morphological system, permits it to observe itself in its action of computation, in the generation of explicit aspectual emergences from the morphology of the representation space. These explicit emergent states can later be distinguished as internal objects, and so, while following its tendencies, the system will be able to know itself as the author of its reorganizations, causing them at will. The mechanism allowing this self-observation, if we insert it for each emergence, gives the system an aptitude to make a strict distinction between itself and the other things of the environment with which it interacts and is co-active, i.e., to be conscious of itself (Ricoeur 1990). The notion of perceived own embodiment then becomes clearer.

Results

The model and the developed concepts have been applied to the behavior of an Aibo™ robot (Camus and Cardon 2005). The generation-of-representations prototype, which is the first implementation of the model, allows:

•   To capture information from the robot sensors,
•   To constitute an organizational memory and to enrich it using an ontology, through a Man–Machine Interface,
•   To define a set of parameters for the system generating emotions,
•   To generate on-line current representations as organizational emergences, according to an organizational memory, with an always active emotional state and the perception of the things of the environment it can intentionally manipulate,
•   To visualize this current state on a Man–Machine Interface.

The system is experimented on an Aibo ERS7 robot, using a WiFi connection between the robot’s body and the system, which is loaded on a classical computer for good performance. The robot can evolve cleverly in an unknown environment, in which it must be able to collect data, analyze, make decisions and act according to its self-defined goals. This approach to decision-making divides these capacities into six crucial levels:

1. Represent a contextual situation.
2. Direct the attention on particular elements (objects or actions in the environment).
3. Feel emotions about these selected elements.
4. Build behavioral action plans.
5. React on the focused object, in a systemic loop (adaptation of the current plans).
6. Memorize the action when achieved.
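The six levels can be strung together in a single sketched cycle. The sensor records, emotion labels and plans below are invented placeholders, not the prototype’s actual data:

```python
def decision_cycle(sensors, memory):
    """Sketch of the six decision-making levels (hypothetical data shapes):
    represent, attend, feel, plan, react, memorize."""
    # 1. Represent a contextual situation.
    situation = {"objects": sensors, "context": memory[-1] if memory else None}
    # 2. Direct the attention to the most salient element.
    focus = max(situation["objects"], key=lambda s: s["salience"])
    # 3. Feel an emotion about the selected element.
    emotion = "pleasure" if focus["valence"] > 0 else "pain"
    # 4. Build a behavioral action plan.
    plan = ["orient", "approach" if emotion == "pleasure" else "avoid"]
    # 5. React on the focused object (one pass of the systemic loop).
    action = plan[-1]
    # 6. Memorize the achieved action.
    memory.append({"focus": focus["name"], "emotion": emotion, "action": action})
    return action
```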

These six levels are developed under the multi-

agent paradigm. The system is developed with the


Oz/Mozart system. Oz is a multi-paradigm language: scripting, object, logic and constraint programming. It allows us to use different paradigms, such as concurrency to develop a multi-agent system with asynchronous communication using message passing, or constraint programming to create different action plans. Some results are accessible on the web site www.artificial-brain-project.com. We are in the process of setting up a new venture for the actual and complete implementation of the system, now including the Freudian psychoanalytic theory.

Conclusion

For the construction of an artificial consciousness, it is really necessary to have a specific theory of the mind, and a truly constructivist one. Maybe I shall write about my theory of the mind one day...

The emergence of meaning in the behavior of an autonomous robot was presented as an ephemeral stabilization of a complex system in an organizational way, a system able to observe and feel itself using an artificial brain. The meaning emerges as a re-conformation of multi-agent systems expressing themselves in an imperative manner, as a geometrical observation of large groups of agents. The fact that the behavior of a robot is capable of significance is founded on a process of strong linkage between the computation of multi-agent organizations and the semantic interpretation of the morphologies of their computable activities.

Such a linking process, binding the parts to the whole and binding the activities of groups of agents to their significance expressed by morphology, is of considerable importance. It is the fundamental principle of activity of self-adaptive systems. It generalizes the notions of feedback and systemic loop (Le Moigne 1990) and opens onto autonomous systems with the notion of generation of expressive states as constructivist emergence.

A system that generates a meaningful behavior using the capacities of its body, by carrying out an organizational emergence in massive multi-agent systems, has a complex structure, conceptually and at the level of the implementation, that satisfies the remarks of J. Searle on the notion of infinite background (Searle 1992). But this organization can also produce a representation of itself, of its own morphology, expressed as an internal manageable complex object, thus opening onto the notion of ‘‘its own internal’’ objects. Then the system can use its morphology as a commitment to act; morphology is at the same time geometric and cognitive, it is the sign of an organizational semiotics, it summarizes the process of reorganization together with its result. And the result of this constructive self-observation can be delivered by the system to any human observer. In this way, such a system can express itself as the author of its internal activities, with its own intentions, rather than as a gentle system merely displaying values to its human users. The difference between such a system, expressing activities by itself according to its own intentions, and one that would merely display predefined information in a well-adapted rational manner is then considerable, and marks a rupture in the very vast field of today’s computer science.

References

Atlan H (1995) Projet et signification dans les réseaux d’automates, le rôle de la sophistication. In: L’intentionalité en question. Vrin, Paris
Bertalanffy L von (1973) Théorie générale des systèmes. Bordas, Paris
Brooks R (1991) Intelligence without reason. In: Proceedings of the 1991 international joint conference on artificial intelligence, pp 569–591
Campagne J-C (2005) Systèmes multiagents et morphologie. PhD thesis, Université de Paris 6, septembre 2005
Camus M, Cardon A (2005) Towards an emotional decision-making system. Second GSFC/IEEE WRAC 2005: workshop on radical agent concepts, NASA Goddard Space Flight Center
Cardon A (1999) Conscience artificielle et systèmes adaptatifs. Eyrolles, Paris
Cardon A (2004) Modéliser et concevoir une machine pensante. Vuibert, Paris
Cardon A (2005) La complexité organisée, systèmes adaptatifs et champ organisationnel. Hermès-Lavoisier, Paris
Clark A (1996) In: Penser l’esprit, des sciences de la cognition à une philosophie cognitive. Presses Universitaires de Grenoble, Grenoble, pp 105–111
Clergue G (1997) L’apprentissage de la complexité. Hermès, Paris
Dautenhahn K (1997) Biologically inspired robotic experiments on interaction and dynamic agent-environment coupling. In: Proceedings of workshop SOAVE’97, Ilmenau, pp 14–24, September 1997
Ferber J (1995) Les systèmes multi-agents. InterEditions, Paris
Freud S (1966) The complete psychological works of S. Freud. Strachey J (ed) The Hogarth Press, London
Heidegger M (1959) Qu’appelle-t-on penser? Presses Universitaires de France, Paris
Le Moigne J-L (1990) La modélisation des systèmes complexes. Dunod, Paris
Lesage F (2000) Interprétation adaptative du discours dans une situation multiparticipants: modélisation par agents. PhD thesis, Université du Havre, décembre 2000
Mataric M (1995) Issues and approaches in design of collective autonomous agents. Robotics and Autonomous Systems 16:321–331
Meyer JA, Wilson SW (1990) From animals to animats: first international conference on simulation of adaptive behavior. MIT Press, Cambridge
Peirce CS (1984) Textes anticartésiens. Collection Philosophie de l’esprit. Aubier, Paris
Pitrat J (1993) L’intelligence artificielle: au-delà de l’intelligence humaine. In: De Béchillon D (ed) Le cerveau: la machine-pensée. L’Harmattan, Paris
Ricoeur P (1990) Soi-même comme un autre. Seuil, Paris
Searle JR (1992) La redécouverte de l’esprit. Gallimard, Paris
Thom R (1972) Stabilité structurelle et morphogénèse. W.A. Benjamin, Reading
Vygotski LS (1985) Pensée et langage. Editions Sociales, Paris
Wilson HR, Cowan JD (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12:1–24
