Distributed Computing and Memory Within Parallel Architecture of Cellular Type: An outline of a “brain module” Based on US patents 7426500 & 864598

Nov 27, 2014


A new way of information processing with a parallel computer architecture of a cellular type.
Transcript
Page 1: Distributed computing and memory

Distributed Computing and Memory Within Parallel Architecture of Cellular Type:

An outline of a “brain module”

Based on US patents 7426500 & 864598

Page 2: Distributed computing and memory

The philosophy behind the EYEYE system

Douglas Hofstadter suggests that perception is inseparable from high-level cognition, where perceptual architecture is at the heart of cognition, a parallel architecture “in which bottom-up and top-down processing co-exist gracefully”. Ray Kurzweil’s pattern recognition theory of mind (PRTM) talks about “a basic ingenious mechanism for recognizing, remembering, and predicting a pattern, repeated in the neo-cortex hundreds of millions of times” and organized in a hierarchy of increasing levels of abstraction.

Page 3: Distributed computing and memory

The “brain module” that I am presenting starts processing perceptual input immediately, abstracts it in successive layers, and memorizes it as networks of active nodes, then transmits those networks as an input layer to other modules. It can recall, recognize, and predict all possible patterns of its local environment in real time. Permit me to present this project as a highly abstract algorithmic visual description, as if it were an actual 3D system. When elaborated, it may be implemented as software, hardware, or better yet, as an interacting multi-agent collection of nodes and modules.

Page 4: Distributed computing and memory

The EYEYE system

It is based on 64 visual pattern primitives, where binary positions are assigned to specific locations in a 2D structure. These “visual bytes” can combine into billions of combinations and can represent any “image”. The expanded system forms meaningful abstractions at successive layers of complexity, and could create an internet “neo-cortex” able to deal with “fast data” in real time.

Page 5: Distributed computing and memory

Each processing cell (PC) unit can be considered a tile, which is then repeated at different levels.

The Patch: APN = 0101111 = 47

The Active Patch Number (APN) is formed from the active PCs in order to identify where a given PC pattern was located.

2⁴² possible patterns reduced to 128 APN patterns. This can be considered as vector quantization.
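As a hedged illustration of this reduction, the sketch below treats each PC output as a 6-bit “visual byte” and marks which of the 7 PCs in a patch are active at all; the bit ordering and sample values are assumptions, not taken from the patents.

```python
# Illustrative sketch: reduce 7 six-bit PC outputs (2^42 combinations)
# to a 7-bit Active Patch Number (128 combinations) by marking which
# PCs are active at all. Bit ordering is an assumption.

def active_patch_number(pc_outputs):
    """pc_outputs: list of 7 PC values (0..63), index 0 = lowest bit."""
    apn = 0
    for position, value in enumerate(pc_outputs):
        if value != 0:                 # PC is active
            apn |= 1 << position       # set that PC's bit in the APN
    return apn

# Example: five of the seven PCs are active -> APN = 0101111 (binary) = 47
print(active_patch_number([5, 63, 12, 7, 0, 20, 0]))  # -> 47
```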

Page 6: Distributed computing and memory

Patches

The central Patch is surrounded by 6 other patches, each representing a 3D direction:

Horizontal left: binary position 32
Horizontal right: binary position 4
Vertical top: binary position 1
Vertical bottom: binary position 8
Horizontal back: binary position 2
Horizontal front: binary position 16

2⁴⁹ possible patterns.
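A minimal sketch of this direction-to-bit assignment, assuming the bit values listed above; the names and the helper function are illustrative only.

```python
# Binary positions assigned to the six surrounding patches plus the center
# (the center's position 64 appears on a later slide). Names are illustrative.
DIRECTION_BITS = {
    "vertical_top": 1,
    "horizontal_back": 2,
    "horizontal_right": 4,
    "vertical_bottom": 8,
    "horizontal_front": 16,
    "horizontal_left": 32,
    "center": 64,
}

def patch_code(active_directions):
    """Combine a set of active directions into one binary-coded number."""
    code = 0
    for name in active_directions:
        code |= DIRECTION_BITS[name]
    return code

# Example: left, bottom, right, back and top active -> 32+8+4+2+1 = 47
print(patch_code(["horizontal_left", "vertical_bottom",
                  "horizontal_right", "horizontal_back", "vertical_top"]))
```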

Page 7: Distributed computing and memory

Multiple Inputs

Hexagonal units have been chosen because they translate easily into a three-dimensional Memory Unit (MU). Three different inputs, such as image, sound, and metadata, can combine into an input layer, and each type can still be analyzed on its own.

Page 8: Distributed computing and memory

Architecture of a Memory Unit (MU)

Each memory unit is made of 64 hexagonal Dedicated Cells (DCs) arranged into 8 truncated octahedra. Each DC represents one primitive “visual byte”. The ends of each MU connect with surrounding MUs at corresponding DC numbers, making a seamless 3D MU complex. Each DC communicates through its edge binary places with 6 neighboring DCs, each differing from it in only one binary place.
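Because each DC differs from its 6 neighbors in exactly one binary place, the neighborhood of a DC can be generated by flipping single bits of its 6-bit number. A small sketch, assuming DCs are simply numbered 0-63:

```python
# Sketch: each DC is a 6-bit number (0..63); its 6 neighbors differ from it
# in exactly one binary place, so they are found by flipping one bit each.
def dc_neighbors(dc):
    return [dc ^ (1 << bit) for bit in range(6)]

print(dc_neighbors(35))   # 35 = 100011 -> [34, 33, 39, 43, 51, 3]
```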

Page 9: Distributed computing and memory

Joining of Z-axis MU units

Each patch corresponds to one 2D as well as one 3D location. That permits the input layer to be translated into a 3D structure. Yellow hexagons represent DCs that have been activated by the Patch.

Horizontal back: binary position 2
Center: binary position 64
Horizontal front: binary position 16

Page 10: Distributed computing and memory

Joining of the whole activated complex

These 7 patches create a Memory Complex that is difficult to visualize; it represents the input field and permits further analysis and classification.

It also solves the binding problem of connecting disparate patches into a unified 3D structure.

Page 11: Distributed computing and memory

Patch PCs send their output number to all DCs at once, each connecting to the DC binary position corresponding to its own patch position. A DC changes that binary position from 0 to 1 when the PC output corresponds to what the DC represents. All activated binary positions in a DC create a DC Number (DCN). For example, DC 35 receives input in positions 1 and 8, therefore it has DCN = 0001001 = 9.
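A sketch of this bookkeeping, assuming a DC simply ORs in the binary place of every patch position whose PC output matches it; the matching test is simplified, since the slides do not specify it further.

```python
# Sketch: a DC accumulates a DC Number (DCN) by setting the binary place of
# each patch position whose PC output matches what the DC represents.
class DedicatedCell:
    def __init__(self, dc_id):
        self.dc_id = dc_id
        self.dcn = 0

    def receive(self, patch_position_bit, pc_output):
        # Simplified matching test; the slides only state that the bit is set
        # "when the PC output corresponds to what the DC represents".
        if pc_output == self.dc_id:
            self.dcn |= patch_position_bit

dc35 = DedicatedCell(35)
dc35.receive(1, 35)   # input in binary position 1
dc35.receive(8, 35)   # input in binary position 8
print(dc35.dcn)       # -> 9, i.e. 0001001, as in the slide's example
```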

Page 12: Distributed computing and memory

Activation of the MU by the central patch

In this example the activated DCs are 3, 12, 35, and 39 (yellow).

If a DC is activated directly from the patch, it spreads its activation to adjoining DCs connected to its “0” binary places, and inhibits DCs connected to its “1” binary places.

Each activated DC calculates its Activation Number (AN), composed of the Patch activation and the surrounding DC stimulation and inhibition numbers.

Yellow = DCs activated by the patch.
Red = DCs activated by other DCs in “1” binary places.
Blue = DCs inhibited by other DCs in “0” binary places.
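The spreading step can be sketched roughly as follows, using the rule stated in the first paragraph of this slide (stimulation through “0” binary places, inhibition through “1” binary places); the +1/-1 scoring and the Activation Number as a plain sum are assumptions.

```python
# Rough sketch of one spreading step: a patch-activated DC stimulates the
# neighbors reached through its '0' binary places and inhibits those reached
# through its '1' binary places. The +1/-1 scoring is an assumption.
from collections import defaultdict

def spread(patch_activated_dcs):
    activation = defaultdict(int)          # AN contribution per DC
    for dc in patch_activated_dcs:
        activation[dc] += 1                # direct patch activation
        for bit in range(6):
            neighbor = dc ^ (1 << bit)     # differs in one binary place
            if dc & (1 << bit):            # '1' place -> inhibit
                activation[neighbor] -= 1
            else:                          # '0' place -> stimulate
                activation[neighbor] += 1
    return dict(activation)

print(spread([3, 12, 35, 39]))             # the example DCs from this slide
```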

Page 13: Distributed computing and memory

Architecture of Inner (IN) and Outer (ON) Nodes

Each Node gathers the DCs facing its 8 faces, all of which have some common binary positions. There are 8 types of IN nodes and 8 types of ON nodes (labeled from 000 to 111). Each DC connects to only one Inner Node (IN) and only one Outer Node (ON).

Blue = IN nodes
Yellow = ON nodes

Page 14: Distributed computing and memory

DC connections with IN & ON node binary places

Each DC corresponds to a given binary number position in the node, so the node calculates its DC activation number (DCA).
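A minimal sketch of the DCA computation, assuming each of the node's 8 DCs owns one binary position; the ordering is illustrative.

```python
# Sketch: a node computes its DC activation number (DCA) from its 8 DCs.
# Each DC owns one binary position in the node; the ordering is assumed.
def dc_activation_number(dc_states):
    """dc_states: list of 8 booleans, index 0 = lowest binary position."""
    dca = 0
    for position, active in enumerate(dc_states):
        if active:
            dca |= 1 << position
    return dca

# Example: DCs in positions 0, 2 and 7 are active -> 1 + 4 + 128 = 133
print(dc_activation_number([True, False, True, False,
                            False, False, False, True]))
```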

Page 15: Distributed computing and memory

Recall of a DC from IN/ON nodes

Each node has 8 DCs associated with it, all of which have some binary positions in common. For example, if IN 011 and ON 101 were activated, in recall that would point only to DC 57.

Binary positions: 32 16 8 4 2 1
Bits:              1  1 1 0 0 1  (= 57)

Page 16: Distributed computing and memory

Node network architecture

• Each node connects to 6 surrounding nodes in assigned binary directions, sends a handshake request to them, and if they are active, it inscribes a 1 in the appropriate binary position, thus creating a Node Activation (NA) number.

• The Node Activation (NA) number has 64 possibilities. Suppose nodes 1, 4 and 32 were active; then the NA for the central node would be 100101 = 37.
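The handshake can be sketched as follows; is_active() stands in for the handshake request, and the bit assigned to each direction is an assumption.

```python
# Sketch of the Node Activation (NA) number: the node asks each of its six
# neighbors whether it is active and sets that neighbor's assigned bit.
NEIGHBOR_BITS = [1, 2, 4, 8, 16, 32]

def node_activation(neighbors, is_active):
    na = 0
    for bit, neighbor in zip(NEIGHBOR_BITS, neighbors):
        if is_active(neighbor):        # stand-in for the handshake request
            na |= bit
    return na

# Example from the slide: neighbors on bits 1, 4 and 32 are active -> 37
active_set = {"n1", "n4", "n32"}
print(node_activation(["n1", "n2", "n4", "n8", "n16", "n32"],
                      lambda n: n in active_set))   # -> 100101 = 37
```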

Page 17: Distributed computing and memory

Memory associations at node level

The total possible number of surround activations for each NA0 is 32⁶ = 1,073,741,824. All of these states can be represented in a 3D matrix in the node’s non-volatile memory: N(i,j,k) = (NA0, (W1, …, W32), WT), where WT = ∑(W1, …, W32) and (W1, …, W32) represent the connections WX = (NAX + NAX=1) with the surrounding nodes. For each NA0/WT there are many possible rearrangements among (W1, …, W32). Each NA0/WT pair position records an ID for all (W1, …, W32) cases, acting as a hash bin on a more abstract level.
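A hedged sketch of this bookkeeping: memory keyed by the (NA0, WT) pair, with each key acting as a hash bin that collects an ID for every concrete (W1, …, W32) arrangement. The dictionary layout and sample values are assumptions.

```python
# Sketch: node memory indexed by (NA0, WT). Each entry acts as a hash bin
# recording an ID for every concrete arrangement of surround weights
# (W1, ..., W32) that summed to that WT. The structure is an assumption.
from collections import defaultdict

class NodeMemory:
    def __init__(self):
        self.bins = defaultdict(list)      # (NA0, WT) -> list of case IDs

    def record(self, na0, weights, case_id):
        wt = sum(weights)                  # WT = sum(W1, ..., W32)
        self.bins[(na0, wt)].append(case_id)

    def recall(self, na0, wt):
        return self.bins.get((na0, wt), [])

mem = NodeMemory()
mem.record(na0=37, weights=[2, 0, 5, 1], case_id="case-A")
mem.record(na0=37, weights=[4, 4, 0, 0], case_id="case-B")
print(mem.recall(37, 8))                   # -> ['case-A', 'case-B']
```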

Page 18: Distributed computing and memory

Combining the node’s input from surrounding nodes (ID) and the input from the DCs (DCA) produces a memory matrix M(i,j) = (DCA, ID), where each address represents the node’s state U = ∑(DCA + ID), which can distinguish between identical IDs.

DCs form a memory matrix D(i,j) = (UI, UO) combining IN and ON states. Each matrix address is given an identity number (IDN). As each DC also receives a Patch Position Number (PPN) identifying the DC’s Patch position(s), the DC combines them in a memory matrix DP(i,j) = (PPN, IDN), the DC’s state.

An activated node can have 256 activation states, represented by DCA, formed from 8 binary positions.

Page 19: Distributed computing and memory

One MU contains all 8 IN nodes, as shown on the right. A node uses its NA to search its memory for the matches with the greatest strength, and sends NN queries to active neighboring nodes. The node checks the positive-feedback NN sum against a threshold, fixing the bonds if it exceeds it; otherwise the node performs a new memory search and sends a new NN query. If there are no more queries, a new NN is calculated between all active nodes. Positive feedback increases memory strength, and negative feedback decreases it.
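The search-and-feedback loop can be sketched roughly as below; all the data structures are illustrative, and only the control flow (search by strength, query, threshold test, positive/negative feedback) follows the slide.

```python
# Rough sketch of the node's settle loop: search memory for matches in order
# of strength, query active neighbors, and accept the first match whose
# positive-feedback sum exceeds the threshold. Data here is illustrative.
def settle(matches, neighbor_feedback, strengths, threshold):
    """matches: candidate NNs sorted by memory strength (strongest first);
    neighbor_feedback: dict NN -> list of feedback values from neighbors."""
    for nn in matches:
        feedback = sum(neighbor_feedback.get(nn, []))
        if feedback > threshold:
            strengths[nn] = strengths.get(nn, 0) + 1   # positive feedback
            return nn                                  # bonds are fixed
        strengths[nn] = strengths.get(nn, 0) - 1       # negative feedback
    return None   # no query succeeded: a new NN is calculated between nodes

strengths = {"NN-7": 3, "NN-2": 1}
print(settle(["NN-7", "NN-2"],
             {"NN-7": [0, 1], "NN-2": [1, 1, 1]}, strengths, threshold=2))
```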

Page 20: Distributed computing and memory

ON node architecture in 3D

27 ONs create an active “cloud” surrounding the IN network. Inhibited ONs initiate path finding through non-inhibited ONs in order to “stitch together” separate areas of activation.

Page 21: Distributed computing and memory

ONs are always active, starting with the activation state HX = 63 (111111), which is inhibited by active DCs (changing 1s to 0s). The inhibition of the ONs is not done entirely by one MU, because their activation also depends on the surrounding MUs, as shown by the blue ONs.
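A small sketch of the inhibition rule, assuming each active DC clears one assigned bit of the ON's 6-bit state; which DC maps to which bit is not specified here.

```python
# Sketch: an ON node starts fully active (HX = 63 = 111111) and each active
# DC clears one of its bits. Which DC maps to which bit is an assumption.
def on_state(active_dc_bits):
    hx = 0b111111                      # 63: the uninhibited starting state
    for bit in active_dc_bits:         # bits cleared by active DCs
        hx &= ~(1 << bit)
    return hx

print(bin(on_state([0, 3])))           # -> 0b110110, partially inhibited
```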

Page 22: Distributed computing and memory

Learning

As an example, a random patch creates IN and ON networks that are given a Network Number (NN), and each active node memorizes the same NN.

Patch | IN-network | ON-network

Page 23: Distributed computing and memory

A similar pattern reinforces the IN and ON networks. Each active node checks the NNs in the surrounding nodes and adopts the majority NN, with lesser strength if it did not belong to the earlier network, and increased strength if it did.

Similar Patch (6/42 = 14% difference)
Similar IN-network (0/5 WX, 0% difference)
Similar ON-network (15/29 WX, 52% difference)
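The majority rule described on this slide can be sketched as follows; the exact strength increments are assumptions.

```python
# Sketch of the learning rule: adopt the majority NN among surrounding active
# nodes, with increased strength if this node already belonged to that
# network and lesser strength if it did not. The +2/+1 amounts are assumed.
from collections import Counter

def update_node(own_nns, neighbor_nns, strengths):
    """own_nns: set of NNs this node already memorized;
    neighbor_nns: list of NNs held by surrounding active nodes."""
    majority_nn, _ = Counter(neighbor_nns).most_common(1)[0]
    delta = 2 if majority_nn in own_nns else 1     # stronger if already known
    strengths[majority_nn] = strengths.get(majority_nn, 0) + delta
    own_nns.add(majority_nn)
    return majority_nn

strengths = {"NN-1": 3}
print(update_node({"NN-1"}, ["NN-1", "NN-1", "NN-4"], strengths), strengths)
```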

Page 24: Distributed computing and memory

A different pattern creates different networks

These networks get different NNs, because the surrounding active nodes may not have majority NNs that exceed the threshold.

Different pattern (16/42 = 38% difference)
Different IN-network (5/5 WX, 100% difference)
Different ON-network (20/29 WX, 69% difference)

Page 25: Distributed computing and memory

Transformation of networks into a 2D Node Input-Output Layer (NIOL) pattern for other modules

Only 6 of the 8 ON nodes surround each IN node in the NIOL. The missing nodes can be logically derived from those ON nodes. The illustration represents a 2D summary for one patch. Six other 2D patch summaries can fit around it and create an abstract analyzed pattern representing the original input.

Page 26: Distributed computing and memory

NIOL patch construction from the output of the original module

The image on the left shows how the transformed 2D NIOL is converted into Patch PCs for the new module when the threshold is T = 2. The image on the right shows the decimal-equivalent outputs of the PCs in that new Patch.
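A sketch of the thresholding step, assuming each NIOL cell that reaches the threshold T contributes one bit to the new Patch PC; the cell-to-bit mapping and sample values are illustrative.

```python
# Sketch: convert a 2D NIOL summary into the PC bits of a new Patch by
# thresholding (T = 2 in the slide's example). The cell-to-bit mapping and
# the sample values are assumptions.
def niol_to_pc(niol_values, threshold=2):
    """niol_values: list of 6 accumulated node values, index 0 = bit 1."""
    pc = 0
    for position, value in enumerate(niol_values):
        if value >= threshold:
            pc |= 1 << position
    return pc

print(niol_to_pc([3, 0, 2, 1, 2, 0]))   # -> 1 + 4 + 16 = 21
```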

Page 27: Distributed computing and memory

Using “focus input” for recall of the original patch

Retracing the steps from the NIOL to the Input Layer in this example creates the output shown in the right image.

Original patch | Reconstructed patch from “focus”. The difference is in the binary 8 PC only (1/7, or 14%, difference).

Similar patch

Two patches used for formation of IN & ON memory networks

Page 28: Distributed computing and memory

Conclusion

The EYEYE system uses parallel, distributed, asynchronous calculations that are self-referencing and self-adjusting. A stable state is achieved in a manner similar to chaotic attractors, where the output is a double 3D network of nodes. The EYEYE system does not impose restrictions on the input universe and is self-contained. The EYEYE system can scale up to a complex system that can serve for general artificial intelligence and autonomous learning.

www.GoFundMe.com/Brain-Module