Page 1: SyNAPSE Programme - Artificial Neural Networks


DARPA SyNAPSE Program

FOR PUBLIC RELEASE

Correct at time of print – 01/11/2013

Modified – 10/13/2014

Page 2: SyNAPSE Programme - Artificial Neural Networks


Table of Contents

Cover - page 1
Table of Contents - page 2
Executive Summary - page 3
Latest News - pages 4-5
Background - pages 6-7
Project phases - pages 8-10
Results / progress - pages 11-24
o Cat-scale simulation - pages 11-13
o Criticism of the cat brain simulation claim - page 14
o Neurosynaptic core - pages 15-16
o Implementation of olfactory bulb glomerular-layer computations - page 17
o IBM Brain Wall - page 18
o Memristor chip - pages 19-20
o Neuromorphic architecture - page 21
o TrueNorth and Compass - pages 22-23
o Multi-core neurosynaptic chip - page 24
Videos - page 25
Collaborators - pages 26-27
Funding - page 28
Timeline - page 29
People involved - pages 30-32
Science papers - page 33
Weblinks - page 33

Page 3: SyNAPSE Programme - Artificial Neural Networks


Executive Summary

The Brain Wall - a neural network visualisation tool built by SyNAPSE researchers at IBM

SyNAPSE is a DARPA-funded program to develop electronic neuromorphic machine technology that scales to biological levels. More simply stated, it is an attempt to build a new kind of computer with similar form and function to the mammalian brain. Such artificial brains would be used to build robots whose intelligence matches that of mice and cats.

SyNAPSE is a backronym standing for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. It started in 2008 and as of January 2013 has received $102.6 million in funding. It is scheduled to run until around 2016. The project is primarily contracted to IBM and HRL, who in turn subcontract parts of the research to various US universities.

The ultimate aim is to build an electronic microprocessor system that matches a mammalian brain in function, size, and power consumption. It should recreate 10 billion neurons and 100 trillion synapses, consume one kilowatt (about the same as a small electric heater), and occupy less than two liters of space.
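As a rough consistency check (my own back-of-the-envelope figure, not stated in the source), the one-kilowatt budget lines up with the per-operation energy target quoted later in the Phase 0 specification, assuming an average synaptic event rate of around 10 Hz:

```python
# Back-of-the-envelope check of the 1 kW target (illustrative assumptions:
# ~10 Hz average event rate per synapse, ~1 pJ per synaptic operation,
# the latter being the Phase 0 hardware target).
synapses = 1e14           # 100 trillion synapses
avg_rate_hz = 10          # assumed average synaptic event rate
energy_per_event = 1e-12  # joules per synaptic operation

power_watts = synapses * avg_rate_hz * energy_per_event
print(power_watts)        # 1000.0 W, i.e. about one kilowatt
```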

Page 4: SyNAPSE Programme - Artificial Neural Networks


Latest news

As of January 2013 the program is progressing through phase 2, the third of its five phases. This involves, among other things, designing a multi-chip system capable of emulating 1 million neurons and 1 billion synapses. The initial program requirements were that no phase should last longer than 18 months. This means the million-neuron design should be complete by February 2013. Construction of the system will come in phase 3, to be completed around August 2014.

Early 2013 - Expected announcement of multi-core neurosynaptic processors, each containing 1 million neurons (256 neurons per core, ~4,000 cores per processor).

November 15, 2012 - Paper published describing the TrueNorth architecture and the Compass simulator used to simulate 530 billion neurons.

June 6, 2012 - Paper published describing how the SyNAPSE project's 256-neuron neuromorphic chip was used to capture essential functional properties of olfactory bulb glomerular-layer computations.

May 30, 2012 - New video tour of IBM's Brain Lab given by project manager Dharmendra Modha.

Page 5: SyNAPSE Programme - Artificial Neural Networks


May 15, 2012 - Paper published describing a minimal framework neuromorphic architecture for object recognition and motion anticipation. The architecture contains 766 spiking artificial neurons which could be deployed on IBM's neurosynaptic core. Developed by the DARPA SyNAPSE researchers at the University of Wisconsin-Madison. See also the summary below.

Mar 23, 2012 - HRL Labs demonstrate the first functioning memristor array stacked on a conventional CMOS semiconductor circuit. See the press release and the full science paper. See also the summary below.

Aug 31, 2011 - A third tranche of funding, worth $21m, was received for phase 2. Originally one of the main aims of this phase was to demonstrate a simulated neural system of one million neurons performing at mouse level in the virtual environment.

Aug 18, 2011 - A neuromorphic chip was unveiled containing 256 neurons and ~250,000 synapses. It could learn to recognise handwritten digits and play a game of pong.

Nov 14, 2009 - Cat-scale neocortex successfully simulated at ~643 x slower than real time using a Blue Gene/P supercomputer at the Lawrence Livermore National Laboratory. Science paper: The cat is out of the bag. See also the summary below.

Page 6: SyNAPSE Programme - Artificial Neural Networks


Background

SyNAPSE program logo

DARPA logo

The following text is taken from the Broad Agency Announcement (BAA) published by DARPA in April 2008 (see the original document):

Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today. Owing both to limitations in hardware and architecture, these machines are of limited utility in complex, real-world environments, which demand an intelligence that has not yet been captured in an algorithmic-computational paradigm. The SyNAPSE program seeks to break the programmable machine paradigm and define a new path forward for creating useful, intelligent machines.

The vision for the DARPA SyNAPSE program is the enabling of electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring human-derived algorithms to both describe and process information from their environment. In contrast, biological neural systems autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications - but useful and practical implementations do not yet exist.

Page 7: SyNAPSE Programme - Artificial Neural Networks


The key to achieving the vision of the SyNAPSE program will be an unprecedented multidisciplinary approach that can coordinate aggressive technology development activities in the following areas: 1) hardware; 2) architecture; 3) simulation; and 4) environment.

Hardware - implementation will likely include CMOS devices, novel synaptic components, and combinations of hard-wired and programmable/virtual connectivity. These will support critical information processing techniques observed in biological systems, such as spike encoding and spike-timing-dependent plasticity.

Architecture - will support critical structures and functions observed in biological systems such as connectivity, hierarchical organization, core component circuitry, competitive self-organization, and modulatory/reinforcement systems. As in biological systems, processing will necessarily be maximally distributed, nonlinear, and inherently noise- and defect-tolerant.

Simulation - large-scale digital simulations of circuits and systems will be used to prove component and whole system functionality and to inform overall system development in advance of neuromorphic hardware implementation.

Environment - evolving, virtual platforms for the training, evaluation and benchmarking of intelligent machines in various aspects of perception, cognition, and response.

Realizing this ambitious goal will require the collaboration of numerous technical disciplines such as computational neuroscience, artificial neural networks, large-scale computation, neuromorphic VLSI, information science, cognitive science, materials science, unconventional nanometer-scale electronics, and CMOS design and fabrication.

Page 8: SyNAPSE Programme - Artificial Neural Networks


Project phases

No phase should last more than 18 months. The following targets were specified in 2008 before the project started. The targets for each phase may have changed in the meantime depending on the outcome of completed phases.

Phase 0

Feasibility study for nine months. Funding of at least $10.8m. Started November 2008, completed ~August 2009.

Hardware: Demonstrate an electronic synaptic component exhibiting spike-timing-dependent plasticity (STDP) with:

Synaptic density scalable to >10^10 synapses/cm²
Operating speed >10 Hz
Consumes <10^-12 joules per synaptic operation at scale
Dynamic range of synaptic conductance >10
Synaptic conductance increase >1%/pulse for a presynaptic spike applied within 1-80 msec before a postsynaptic spike
Synaptic conductance decrease >1%/pulse for a presynaptic spike applied within 1-80 msec after a postsynaptic spike
0%-0.02% conductance decrease if a presynaptic spike is applied >100 msec before or after a postsynaptic spike
Performance maintained over 3 x 10^8 synaptic operations
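A minimal sketch of how such a pair-based STDP window could be written in software, using the step sizes and timing windows from the targets above (an illustration only, not an IBM or HRL implementation):

```python
def stdp_conductance_change(dt_ms, conductance):
    """Pair-based STDP window loosely matching the Phase 0 targets.

    dt_ms = t_post - t_pre: positive when the presynaptic spike arrives before
    the postsynaptic spike. Returns the conductance change for one spike pair.
    """
    if 1.0 <= dt_ms <= 80.0:        # pre 1-80 ms before post: increase (spec: >1%/pulse)
        return +0.01 * conductance
    if -80.0 <= dt_ms <= -1.0:      # pre 1-80 ms after post: decrease (spec: >1%/pulse)
        return -0.01 * conductance
    return 0.0                      # outside the window: essentially no change

# Example: a presynaptic spike 20 ms before a postsynaptic spike strengthens the synapse.
print(stdp_conductance_change(20.0, conductance=1.0))   # 0.01
print(stdp_conductance_change(-20.0, conductance=1.0))  # -0.01
```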

Architecture: Specify and validate by simulation the function of core microcircuit assemblies using measured synaptic properties. The chosen microcircuits must support the larger system architecture and demonstrate spike time encoding, spike time dependent plasticity, and competitive neural dynamics.

Phase 1

In August 2009 IBM and HRL received $16.1m and $10.7m respectively to carry out phase 1. The phase started ~November 2009 and completed ~July 2011.

Demonstrate all core micro-circuit functions in hardware
Specify a chip fabrication process supporting the architecture with >10^10 synapses/cm², >10^6 neurons/cm²

Page 9: SyNAPSE Programme - Artificial Neural Networks


Demonstrate a neuromorphic design methodology that can specify all the components, subsystems, and connectivity of a complete system
Specify a corresponding electronic implementation of the neuromorphic design methodology supporting >10^14 synapses, >10^10 neurons, mammalian connectivity, <1 kW, <2 L
Simulation: Demonstrate dynamic neural activity, network stability, synaptic plasticity, and self-organization in response to sensory stimulation and system-level modulation/reinforcement in a system of ~10^6 neurons
Environment: Demonstrate virtual Visual Perception, Decision and Planning, and Navigation Environments with a selectable range of complexity corresponding roughly to the capabilities demonstrated across a ~10^4 range in brain size in small-to-medium mammalian species

Phase 2

As of January 2012 this is the current phase. Funding of $17.9m awarded to HRL in July 2011, and $21m awarded to IBM in August 2011.

Chip fabrication of >10^10 synapses/cm², >10^6 neurons/cm²
Design a complete neural system of ~10^10 synapses and ~10^6 neurons for simulation testing
Design a corresponding single-chip neural system of ~10^10 synapses and ~10^6 neurons
Demonstrate a simulated neural system of ~10^6 neurons performing at mouse level in the virtual environment
Expand the Sensory Environment to include training and evaluation of Auditory Perception and Proprioception
Expand the Navigation Environment to include features stressing Competition for Resources and Survival
Demonstrate a selectable range of complexity corresponding roughly to the capabilities demonstrated across a ~10^6 range in brain size in mammalian species

Phase 3

Estimated to begin between late 2012 and late 2013.

Fabricate a single-chip neural system of ~10^6 neurons (1 million) into a fully functioning assembly; show mouse-level performance in a virtual environment
Design a neural system of ~10^12 synapses (1 trillion) and ~10^8 neurons (100 million) for simulation testing
Design a corresponding single-chip neural system of ~10^12 synapses (1 trillion) and ~10^8 neurons (100 million)
Demonstrate a simulated neural system of ~10^8 neurons performing at cat level
Add touch to the sensory environment
Add a symbolic environment

Page 10: SyNAPSE Programme - Artificial Neural Networks


Phase 4

The final deliverable is the fabrication of a multi-chip neural system of ~10^8 neurons (100 million), installed in a robot that performs at cat level. Estimated to begin between late 2013 and late 2015. Estimated completion date: late 2014 to late 2017.

Page 12: SyNAPSE Programme - Artificial Neural Networks


Cortical model used in simulations

IBM developed a massively parallel cortical simulator called C2. It ran on the Blue Gene/P supercomputer named Dawn (pictured right) at Lawrence Livermore National Laboratory. The supercomputer had 147,456 CPUs and 144 terabytes of main memory. The largest cortical simulation consisted of 1.6 billion neurons and 8.87 trillion synapses. This matches the scale of a cat cortex and is 4.5% of the scale of a human cortex. The simulation ran 643 times slower than real time.

The simulations incorporated single-compartment spiking neurons, STDP, and axonal delays. The simulation time step was 0.1 milliseconds.
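For illustration, a single-compartment spiking neuron of the kind referred to here can be stepped in a few lines; the sketch below uses the well-known Izhikevich model with the 0.1 millisecond time step mentioned above (an assumption chosen for illustration, not the exact equations used in C2):

```python
# Illustrative single-compartment spiking neuron (Izhikevich model), stepped at
# the 0.1 ms resolution used in the C2 simulations. Not the C2 source code.
def izhikevich_step(v, u, i_syn, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Advance one neuron by dt milliseconds; returns (v, u, spiked)."""
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_syn)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike threshold: reset membrane and recovery variables
        return c, u + d, True
    return v, u, False

# Drive a single neuron with constant input for 100 ms and count its spikes.
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):          # 1000 steps x 0.1 ms = 100 ms
    v, u, fired = izhikevich_step(v, u, i_syn=10.0)
    spikes += fired
print(spikes)
```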

The architecture and connectivity of the simulated network was biologically inspired (see image right). It included the visual cortex, attendant sections of the thalamus, and the reticular nucleus. Regions of the simulated cortex were constructed from thalamocortical modules. Each module had 10,000 cortical neurons, 334 thalamic neurons, and 130 reticular nucleus neurons. Within each module, cortical neurons were further subdivided into four layers (real mammalian brains have six layers). The ratio of excitatory to inhibitory neurons was also modelled on experimentally observed data. The largest model had 278 x 278 modules making a total of 1.6 billion neurons.

Page 13: SyNAPSE Programme - Artificial Neural Networks


SpikeStream was a framework for supplying sensory stimulus information encoded as spikes. The spikes were encoded to represent geometric visual objects and auditory utterances of the alphabet.

BrainCam was a framework that recorded the firing of all neurons and converted it to a movie for convenient visualisation - similar in concept to an EEG trace. A video (150 MB mpeg) is available showing how a stimulus in the shape of the letters "IBM" propagates. The speed and pattern of propagation matches observations made in animals. The simulations also reproduced alpha waves (8 to 12 Hz) and gamma waves (>30 Hz) as often seen in the mammalian cortex.
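The general idea behind such a recording tool can be sketched briefly (a hypothetical illustration, not IBM's BrainCam code): bin the spike events into time windows and render each window as one frame of a movie.

```python
import numpy as np

def spikes_to_frames(spike_times_ms, neuron_ids, grid_shape, frame_ms=1.0):
    """Bin (time, neuron) spike events into per-frame 2D activity maps.

    A hypothetical BrainCam-style sketch: each frame shows which neurons on a
    2D grid fired during one time window; frames can then be written to video.
    """
    n_frames = int(np.ceil(max(spike_times_ms) / frame_ms)) + 1
    frames = np.zeros((n_frames,) + grid_shape, dtype=np.uint8)
    rows, cols = np.unravel_index(neuron_ids, grid_shape)        # neuron id -> grid position
    frame_idx = (np.asarray(spike_times_ms) / frame_ms).astype(int)
    frames[frame_idx, rows, cols] = 255                          # light up firing neurons
    return frames

# Example: three spikes on a 16x16 grid of neurons.
frames = spikes_to_frames([0.2, 1.4, 1.9], [5, 100, 255], grid_shape=(16, 16))
print(frames.shape)  # (3, 16, 16)
```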

Future plans are to enrich the model with long-distance connectivity between cortical areas, and to increase resolution by reducing the size of each module from 10,000 neurons down to 100 neurons, to match the latest experimental results. It is predicted that a 100% human-scale, real-time simulation would require 4 petabytes of memory and a supercomputer running at >1 exaflops. This should be achieved by 2018 if general advances in supercomputer technology continue at the same rate as they have in recent decades.

Update: C2 was superseded in 2012 by Compass; see details below.

Published paper: Cortical simulations with 10^9 neurons, 10^13 synapses - November 2009

Page 14: SyNAPSE Programme - Artificial Neural Networks


Criticism of the cat brain simulation claim

Shortly after IBM announced their cat-scale brain simulation, Henry Markram of the Blue Brain Project published a very strong criticism of the claim. He called it "a mega public relations stunt - a clear case of scientific deception of the public". As Markram wrote in an open letter, these simulations do not come close to the complexity of an ant brain, let alone that of a cat brain.

Markram's first argument was that although the number of simulated neurons roughly equals that of a cat brain, the model used for each individual neuron was trivially simple. The neurons were modelled as single-compartment "dots" completely lacking in biological realism. Genuine simulation of real neurons requires solving millions of times more equations than were used by IBM. Thus, not even a millionth of a cat brain was simulated.

The second argument was that such large-scale simulations of trivial neurons had already been performed several years earlier. Indeed, Eugene Izhikevich (now CEO of Brain Corporation) carried out a 100-billion-neuron simulation back in 2005. The non-peer-reviewed paper published by IBM was thus nothing new or interesting.

The full letter from Henry Markram: IBM's claim is a hoax

Page 16: SyNAPSE Programme - Artificial Neural Networks


In August 2011 IBM revealed that they had built a digital neurosynaptic core. The microprocessor implements 256 leaky integrate-and-fire neurons in CMOS hardware. The neurons are arranged in a 16x16 array. Each neuron is connected to others by 1,024 synapses, making a total of 262,144 synapses per core.

A 45 nm SOI manufacturing process was used. This was state of the art in retail laptop computers in 2008; the newest laptops ship with 22 nm processors as of August 2012. The entire core has 3.8 million transistors and fits inside 4.2 mm². Each neuron occupies 35 μm x 95 μm; compare this to a real neuron body, which is about 4 to 100 μm in diameter.

The core was mounted on a custom-built printed circuit board and connected to a personal computer via USB. This way it could be interfaced to various virtual and real environments. The core learned to recognise handwritten digits (see video) and could also play a game of pong (see video).

The core was completely deterministic. This is unlike previous analog neuromorphic hardware, which is sensitive to construction variations and ambient temperature. The chip had a ~1 kHz clock, corresponding to a ~1 ms biological time step. Internally a ~1 MHz clock was also used for other processing.

Unlike the traditional von Neumann computer architecture, the computation and memory units of this chip are tightly integrated. This speeds up highly parallel computation as well as reducing the power required. It is theoretically possible to build a large on-chip network of these cores, thus creating an ultra-low power "neural fabric" for a wide array of real-time applications. The ultimate aim is to build a human-scale system with 100 trillion synapses.

Papers:

A digital neurosynaptic core using embedded crossbar memory
A CMOS neuromorphic chip for learning in networks of spiking neurons

Page 17: SyNAPSE Programme - Artificial Neural Networks


Implementation of olfactory bulb glomerular-layer computations

In June 2012 the SyNAPSE team presented a system that used the above-described neuromorphic chip to capture the essential functional properties of the glomerular layer of the mammalian olfactory bulb. The neural circuits configured in the chip reflected connections among mitral cells, periglomerular cells, external tufted cells, and superficial short-axon cells within the olfactory bulb.

The circuits, consuming only 45 pJ of active power per spike with a power supply of 0.85 V, could be used as the first stage of processing in low-power artificial chemical sensing devices.

Paper: Implementation of olfactory bulb glomerular-layer computations in a digital neurosynaptic core - June 2012

Page 18: SyNAPSE Programme - Artificial Neural Networks


The IBM Brain Wall


The so-called "brain wall", pictured here, is a visualisation tool built by IBM at their Almaden research center in California. It allows researchers to see an overview of neuron activation states in a large-scale neural network. Patterns of neural activity can be observed as they move across the network.

The 4x4 array of flat-screen monitors can display 262,144 neurons simultaneously. Each neuron is represented by one grey pixel. Larger networks might be visualised in future by grouping multiple neurons per pixel. The tool can be used to visualise supercomputer simulations as well as activity within a neurosynaptic core.

See a timelapse video of the brain wall construction.

Page 19: SyNAPSE Programme - Artificial Neural Networks


Memristor chip

Memristor crossbar array built by HRL

HRL Labs announced in December 2011 that they had built a memristor array integrated on top of a CMOS chip. This was the first functioning demonstration of such a memristor array.

Due to the high circuit density and low power requirements, memristor technology is considered important for the continuation of Moore's Law. The HRL chip has a multi-bit, fully-addressable memory storage capability with a density of up to 30 Gbits/cm². Such density is unprecedented in microelectronics.

The simultaneous memory storage and logic processing capability of memristors makes them very suitable for neuromorphic computing. The memory and logic units are one and the same, much like the neural circuits of the brain.

HRL's hybrid crossbar/CMOS system can reliably store 1,600-pixel images using a new programming scheme. Ultimately the team plans to scale the chip to support emulation of millions of neurons and billions of synapses. The work to date was jointly funded by the SyNAPSE program and the National Science Foundation (NSF).

Page 20: SyNAPSE Programme - Artificial Neural Networks


In the future it is possible that this memristor technology can be used to implement variants of the neurosynaptic core described above. By using memristors, these cores could be reduced in size and energy consumption, thus making it more practical to build very large arrays of cores with sufficient numbers of neurons to match the human brain.

Press release: Artificial synapses for machines that mimic biological brains
Research paper: Hybrid memristor crossbar-array/CMOS system

Page 21: SyNAPSE Programme - Artificial Neural Networks


Neuromorphic architecture

Minimal framework NN architecture

This neuromorphic architecture (pictured right) contains 766 spiking artificial neurons arranged in layers, much like the hierarchy found in the human brain. Although it uses simple leaky integrate-and-fire (LIF) neurons and simple binary synapses, it is nevertheless capable of robust visual object recognition, motion detection, attention towards important objects, and motor control outputs. This has been demonstrated by testing the network in simulation on a standard computer.

The network utilizes burst-STDP and synaptic homeostatic renormalization - two relatively new ideas in the field of spiking neural networks.
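Burst-STDP is specific to the paper, but the homeostatic renormalization idea is straightforward to illustrate (a generic sketch of the technique, not the paper's exact rule): after plasticity updates, each neuron's incoming weights are rescaled so that their total stays at a fixed budget.

```python
import numpy as np

def homeostatic_renormalize(weights, target_total=1.0):
    """Rescale each neuron's incoming weights so they sum to a fixed budget.

    Generic illustration of synaptic homeostatic renormalization: plasticity can
    strengthen or weaken individual synapses, but the per-neuron total of
    incoming weight (one column per neuron) is pulled back to target_total.
    """
    totals = weights.sum(axis=0, keepdims=True)
    return weights * (target_total / np.maximum(totals, 1e-12))

# Example: 4 input synapses onto 2 neurons, renormalized after a plasticity update.
w = np.array([[0.5, 0.1],
              [0.4, 0.3],
              [0.3, 0.2],
              [0.2, 0.1]])
print(homeostatic_renormalize(w).sum(axis=0))   # [1. 1.]
```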

The architecture has been designed with a view to deploying it on the digital neurosynaptic cores described above. IBM are currently (as of 2012) working on inter-core communication to build a large on-chip network of these cores. Once this hardware is available it can be used to deploy the neural network architecture described here. The 766-neuron circuit is only a "minimum framework" prototype. It is expected to be scaled up to thousands or hundreds of thousands of neurons as the hardware becomes available.

Paper: A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP - May 2012

Page 22: SyNAPSE Programme - Artificial Neural Networks


TrueNorth and Compass

Neural pathways of a macaque monkey brain simulated in the TrueNorth architecture

TrueNorth is a novel modular, scalable, non-von Neumann, ultra-low power, cognitive computing architecture being developed by IBM as part of the SyNAPSE program. It consists of a scalable network of neurosynaptic cores, with each core containing neurons, dendrites, synapses, and axons.

Compass, also developed by IBM, is software that simulates the TrueNorth architecture. It enables testing of the architecture on a mainstream supercomputer before being built directly in specialised neuromorphic hardware. Besides being a multi-threaded, massively-parallel functional simulator, Compass is also a parallel compiler that can map a network of long-distance neural pathways in the macaque monkey brain to TrueNorth.

IBM and LBNL ran Compass on 96 Blue Gene/Q racks of the Lawrence Livermore National Laboratory's Sequoia supercomputer. At the time Sequoia was the world's most powerful supercomputer (TOP500 ranking). The 96 racks comprised 1,572,864 processor cores and 1.5 petabytes of memory. The system was able to simulate the TrueNorth architecture at the scale of 2.084 billion neurosynaptic cores containing 53 x 10^10 neurons and 1.37 x 10^14 synapses. The neurons had an average spiking rate of 8.1 Hz, although they ran 1,542x slower than real time. The system demonstrated near-perfect weak scaling.
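Those figures are mutually consistent (my own arithmetic check, assuming 256 neurons and a 256 x 256 synaptic crossbar per TrueNorth core, which is not stated explicitly here):

```python
# Consistency check of the reported Compass simulation scale (assumes 256
# neurons and a 256 x 256 synaptic crossbar per TrueNorth core).
cores = 2.084e9
neurons = cores * 256           # ~5.3e11, i.e. 53 x 10^10 neurons
synapses = cores * 256 * 256    # ~1.37e14 synapses
print(f"{neurons:.3g} neurons, {synapses:.3g} synapses")
```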

Page 23: SyNAPSE Programme - Artificial Neural Networks


By comparison, the ultimate vision of the DARPA SyNAPSE program is to build a cognitive computing architecture with 10^10 neurons and 10^14 synapses. This approximates the number of neurons and synapses estimated to be present in the human brain.

Note that although the number of neurons and synapses in the TrueNorth model match those of a human brain, the model is not a biologically realistic simulation of the brain. Rather, IBM has simulated a novel modular, scalable, non-von Neumann, ultra-low power, cognitive computing architecture at a scale that is inspired by the number of synapses in the brain. Computation ("neurons"), memory ("synapses"), and communication ("axons", "dendrites") are mathematically abstracted away from biological detail towards the engineering goals of maximizing function (utility, applications), minimizing cost (power, area, delay), and minimizing design complexity of hardware implementation.

Compass has been used to demonstrate numerous applications of the TrueNorth architecture, such as optic flow, attention mechanisms, image and audio classification, multi-modal image-audio classification, character recognition, robotic navigation, and spatio-temporal feature extraction. These applications will be published in the coming months.

Papers:

Compass: A scalable simulator for an architecture for Cognitive Computing - November 2012
10^14 simulated synapses - November 2012

Page 24: SyNAPSE Programme - Artificial Neural Networks


Multi-core neurosynaptic chip

IBM and Cornell University's neuromorphic computing lab are currently (as of January 2013) working on the second generation of neurosynaptic processors. The neurosynaptic cores, like the first generation, will emulate 256 neurons each. Inter-core communication, however, has now been developed, and the new processors are expected to contain around 4,000 cores each. This will make for a total of around 1 million neurons per processor. These new chips are expected to be announced in early 2013, possibly in February or March.

Page 25: SyNAPSE Programme - Artificial Neural Networks


Videos

IBM lab tour by Dean Takahashi of Venture Beat, September 2011:

More videos:

http://www.youtube.com/watch?v=oKvatA-ec4k&feature=player_embedded

Dharmendra Modha talks about IBM's Brain Lab - May 2012
IBM SyNAPSE, overview - August 2011
IBM SyNAPSE, software - August 2011
IBM SyNAPSE, brain vs. computer - April 2011
IBM SyNAPSE, hardware - April 2011
IBM SyNAPSE, circuit architecture - April 2011
Brain Wall construction - June 2011
Dharmendra Modha interview with Robert Scoble - March 2009

Page 26: SyNAPSE Programme - Artificial Neural Networks


Collaborators

The following organisations are collaborating in the DARPA SyNAPSE program. The main two contractors are IBM and HRL. They, in turn, sub-contract to various universities and companies.

DARPA - program managed by Gill Pratt

IBM Research - Cognitive Computing group led by Dharmendra Modha
o Columbia University Medical Center - Theoretical neuroscience research, development of neural network models, led by Stefano Fusi
o Cornell University - Asynchronous VLSI circuit design, the neurosynaptic core, led by Rajit Manohar
o University of California, Merced - Environment research, led by Christopher Kello
o University of Wisconsin-Madison - Simulation, theory of consciousness, computer models, led by Giulio Tononi

HRL Laboratories - Memristor-based processor development led by Narayan Srinivasa
o Boston University: Stephen Grossberg, Gail Carpenter, Yongqiang Cao, Praveen Pilly
o George Mason University: Giorgio Ascoli, Alexei Samsonovich
o Portland State University: Christof Teuscher
o Set Corporation: Chris Long
o Stanford University: Mark Schnitzer
o The Neurosciences Institute: Gerald Edelman, Einar Gall, Jason Fleischer
o University of California-Irvine: Jeff Krichmar
o University of Michigan: Wei Lu

Page 27: SyNAPSE Programme - Artificial Neural Networks


Former collaborators:

HP Labs - participated in phase 0 of the program but was dropped for subsequent phases. Research continues into memristor development, plus the Cog Ex Machina intelligent systems project, which is led by Greg Snider.

Neuromorphics Lab at Boston University - led by Massimiliano Versace. It was sub-contracted by HP during phase 0 and continues to receive funding from HP, but independently of any DARPA support. The project is called MoNETA - an artificial whole-brain system.

Page 28: SyNAPSE Programme - Artificial Neural Networks


Funding

All funding for the SyNAPSE program comes from DARPA. Total funding per fiscal year (FY) is as follows. Note that US government fiscal years begin on October 1 of the previous year, so FY 2013 runs from October 1, 2012 to September 30, 2013. The budgets are published in advance each year in February, so the FY 2014 budget is due to be published in February 2013.

FY 2008 - $0 (project started October 2008, i.e. the start of FY 2009)
FY 2009 - $3,000,000 (PDF, page 24)
FY 2010 - $17,025,000 (PDF, page 19)
FY 2011 - $27,608,000 (PDF, page 15)
FY 2012 - $31,000,000 (PDF, page 4)
FY 2013 - $24,000,000 (PDF, page 4)

Total: $102,633,000
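For reference, the per-year figures above do sum to the stated total:

```python
# The per-fiscal-year figures listed above sum to the stated program total.
fy_funding = {2009: 3_000_000, 2010: 17_025_000, 2011: 27_608_000,
              2012: 31_000_000, 2013: 24_000_000}
print(sum(fy_funding.values()))  # 102633000
```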

From the above funds, awards were made to IBM and HRL as follows:

                          IBM & collaborators   HRL & collaborators   Total (IBM + HRL)
November 2008 (phase 0)   $4,879,333            $5.9 million          $10.8 million
August 2009 (phase 1)     $16.1 million         $10.7 million         $26.8 million
August 2011 (phase 2)     ~$21 million          $17.9 million         $38.9 million
Total                     $42 million           $34.5 million         $76.5 million

An award was also made to HP Labs for phase 0 of the project, but the amount is not known.

Page 29: SyNAPSE Programme - Artificial Neural Networks


Timeline

2007
Apr - Todd Hylton joins DARPA to found the project

2008
Apr - DARPA publishes a solicitation for applications
May - Due date for initial proposals
Oct - Winning contractors announced
Nov - Phase 0 start

2009
Sep - Phase 1 start
Nov - Announcement of cat-scale brain simulation

2010

2011
Aug - Announcement of neuromorphic chip implementation
Sep - Phase 2 start
Dec - Announcement of first memristor chip

2012
Feb - Todd Hylton leaves DARPA, Gill Pratt takes over as program manager
May - Neuromorphic architecture design published
Nov - TrueNorth/Compass simulation of 530 billion neurons announced

2013
Feb - Expected announcement of multi-core neurosynaptic chips (~1 million neurons per chip)
Mar - Phase 3 to begin (estimated date)

2014
Oct - Phase 4 to begin (estimated date)

2015

2016
Program end

Page 30: SyNAPSE Programme - Artificial Neural Networks


People involved

Andrew Cassidy Circuits, IBM Almaden

Arnon Amir Simulation, IBM Almaden

Ashutosh Saxena Environment, Cornell University

Benjamin Parker Circuits, IBM Watson

Bernard Brezzo Circuits, IBM Watson

Charles Alpert Circuits, IBM Austin

Christopher Kello Environment, University of California - Merced

Dan Friedman Circuits, IBM Watson

Daniel Ben-Dayan Rubin Simulation, IBM Almaden

David Woodruff Simulation, IBM Almaden

Davis Barch Simulation, IBM Almaden

Dharmendra Modha IBM Almaden

Fadi Gebara Circuits, IBM Austin

Gill Pratt DARPA, program manager

Giulio Tononi Simulation, University of Wisconsin-Madison

Ivan Vo Circuits, IBM Austin

Jae-Sun Seo Circuits, IBM Watson

Jason Fleischer Neuroscience research, NSI

Page 31: SyNAPSE Programme - Artificial Neural Networks


John V. Arthur Circuits, IBM Almaden

Jose Cruz-Albrecht HRL researcher

Jose Tierno Circuits, IBM Almaden

Jun Sawada Circuits, IBM Austin

Kavita Prasad Circuits, IBM Almaden

Ken Clarkson Simulation, IBM Almaden

Leland Chang Circuits, IBM Watson

Mark Ferriss Circuits, IBM Watson

Michael Beakes Circuits, IBM Watson

Mohit Kapur Circuits, IBM Watson

Narayan Srinivasa HRL principal researcher, team leader

Pallab Datta Simulation, IBM Almaden

Paul Merolla Circuits, IBM Almaden

Paul Maglio Environment, IBM Almaden

Raghavendra Singh Simulation, IBM India

Rajit Manohar Circuits, Cornell University

Robert Montoye Circuits, IBM Watson

Rodrigo Alvarez-Icaza Circuits, IBM Almaden

Page 32: SyNAPSE Programme - Artificial Neural Networks


Sameh Asaad Circuits, IBM Watson

Scott Hall Circuits, IBM Almaden

Seongwon Kim Circuits, IBM Watson

Shyamal Chandra Environment, IBM Almaden

Stefano Carpin Environment, University of California - Merced

Stefano Fusi Simulation, Columbia University

Steven Esser Simulation, IBM Almaden

Ted Wong Simulation, IBM Almaden

Tom Zimmerman Environment, IBM Almaden

Tuyet Nguyen Circuits, IBM Austin

Vitaly Feldman Simulation, IBM Almaden

Yong Liu Circuits, IBM Watson

SyNAPSE team, November 2008

Page 33: SyNAPSE Programme - Artificial Neural Networks


Science papers

Compass: A scalable simulator for an architecture for Cognitive Computing and 10^14 synapses - November 2012
Building block of a programmable neuromorphic substrate: A digital neurosynaptic core - June 2012
Implementation of olfactory bulb glomerular-layer computations in a digital neurosynaptic core - June 2012
A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP - May 2012
A Digital Neurosynaptic Core Using Event-Driven QDI Circuits (sound localization system) - February 2012
A Digital Neurosynaptic Core using Embedded Crossbar Memory with 45pJ per spike in 45nm - September 2011
A 45nm CMOS Neuromorphic Chip with a Scalable Architecture for Learning in Networks of Spiking Neurons - September 2011
The Cat is Out of the Bag: Cortical Simulations with 10^9 Neurons, 10^13 Synapses - November 2009

Weblinks

darpa.mil/synapse - DARPA project homepage
ibm.com/synapse - IBM homepage for the project
hrl.com/cnes - HRL group homepage
vlsi.cornell.edu/bio - Biologically-inspired cognitive computing project at Cornell, see also Nabil Imam's homepage
wikipedia.org/SyNAPSE - Wikipedia article
venturebeat.com/ibm-brain-chips - article and video tour of the IBM lab