Transcript
Page 1: Abhinav

A POWERPOINT PRESENTATION

ON ARTIFICIAL INTELLIGENCE

Submitted by:

ABHINAV KUMAR

4th Year EIC

Page 2: Abhinav

CONTENTS

• ARTIFICIAL INTELLIGENCE
• ARTIFICIAL NEURAL NETWORK IN THE MEDICAL FIELD
• LEARNING ALGORITHMS
• CHARACTERISTICS OF AI & ANN

Page 3: Abhinav

ARTIFICIAL INTELLIGENCE

The term A.I. belongs to fifth-generation computer systems, in which the system works in the same manner as a human being. In another sense, we can term it as the study and design of intelligent agents. In simple words, A.I. is the science and engineering of making intelligent machines.

Page 4: Abhinav

HISTORY OF AI

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.

The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English.

Page 5: Abhinav

By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".

Page 6: Abhinav

In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off all undirected, exploratory research in AI. The next few years, when funding for projects was hard to find, would later be called an "AI winter".

Page 7: Abhinav

The most difficult problems in knowledge representation are:

• Default reasoning
• The qualification problem

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.

Page 8: Abhinav

Natural language processing gives machines the ability to read and understand the languages that humans speak.

Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.

Page 9: Abhinav

Emotion and social skills play two roles for an intelligent agent.

• First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.)

• Also, for good human-computer interaction, an intelligent machine also needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

• Example is Kismet, a robot with rudimentary social skills

Page 10: Abhinav

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative).

TOPIO, a robot that can play table tennis, developed by TOSY.

Page 11: Abhinav

Artificial neurons

Neurons work by processing information. They receive and provide information in form of spikes.

The McCulloch-Pitts model

[Diagram: inputs x1, …, xn with synaptic weights w1, …, wn feeding a single output y.]

z = Σ_{i=1..n} w_i x_i,   y = H(z)

where H is a threshold (step) activation function.

Page 12: Abhinav

Artificial neurons

The McCulloch-Pitts model:

• spikes are interpreted as spike rates;

• synaptic strengths are translated as synaptic weights;

• excitation means a positive product between the incoming spike rate and the corresponding synaptic weight;

• inhibition means a negative product between the incoming spike rate and the corresponding synaptic weight.
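The weighted sum and threshold described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the presentation; the function names and the AND-gate example are assumptions:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts neuron: weighted sum of spike rates, hard threshold.

    Positive weights model excitation, negative weights inhibition.
    """
    z = sum(w * x for w, x in zip(weights, inputs))
    return 1 if z >= threshold else 0  # step function H(z - threshold)

# Illustrative use: an AND gate built from two excitatory inputs.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # fires: 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # silent: 0
```

Note that an inhibitory input (negative weight) can veto the output even when the excitatory inputs are active.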

Page 13: Abhinav

Summary

• Artificial neural networks are inspired by the learning processes that take place in biological systems.

• Artificial neurons and neural networks try to imitate the working mechanisms of their biological counterparts.

• Learning can be perceived as an optimisation process.

• Biological neural learning happens by the modification of the synaptic strength. Artificial neural networks learn in the same way.

• The synapse strength modification rules for artificial neural networks can be derived by applying mathematical optimisation methods.

Page 14: Abhinav

Summary

• Learning tasks of artificial neural networks can be reformulated as function approximation tasks.

• Neural networks can be considered as nonlinear function approximating tools (i.e., linear combinations of nonlinear basis functions), where the parameters of the networks should be found by applying optimisation methods.

• The optimisation is done with respect to the approximation error measure.

• In general it is enough to have a single hidden layer neural network to learn the approximation of a nonlinear function. In such cases general optimisation can be applied to find the change rules for the synaptic weights.
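The view of a single-hidden-layer network as a linear combination of nonlinear basis functions can be illustrated with a short NumPy sketch: random tanh hidden units act as the basis, and only the output weights are optimised, by least squares on the approximation error. The target function and the choice of 50 hidden units are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonlinear target function to approximate on [-3, 3].
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

# Hidden layer: 50 tanh basis functions with fixed random weights/biases.
H = 50
W = rng.normal(size=(1, H))
b = rng.normal(size=H)
Phi = np.tanh(x @ W + b)  # hidden activations = nonlinear basis functions

# Output layer: linear combination of the basis functions; weights chosen
# by least squares, i.e. optimisation w.r.t. the squared approximation error.
v, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ v
print("max approximation error:", float(np.abs(y_hat - y).max()))
```

Fixing the hidden weights keeps the optimisation linear; full network training would also adjust W and b with gradient methods.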

Page 15: Abhinav

LEARNING ALGORITHMS

Learning is acquiring new, or modifying existing, knowledge, behaviors, skills, values, or preferences and may involve synthesizing different types of information. The ability to learn is possessed by humans, animals and some machines. Progress over time tends to follow learning curves.

Human learning may occur as part of education, personal development, schooling, or training. There is evidence for human behavioral learning prenatally, in which habituation has been observed as early as 32 weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for learning and memory to occur very early in development.

Page 16: Abhinav

Supervised Learning

• It is based on a labeled training set.

• The class of each piece of data in the training set is known.

• Class labels are pre-determined and provided in the training phase.

[Diagram: training samples, each labeled Class A or Class B.]
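A minimal supervised-learning sketch in Python: a perceptron trained on a labelled set in which every example's class (A = +1, B = -1) is known in advance. The data points and names are illustrative assumptions, not material from the slides:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from a labelled training set (supervised learning)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), t in zip(samples, labels):
            y = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1
            if y != t:  # update weights only on misclassified examples
                w[0] += lr * t * x1
                w[1] += lr * t * x2
                b += lr * t
    return w, b

# Class A (+1) clusters near (1, 1); class B (-1) near (-1, -1).
samples = [(1, 1), (1.2, 0.8), (-1, -1), (-0.8, -1.1)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(samples, labels)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

print(predict(2, 2), predict(-2, -2))  # new points assigned to A and B
```

Because the labels are provided up front, the error signal for each update is simply "predicted class versus known class".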

Page 17: Abhinav

Unsupervised Learning

• Input: a set of patterns P from an n-dimensional space S, but little/no information about their classification, evaluation, interesting features, etc. It must learn these by itself! :)

• Tasks:
  – Clustering: group patterns based on similarity.
  – Vector quantization: fully divide up S into a small set of regions (defined by codebook vectors) that also helps cluster P.
  – Feature extraction: reduce the dimensionality of S by removing unimportant features (i.e. those that do not help in clustering P).
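The clustering and vector-quantization tasks can be sketched with k-means, whose centroids play the role of codebook vectors. This pure-Python example, with four illustrative 2-D points, is an assumption, not code from the slides:

```python
def kmeans(points, k, iters=10):
    """Group patterns by similarity; centroids act as codebook vectors."""
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(vals) / len(vals) for vals in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of 2-D patterns, no labels given.
points = [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9)]
centroids, clusters = kmeans(points, k=2)
print(centroids)  # one codebook vector per discovered group
```

No class labels are supplied anywhere; the structure is discovered from the patterns alone.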

Page 18: Abhinav

Reinforcement Learning

• Mainly based on "Reinforcement Learning: An Introduction" by Richard Sutton and Andrew Barto.

Page 19: Abhinav

Learning from Experience Plays a Role in …

[Diagram: reinforcement learning (RL) at the intersection of psychology, artificial intelligence, control theory and operations research, artificial neural networks, and neuroscience.]

Page 20: Abhinav

Multilayer Perceptrons: Architecture

[Diagram: input layer → hidden layers → output layer.]

Page 21: Abhinav


Backpropagation Algorithm

• Two phases of computation:

– Forward pass: run the NN and compute the error for each neuron of the output layer.

– Backward pass: start at the output layer, and pass the errors backwards through the network, layer by layer, by recursively computing the local gradient of each neuron.
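The two phases can be made concrete on a tiny 2-2-1 sigmoid network. The weights, input, and target below are illustrative values (not from the presentation); the final lines check the backpropagated gradient against a numerical derivative:

```python
import math

def sig(z):
    return 1 / (1 + math.exp(-z))

# Illustrative fixed weights for a 2-2-1 network: each hidden row is
# [w_x1, w_x2, bias]; the output unit is [v_h1, v_h2, bias].
w1 = [[0.1, 0.4, 0.0], [-0.3, 0.2, 0.1]]
w2 = [0.5, -0.6, 0.2]
x, t = (1.0, 0.5), 1.0  # one training example and its target

def forward(w1, w2):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    y = sig(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, y

# Forward pass: run the network and get the output error E = (y - t)^2 / 2.
h, y = forward(w1, w2)

# Backward pass: local gradients, starting at the output layer.
d_out = (y - t) * y * (1 - y)                      # dE/dz at the output
grad_w2 = [d_out * h[0], d_out * h[1], d_out]
d_hid = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
grad_w1 = [[d_hid[j] * x[0], d_hid[j] * x[1], d_hid[j]] for j in range(2)]

# Sanity check: compare one backpropagated gradient with a numerical one.
eps = 1e-6
w1_pert = [row[:] for row in w1]
w1_pert[0][0] += eps
_, y_pert = forward(w1_pert, w2)
numeric = ((y_pert - t) ** 2 / 2 - (y - t) ** 2 / 2) / eps
print(abs(numeric - grad_w1[0][0]) < 1e-6)  # gradients agree
```

A training step would then subtract the learning rate times each gradient from the corresponding weight.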

Page 22: Abhinav

Delta Rule

• The delta rule functions more like nonlinear parameter fitting: the goal is to exactly reproduce the output, Y, by incremental methods.

• Thus, the weights will not grow without bound unless the learning rate is too high.

• The learning rate is determined by the modeler; it constrains the size of the weight changes.
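A sketch of the delta rule for a single linear unit, with illustrative data (the generating weights [2, -1], the learning rate, and all names are assumptions): each incremental update nudges the weights toward reproducing the target output, and the learning rate eta bounds the size of every weight change:

```python
def delta_rule(samples, targets, eta=0.1, epochs=100):
    """Incrementally fit weights so the unit reproduces the target output."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), t in zip(samples, targets):
            y = w[0] * x1 + w[1] * x2      # current output of the unit
            err = t - y                    # distance from the target output
            w[0] += eta * err * x1         # delta rule: change proportional
            w[1] += eta * err * x2         # to error, input, and eta
    return w

# Targets generated by t = 2*x1 - x2; the rule should recover w ≈ [2, -1].
samples = [(1, 0), (0, 1), (1, 1), (2, 1)]
targets = [2 * x1 - x2 for x1, x2 in samples]
print(delta_rule(samples, targets))
```

If eta is set too high (roughly above 2 divided by the largest squared input norm), the updates overshoot and the weights diverge, which is the constraint the slide alludes to.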

Page 23: Abhinav

CHARACTERISTICS OF AI & ANN

In the field of robotic minimally invasive surgery, it is apparent that advances in technology have conferred increased precision during—and the decreased risk of complications after—a wide range of surgical procedures. Patients who are operated on by robots controlled by surgeons enjoy shorter recovery times and fewer visible post-operative scars than those subject to traditional open-surgical procedures.

With advances in robotic technology in the operating room, though, the surgeon's hand is no longer the driving force behind the scalpel.

Page 24: Abhinav

THE POTENTIAL AND THE PROCESS

• In recent years, a great deal of effort has been devoted to development of methodologies for cancer therapy.

• Among them, heavy-ion therapy has attracted particular attention.

• We fixed an ultrasonic diagnosis device on the top of a robot arm and tracked the cancer, which moved on the monitor with respiration.

• By using the neural network, it became possible to track the cancer automatically.

Page 25: Abhinav

THE BASIC WORKING CONDITION FOR CANCER DIAGNOSIS

• Neural networks provide a unique computing architecture whose potential has only begun to be tapped. Inspired by the structure of the brain, these new computing architectures are used to address problems that are intractable or cumbersome with traditional methods.

• Artificial neural networks take their name from the networks of nerve cells in the brain.

Page 26: Abhinav

• In a neural network each neuron is linked to many of its neighbors (typically hundreds or thousands) so that there are many more interconnects than neurons.

• The power of the neural network lies in the tremendous number of interconnections.

• The neuron performs a weighted sum on the inputs and uses a nonlinear threshold function to compute its output.

• The calculated result is sent along the output connections to the target cell.

Page 27: Abhinav

RESPIRATION

• The respiration information fed into the designed neural network as input is a waveform obtained offline by a strain gauge.

• We fixed the strain gauge around the abdomen; as the sensor expands and contracts with abdominal movement, the changes are taken as respiration information.

• Then, the respiration information (both its amplitude and its differential over a period) is fed into the neural network.

Page 28: Abhinav

ROBOT ARM

• The robot arm used in this study is a multi-joint manipulator with six degrees of freedom.

• The parameters that give the coordinates of the diagnosis-device position are x, y, z and the rotation.

• In the simulation, since we controlled it in the Y-axis direction, the only variable parameter is y'; the other parameters (in this case the X and Z axes) are constant.

Page 29: Abhinav

Robot Arm

Page 30: Abhinav

SIMULATION

• The displacement-prediction network and the inverse-kinematics network were obtained using a neural network simulator developed by the University of Toronto called "Xenon".

• We combined the two networks (the combination network), and its output was given to the robot arm as input.

• Using the trained network and offline respiration data, we controlled the robot arm in one dimension automatically.

Page 31: Abhinav
Page 32: Abhinav