Page 1: Robotics and Neural Networks(23nov)

ROBOTICS AND NEURAL NETWORKS

A SEMINAR REPORT Submitted in partial fulfillment of the requirements

For the degree of Bachelor of Technology

By

SARVESH SINGH (Roll No. U09EE537)

Under the Supervision of

Mrs. KHYATI MISTRY

ELECTRICAL ENGINEERING DEPARTMENT

SARDAR VALLABHBHAI NATIONAL INSTITUTE OF TECHNOLOGY

Surat-395 007, Gujarat, INDIA.


ELECTRICAL ENGINEERING DEPARTMENT

CERTIFICATE

This is to certify that the B. Tech. IV (7th Semester) SEMINAR REPORT entitled “ROBOTICS AND NEURAL NETWORKS”, presented and submitted by SARVESH SINGH, bearing Roll No. U09EE537, is in partial fulfillment of the requirements for the award of the degree of B.Tech. in Electrical Engineering.

He has successfully and satisfactorily completed his Seminar Exam in all respects. We certify that the work is comprehensive, complete, and fit for evaluation.

SEMINAR EXAMINERS:

Examiner Signature with date

Examiner 1 __________________

Examiner 2 __________________

Examiner 3 __________________

DEPARTMENT SEAL


ABSTRACT

This report gives an introduction to robotics: why robotics is important, where robots are needed, and how smart robots can be built.

Artificial intelligence using neural networks is also discussed in this report. Neural networks are a special way of building algorithms, inspired by the structure of the human brain.

Starting with the definition and application areas, the working and fundamental characteristics of robots are discussed. The working of a robot involves a fundamental sequence of steps:

1) Perceiving its environment through sensors.

2) Thinking about the reaction.

3) Acting upon that environment through effectors.


TABLE OF CONTENTS

1 What are robots? 1

2 Why do we need robots? 1

3 How does a robot work? 4

3.1 Perceiving its environment through sensors 4

3.1.1 Properties of environment 5

3.1.2 Robotic sensing 6

3.1.2.1 Proprioceptors 6

3.1.2.2 Exteroceptors 7

3.2 Thinking about the reaction 9

3.2.1 Look up table 9

3.2.2 Simple reflex programs 9

3.2.3 Program that keeps track of the world 10

3.2.4 Goal based programs 10

3.2.5 Utility based programs 10

3.2.6 Neural networks 10

3.2.7 Neural network construction 12

3.2.8 A feed forward network 13

3.3 Acting upon the environment through the effectors 14

3.3.1 Effectors/Actuators 14

3.3.2 Types of actuators 14

3.3.3 Kinematics 14

3.3.4 Actuator Types 15


LIST OF FIGURES

1 A robotically assisted surgical system used for prostatectomies, cardiac valve repair and gynecologic surgical procedures

2

2 NASA robots on Mars 2

3 Articulated welding robots used in a factory 3

4 Gladiator unmanned ground vehicle 3

5 Network with one layer 12

6 A feed forward network 13

7 A manipulator 14

8 A revolute joint 15

9 A prismatic joint 15

10 A spherical joint 15


ROBOTICS AND NEURAL NETWORKS

1) WHAT ARE ROBOTS?

The Robot Institute of America defines a robot as a programmable, multifunction manipulator designed to move material, parts, tools, or specific devices through variable programmed motions for the performance of a variety of tasks.

A robot is simply an active, artificial agent whose environment is the physical world.

2) WHY DO WE NEED ROBOTS? [4]

Outer Space

Manipulator arms controlled by a human are used to unload the docking bay of space shuttles, to launch satellites, or to construct a space station.

The intelligent home

Automated systems can now monitor home security, environmental conditions and energy usage. Doors and windows can be opened automatically, and appliances such as lighting and air conditioning can be preprogrammed to activate. This assists occupants irrespective of their state of mobility.

Exploration

Robots can visit environments that are harmful to humans. An example is monitoring the environment inside a volcano or exploring our deepest oceans. NASA has used robotic probes for planetary exploration since the early sixties.

Military Robots

Airborne robot drones are used for surveillance in today's modern army. In the future, automated aircraft and vehicles could be used to carry fuel and ammunition or clear minefields.

Farms

Automated harvesters can cut and gather crops. Robotic dairies are available, allowing operators to feed and milk their cows remotely.

The Car Industry

Robotic arms that are able to perform multiple tasks are used in the car manufacturing process. They perform tasks such as welding, cutting, lifting, sorting and bending. Similar applications, on a smaller scale, are now being planned for the food processing industry, in particular the trimming, cutting and processing of various meats such as fish, lamb and beef.

Hospitals

Under development is a robotic suit that will enable nurses to lift patients without damaging their backs. Scientists in Japan have developed a power-assisted suit which will give nurses the extra muscle they need to lift their patients - and avoid back injuries.


Fig1 A robotically assisted surgical system used for prostatectomies, cardiac valve repair and gynecologic

surgical procedures

Fig2 NASA robots on Mars


3) HOW DOES A ROBOT WORK?

The working of a robot can be viewed in the following three steps:

1) Perceiving its environment through sensors.

2) Thinking about the reaction.

3) Acting upon that environment through effectors

All three stages involve a lot of detailed study. Here we'll discuss each stage in detail, one by one, after a brief introduction to each.

1) PERCEIVING ITS ENVIRONMENT THROUGH SENSORS:-

Like any living body, a robot needs some kind of stimulus to react to. Robots are programmed to do certain things and to react in a certain manner, exactly as the programmer has specified.

Robots have sensors that receive stimuli from the environment; the sensors act as the robot's input. The robot then reasons about that input and reacts according to its reasoning.

2) THINKING ABOUT THE REACTION:-

Once the robot has received the stimulus from its sensors, it must act according to the knowledge provided by the programmer. There are many ways a robot can think and act accordingly. Here we give a brief description of how a robot can act as a smart robot. A smart robot is one that can learn things on its own and act on that learning. There are many ways of programming a robot to make it smart.

One way to make a robot smart is to use a neural network. A neural network is essentially a style of programming in which we give the program an opportunity to learn. In this style we imitate the way a human brain works, using the concept of neurons and the way each neuron is linked to other neurons.

3) ACTING UPON THE ENVIRONMENT THROUGH THE EFFECTORS

Robots are basically made up of joints, links, and end-effectors.

A robot has some sort of rigid body, with rigid links that can move about. Links meet at joints, which allow motion. For example, on a human the upper arm and forearm are links, and the shoulder and elbow are joints.


An effector is any device that affects the environment, under the control of the robot. To have an impact on the physical world, an effector must be equipped with an actuator that converts software commands into physical motion. The actuators themselves are typically electric motors or hydraulic or pneumatic cylinders.

Acting through the effectors requires the study of:

1) kinematics of the robot,

2) dynamics of the robot,

3) force control,

4) motion planning (related to the intelligence of the robot), and

5) motion control.

DETAILED DESCRIPTION

3.1) PERCEIVING ITS ENVIRONMENT THROUGH SENSORS

Since the robot interacts with its environment, and it does so through sensors, we need to understand both the environment and the sensors.

3.1.1) Properties of environments [2]

1) Accessible vs. inaccessible

If an agent's sensory apparatus gives it access to the complete state of the environment, then we say that the environment is accessible to that agent. An environment is effectively accessible if the sensors detect all aspects that are relevant to the choice of action. An accessible environment is convenient because the agent need not maintain any internal state to keep track of the world.

2) Deterministic vs. nondeterministic

If the next state of the environment is completely determined by the current state and the actions selected by the agents, then we say the environment is deterministic

3) Episodic vs. nonepisodic

In an episodic environment, the agent's experience is divided into "episodes." Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself, because subsequent episodes do not depend on what actions occur in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead

4) Static vs. dynamic.

If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.


5) Discrete vs. continuous

If there are a limited number of distinct, clearly defined percepts and actions we say that the environment is discrete. For example Chess is discrete—there are a fixed number of possible moves on each turn. Taxi driving is continuous—the speed and location of the taxi and the other vehicles sweep through a range of continuous values.

3.1.2) Robotic sensing [3]

Since the “action” capability is physically interacting with the environment, two types of sensors have to be used in any robotic system:

1) “proprioceptors” for the measurement of the robot’s (internal) parameters

2) “exteroceptors” for the measurement of its environmental (external, from the robot point of view) parameters.

Data from multiple sensors may be further fused into a common representational format (world model). Finally, at the perception level, the world model is analyzed to infer the system and environment state, and to assess the consequences of the robotic system’s actions.

3.1.2.1) Proprioceptors

A robot consists of a series of links interconnected by joints. Each joint is driven by an actuator which can change the relative position of the two links connected by that joint. Proprioceptors are sensors measuring both kinematic and dynamic parameters of the robot. The usual kinematic parameters are the joint positions, velocities, and accelerations. Dynamic parameters such as forces, torques and inertia are also important to monitor for the proper control of robotic manipulators.

Kinematic parameters:-

Joint position sensors: These are usually mounted on the motor shaft. Encoders are digital position transducers, the most convenient for computer interfacing. Incremental encoders are relative-position transducers which generate a number of pulses proportional to the traveled rotation angle. They give only the relative position of the arm; after a power failure the result is unreliable, because the record of relative position has been lost.

Absolute shaft encoders are attractive for joint control applications because their position is recovered immediately and they do not accumulate errors as incremental encoders may do.
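The count-to-angle conversion for an incremental encoder can be sketched as follows. This is a minimal illustration; the pulse count, resolution and gear ratio below are invented values, not taken from any particular encoder.

```python
import math

def encoder_angle(pulse_count, pulses_per_rev, gear_ratio=1.0):
    """Joint angle (radians) from an incremental encoder count.

    pulses_per_rev and gear_ratio are properties of the chosen
    encoder and joint drive; the values used below are illustrative.
    """
    shaft_revs = pulse_count / pulses_per_rev
    return (shaft_revs / gear_ratio) * 2.0 * math.pi

# 500 pulses on a 1000-pulse/rev encoder through a 10:1 gearbox
angle = encoder_angle(500, 1000, gear_ratio=10.0)
print(math.degrees(angle))  # ~18 degrees of joint rotation
```

Note that the count here is relative to wherever the shaft was at power-up, which is exactly the weakness described above: the zero reference is lost on power failure.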

Angular velocity sensors: Angular velocity is measured (when not calculated by differentiating joint positions) by tachometer transducers. A tachometer generates a DC voltage proportional to the shaft's rotational speed. Digital tachometers using magnetic pickup sensors are replacing traditional, DC-motor-like tachometers, which are too bulky for robotic applications.


Acceleration sensors: These are based on Newton's second law: they actually measure the force which produces the acceleration of a known mass. Different types of acceleration transducers are stress-strain gage, piezoelectric, capacitive and inductive.

3.1.2.2) Exteroceptors

Exteroceptors can be classified according to their range as follows:

- contact sensors
- proximity ("near to") sensors
- "far away" sensors

1) Contact Sensors

Contact sensors are used to detect positive contact between two mating parts and/or to measure the interaction forces and torques which appear while the robot manipulator conducts part-mating operations.

Force/Torque Sensors

The interaction forces and torques which appear, during mechanical assembly operations, at the robot hand level can be measured by sensors mounted on the joints or on the manipulator wrist. This solution is not too attractive since it needs a conversion of the measured joint torques to equivalent forces and torques at the hand level. The forces and torque measured by a wrist sensor can be converted quite directly at the hand level. Wrist sensors are sensitive, small, compact and not too heavy, which recommends them for force controlled robotic applications.

A wrist force/torque sensor has a radial three- or four-beam mechanical structure. Two strain gages are mounted on each deflection beam. Using a differential wiring of the strain gages, the four-beam sensor produces eight signals proportional to the force components normal to the gage planes.

Tactile Sensing

Tactile sensing is defined as the continuous sensing of variable contact forces over an area within which there is a spatial resolution. Tactile sensors mounted on the fingers of the hand allow the robot to measure contact force profile and slippage, or to grope and identify object shape.

The best known of tactile sensor technologies are: conductive elastomer, strain gage, piezoelectronic, capacitive and optoelectronic. These technologies can be further grouped by their operating principles in two categories: force-sensitive and displacement-sensitive. The force-sensitive sensors (conductive elastomer, strain gage and piezoelectric) measure the contact forces, while the displacement-sensitive (optoelectronic and capacitive) sensors measure the mechanical deformation of an elastic overlay.

Tactile sensing is the result of a complex exploratory perception act with two distinct modes. First, passive sensing, which is produced by the “cutaneous” sensory network, provides information about contact force, contact geometric profile and temperature. Second, active sensing integrates the cutaneous sensory information with “kinesthetic” sensory information (the limb/joint positions and velocities).


2) Proximity Sensors

Proximity sensors detect objects which are near, without touching them. These sensors are used for near-field (object approach or avoidance) robotic operations. Proximity sensors are classified according to their operating principle: inductive, Hall effect, capacitive, ultrasonic and optical.

Inductive sensors are based on the change of inductance due to the presence of metallic objects. Hall effect sensors are based on the relation which exists between the voltage in a semiconductor material and the magnetic field across that material. Inductive and Hall effect sensors detect only the proximity of ferromagnetic objects. Capacitive sensors are potentially capable of detecting the proximity of any type of solid or liquid materials. Ultrasonic and optical sensors are based on the modification of an emitted signal by objects that are in their proximity.

3) “Far Away” Sensing

Two types of “far away” sensors are used in robotics: range sensors and vision.

a) Range sensors

Range sensors measure the distance to objects in their operating area. They are used for robot navigation, obstacle avoidance, or to recover the third dimension for monocular vision. Range sensors are based on one of two principles: time-of-flight and triangulation.

Time-of-flight sensors estimate the range by measuring the time elapsed between the transmission and return of a pulse. Laser range finders and sonar are the best known sensors of this type.

Triangulation sensors measure range by detecting a given point on the object surface from two different points of view at a known distance from each other. Knowing this distance and the two view angles from the respective points to the aimed surface point, a simple geometrical operation yields the range.
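Both principles reduce to a one-line calculation. The sketch below illustrates them; the wave speed, timings and angles are illustrative values, and the triangulation version assumes the two view angles are measured from the baseline, so the law of sines gives the range.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air, approximate (sonar case)

def tof_range(round_trip_time_s, wave_speed=SPEED_OF_SOUND):
    # The pulse travels to the object and back, so halve the path.
    return wave_speed * round_trip_time_s / 2.0

def triangulation_range(baseline_m, angle_a_rad, angle_b_rad):
    # Two viewpoints A and B a known baseline apart; angles are
    # measured from the baseline to the target point. The side from
    # A to the target is opposite the angle at B (law of sines).
    angle_at_target = math.pi - angle_a_rad - angle_b_rad
    return baseline_m * math.sin(angle_b_rad) / math.sin(angle_at_target)

print(tof_range(0.01))  # sonar echo after 10 ms -> ~1.715 m
```

For a laser range finder the same time-of-flight formula applies with the speed of light in place of the speed of sound.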

b) Vision

Robot vision is a complex sensing process. It involves extracting, characterizing and interpreting information from images in order to identify or describe objects in the environment. A vision sensor (camera) converts the visual information to electrical signals, which are then sampled and quantized by special computer interface electronics, yielding a digital image.

The digital image produced by a vision sensor is a mere numerical array which has to be further processed until an explicit and meaningful description of the visualized objects finally results. Digital image processing comprises several steps: preprocessing, segmentation, description, recognition and interpretation. Preprocessing techniques usually deal with noise reduction and detail enhancement. Segmentation algorithms, like edge detection or region growing, are used to extract the objects from the scene. These objects are then described by measuring some (preferably invariant) features of interest. Recognition is an operation which classifies the objects in the feature space. Interpretation is the operation that assigns a meaning to the ensemble of recognized objects.


3.2) THINKING ABOUT THE REACTION [2]

Having seen how the stimulus is received, we now look at how the robot reacts to it.

A robot controller can have a multi-level hierarchical architecture [3]:

1. Artificial intelligence level, where the program will accept a command such as ‘Pick up the bearing’ and decompose it into a sequence of lower-level commands based on a strategic model of the task.

2. Control mode level where the motions of the system are modelled, including the dynamic interactions between the different mechanisms, trajectories planned, and grasp points selected. From this model a control strategy is formulated, and control commands issued to the next lower level.

3. Servo system level where actuators control the mechanism parameters using feedback of internal sensory data, and paths are modified on the basis of external sensory data. Also failure detection and correction mechanisms are implemented at this level.

In this section we are basically going to talk about artificial intelligence level control.

There are plenty of ways to program a robot to work. Here we'll discuss the programming styles and talk about smart robots. Smart robots are those which have their own thinking ability.

3.2.1) LOOK-UP TABLES

This is the simplest way of programming. In this method we simply form a table in which each input maps to an output. When an input arrives, the program logic looks up the corresponding output in the table.

This method is not suited to cases with large amounts of data; with it we cannot really speak of artificial intelligence, and it has many other demerits. It is usable only when a small table suffices.

e.g. The visual input from a single camera comes in at the rate of 50 megabytes per second (25 frames per second, 1000 x 1000 pixels with 8 bits of color and 8 bits of intensity information). So the look-up table for an hour of input would need 2^(60x60x50M) entries, which is why this method is not suitable.
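For a tiny percept space, a look-up table agent is just a dictionary. In this sketch the percepts, actions and the fallback behaviour are all invented for illustration.

```python
# A percept-to-action look-up table, workable only when the percept
# space is tiny. Each key is a (what, how_close) percept tuple.
ACTION_TABLE = {
    ("obstacle", "near"): "stop",
    ("obstacle", "far"): "slow_down",
    ("clear", "near"): "go",
    ("clear", "far"): "go",
}

def table_driven_agent(percept):
    # Percepts missing from the table fall back to a safe default.
    return ACTION_TABLE.get(percept, "stop")

print(table_driven_agent(("obstacle", "near")))  # stop
```

The camera example above shows exactly why this cannot scale: the table grows with the number of possible percepts, not with the complexity of the desired behaviour.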

3.2.2) SIMPLE REFLEX PROGRAMS

Such programs simply use if-else logic: they make easy decisions based on the input, reacting with yes-or-no logic. The program does not really think; it reacts like a small child who has only just learnt the words "yes" and "no".

e.g. Suppose we have a robot driver, i.e. one driving a car. When it sees some condition, such as the red brake light of the car in front, it simply applies the brake:


if car-in-front-is-braking then initiate-braking

The general programming style for this case:

function SIMPLE-REFLEX-PROGRAM(percept) returns action
    static: rules, a set of condition-action rules

    state ← INTERPRET-INPUT(percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    return action

This general program can be explained as follows: it receives the percept from the sensors and interprets it via INTERPRET-INPUT(percept); the resulting state is checked against the (if-else type) rules; the matching rule yields the output, i.e. the program now knows what to do next, which is stored in the action variable and returned to the main program.
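The pseudocode above can be sketched directly in Python using the driver-robot example. The rule conditions and action names are invented for illustration; rules pair a condition test with an action, and the first matching rule wins.

```python
# A minimal sketch of a simple reflex program. Each rule is a
# (condition, action) pair; conditions test the interpreted state.
RULES = [
    (lambda state: state.get("car_in_front_is_braking"), "initiate_braking"),
    (lambda state: True, "keep_driving"),  # default rule, always matches
]

def interpret_input(percept):
    # In a real robot this would process camera frames; here the
    # percept is already a simple state dictionary.
    return percept

def simple_reflex_program(percept):
    state = interpret_input(percept)
    for condition, action in RULES:  # RULE-MATCH
        if condition(state):
            return action            # RULE-ACTION

print(simple_reflex_program({"car_in_front_is_braking": True}))  # initiate_braking
```

Note the agent keeps no memory between calls: each percept is handled in isolation, which is exactly the limitation the next style addresses.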

3.2.3) PROGRAMS THAT KEEP TRACK OF THE WORLD

In this programming style, the program checks a lot of data simultaneously and reacts intelligently according to the various data received. The algorithm is like the reflex agent, but has a state, a collection of data, associated with it.

e.g. Suppose our robot driver sees the car in front braking (red rear light). It is no longer certain to apply the brake; instead it makes an intelligent decision based on other data as well, such as how far away the car in front is and how hard it is decelerating.

Here the program makes an intelligent decision based on the various parameters input to it. This is the basic difference from the simple reflex type.

function REFLEX-AGENT-WITH-STATE(percept) returns action
    static: state, a description of the current world state
            rules, a set of condition-action rules

    state ← UPDATE-STATE(state, percept)
    rule ← RULE-MATCH(state, rules)
    action ← RULE-ACTION[rule]
    state ← UPDATE-STATE(state, action)
    return action

The agent has this state associated with it, which can be treated as a collection of data from the environment; it acts by considering each and every parameter associated with the state.
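The stateful version can be sketched as a small class that remembers what it has seen across calls and combines several parameters, as in the driving example. The distance and deceleration thresholds are invented for illustration.

```python
# A sketch of a reflex agent with state: it keeps an internal world
# state and decides using more than one parameter at once.
class ReflexAgentWithState:
    def __init__(self):
        self.state = {"gap_m": None, "decel_mps2": None}

    def update_state(self, percept):
        # Merge the latest sensor readings into the world state.
        self.state.update(percept)

    def act(self, percept):
        self.update_state(percept)
        gap = self.state["gap_m"]
        decel = self.state["decel_mps2"]
        # Brake only when the car ahead is both close and slowing hard.
        if gap is not None and decel is not None and gap < 20.0 and decel > 3.0:
            return "initiate_braking"
        return "keep_driving"

agent = ReflexAgentWithState()
print(agent.act({"gap_m": 15.0, "decel_mps2": 4.5}))  # initiate_braking
print(agent.act({"gap_m": 80.0, "decel_mps2": 4.5}))  # keep_driving
```

Unlike the simple reflex program, the same percept ("car in front is braking") can now lead to different actions depending on the rest of the state.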

3.2.4) GOAL BASED PROGRAMS

In such programs the robot knows about the goal, and depending upon the goal it makes intelligent decisions. The program is concerned with the goal: it thinks about the goal, takes the environment into account, and acts accordingly.


e.g. Suppose the car is parked somewhere and our robot driver has to decide where to go. It can make an intelligent decision only if it knows the goal; otherwise it will do whatever it was programmed to do after the start, i.e. if it was programmed to turn right at the start, it will turn right, no matter whether turning right moves it away from the goal or towards it.

Now, reaching the goal from a particular position can still be done in many possible ways. How will our robot make an intelligent decision and select the best way? The answer lies in search algorithms and planning algorithms.

In a search algorithm our robot considers all possible ways and maximizes the parameters concerned with the performance of the system. Our driver robot would select the shortest path, or perhaps consider further parameters such as fuel efficiency, since the shortest path may have hurdles that prevent the car from being driven fast.
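A minimal search sketch for the goal-based driver: breadth-first search over an invented road map, which finds the route with the fewest road segments. A real planner would weight edges by distance, time or fuel.

```python
from collections import deque

# Invented adjacency list of road junctions for the driver robot.
ROADS = {
    "garage": ["junction_a", "junction_b"],
    "junction_a": ["garage", "market"],
    "junction_b": ["garage", "market", "office"],
    "market": ["junction_a", "junction_b", "office"],
    "office": ["junction_b", "market"],
}

def shortest_path(start, goal):
    # Breadth-first search: expand paths in order of length, so the
    # first path that reaches the goal has the fewest segments.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(shortest_path("garage", "office"))  # ['garage', 'junction_b', 'office']
```

Swapping the queue for a priority queue ordered by accumulated cost turns this into uniform-cost search, which is how the fuel-efficiency consideration above would be handled.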

3.2.5) UTILITY BASED PROGRAMMING

In this style of programming we associate a degree of happiness, i.e. a utility, with the accomplished task. Here we also consider whether the task was performed in the best possible way or not, and this makes a robot smarter.

3.2.6) NEURAL NETWORKS [1]

Logically, neural networks are not much different from the methods discussed above, but the programming style is quite different. We are still concerned with all of the things stated above; what changes is that we imitate our own brain's networks, making our programming easier, more effective and learnable.

The human brain uses a web of interconnected processing elements called neurons to process information. Each neuron is autonomous and independent; it does its work asynchronously, that is, without any synchronization to other events taking place.

A neural network is a computational structure inspired by the study of biological neural processing. There are many different types of neural networks, from relatively simple to very complex, just as there are many theories on how biological neural processing works.

A layered feed-forward neural network has layers, or subgroups of processing elements. A layer of processing elements makes independent computations on data that it receives and passes the results to another layer. The next layer may in turn make its independent computations and pass on the results to yet another layer. Finally, a subgroup of one or more processing elements determines the output from the network. Each processing element makes its computation based upon a weighted sum of its inputs. The first layer is the input layer and the last the output layer. The layers that are placed between the first and the last layers are the hidden layers. The processing elements are seen as units that are similar to the neurons in a human brain, and hence, they are referred to as cells, neuromimes, or artificial neurons. A threshold function is sometimes used to qualify the output of a neuron in the output layer. Even though our subject matter deals with artificial neurons, we will simply refer to them as neurons. Synapses between neurons are referred to as connections, which are represented by edges of a directed graph in which the nodes are the artificial neurons.


Fig5 network with one layer

A layered feed-forward neural network. The circular nodes represent neurons. Here there are three layers: an input layer, a hidden layer, and an output layer. The directed graph mentioned above shows the connections from nodes in a given layer to nodes in other layers.

The key concept of a neural network is the weights, which enable a program to be trained.

WEIGHTS

The weights used on the connections between different layers have much significance in the working of the neural network and the characterization of a network. The following actions are possible in a neural network:

1. Start with one set of weights and run the network. (NO TRAINING)

2. Start with one set of weights, run the network, and modify some or all the weights, and run the network again with the new set of weights. Repeat this process until some predetermined goal is met. (TRAINING)

TRAINING

Since the output(s) may not be what is expected, the weights may need to be altered. Some rule then needs to be used to determine how to alter the weights. There should also be a criterion to specify when the process of successive modification of weights ceases. This process of changing, or rather updating, the weights is called training. A network in which learning is employed is said to be subjected to training. Training is an external process or regimen; learning is the desired process that takes place internal to the network.
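The run-adjust-repeat loop described above can be sketched with a single threshold neuron learning the logical AND function, using a perceptron-style update rule. This is only one possible training rule, chosen for brevity; the learning rate and epoch limit are illustrative.

```python
# Training sketch: run the network, compare output with the target,
# nudge the weights, repeat until a stopping criterion is met.
def output(weights, bias, inputs):
    # Threshold neuron: weighted sum of inputs, then a hard threshold.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):  # predetermined stopping criterion
    errors = 0
    for inputs, target in samples:
        err = target - output(weights, bias, inputs)
        if err:
            errors += 1
            # Perceptron rule: move each weight toward reducing the error.
            weights = [w + lr * err * x for w, x in zip(weights, inputs)]
            bias += lr * err
    if errors == 0:  # all samples correct: training ceases
        break

print([output(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

The two cases listed above correspond to skipping the loop entirely (no training) and running it to convergence (training).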

3.2.7) NEURAL NETWORK CONSTRUCTION

There are three aspects to the construction of a neural network:

1. Structure—the architecture and topology of the neural network

2. Encoding—the method of changing weights

3. Recall—the method and capacity to retrieve information

1) STRUCTURE relates to how many layers the network should contain, and what their functions are, such as for input, for output, or for feature extraction. Structure also encompasses how interconnections are made between neurons in the network, and what their functions are.


2) ENCODING refers to the paradigm used for the determination of and changing of weights on the connections between neurons. In the case of the multilayer feed-forward neural network, you initially can define weights by randomization. Subsequently, in the process of training, you can use the backpropagation algorithm, which is a means of updating weights starting from the output backwards. When you have finished training the multilayer feed-forward neural network, you are finished with encoding, since weights do not change after training is completed.

3) RECALL refers to getting an expected output for a given input. If the same input as before is presented to the network, the same corresponding output as before should result. The type of recall can characterize the network as being autoassociative or heteroassociative.

Autoassociation is the phenomenon of associating an input vector with itself as the output, whereas heteroassociation is that of recalling a related vector given an input vector. Suppose you have a fuzzy remembrance of a phone number. Luckily, you stored it in an autoassociative neural network. When you apply the fuzzy remembrance, you retrieve the actual phone number. This is a use of autoassociation.

3.2.8) A FEED FORWARD NETWORK

A sample feed-forward network, as shown in Fig6, has five neurons arranged in three layers: two neurons (labeled x1 and x2) in layer 1, two neurons (labeled x3 and x4) in layer 2, and one neuron (labeled x5) in layer 3. The arrows connecting the neurons show the direction of information flow; a feed-forward network has information flowing forward only. Each arrow that connects neurons has a weight associated with it (like w13, for example). You calculate the state x of each neuron by summing the weighted values that flow into it. The state of the neuron is its output value, and it remains the same until the neuron receives new information on its inputs.

Fig6 A feed forward network

For example, for x3 and x5:

x3 = w13 x1 + w23 x2

x5 = w35 x3 + w45 x4
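Plugging illustrative numbers into these equations makes the forward pass concrete; all input and weight values below are made up, and no scaling function is applied yet.

```python
# Evaluating the two feed-forward equations above by hand.
x1, x2 = 1.0, 0.5
w13, w23 = 0.4, -0.2     # layer-1 -> layer-2 weights into x3
w14, w24 = 0.1, 0.3      # layer-1 -> layer-2 weights into x4
w35, w45 = 0.6, 0.5      # layer-2 -> layer-3 weights into x5

x3 = w13 * x1 + w23 * x2          # 0.4 - 0.1   = 0.3
x4 = w14 * x1 + w24 * x2          # 0.1 + 0.15  = 0.25
x5 = w35 * x3 + w45 * x4          # 0.18 + 0.125 = 0.305
print(x5)
```

Information flows strictly left to right: x3 and x4 must be computed before x5, which is the defining property of a feed-forward network.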

We present information to this network at the leftmost nodes (layer 1), called the input layer. We can take information from any other layer in the network, but in most cases do so from the rightmost node(s), which make up the output layer. Weights are usually determined by a supervised training algorithm, where you present examples to the network and adjust weights appropriately to achieve a desired response. Once you have completed training, you can use the network without changing weights, and note the response for inputs that you apply. Note that a detail not yet shown is a nonlinear scaling function that limits the range of the weighted sum. This scaling function has the effect of clipping very large values in positive and negative directions for each neuron, so that the cumulative summing that occurs across the network stays within reasonable bounds. Typical real-number ranges for neuron inputs and outputs are -1 to +1 or 0 to +1. This can be contrasted with a completely different type of neural network, the Hopfield network.

3.3) ACTING UPON THE ENVIRONMENT THROUGH THE EFFECTORS [5]

3.3.1) Effectors / Actuators

• Effector: the component of a robot that has an effect on the environment.

• Actuator: the mechanism that enables the effector to execute some work (active vs. passive).

• Effectors: Actuator :: Hands: Muscles (tendons)

3.3.2) Types of Actuators

• Electric motor – electrical to mechanical energy

• Hydraulics: fluid pressure (large, dangerous, needs good packing)

• Pneumatic: air pressure, very powerful

• Photo reactive/ chemical reactive/ thermal/ piezoelectric

3.3.3) Kinematics

• Manipulator (links + joints)

• Kinematic chain (series of kinematic pairs)

• Forward kinematics vs Inverse kinematics

Fig7 A manipulator
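Forward kinematics maps joint variables to the end-effector position; inverse kinematics solves the opposite problem. A minimal sketch for a planar manipulator with two revolute joints, where the link lengths l1 and l2 are illustrative values:

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    # End-effector (x, y) for a two-link planar arm: each link
    # contributes its length along its accumulated joint angle.
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both joints at 0: arm stretched out along the x-axis.
print(forward_kinematics(0.0, 0.0))        # (2.0, 0.0)
# Elbow bent 90 degrees.
print(forward_kinematics(0.0, math.pi/2))  # ~(1.0, 1.0)
```

Forward kinematics like this has a unique answer, while the inverse problem generally does not: here an "elbow-up" and an "elbow-down" pose can reach the same point.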


Other basic joints:

Revolute Joint

1 DOF (variable: θ)

Fig8 A revolute joint

Prismatic Joint

1 DOF (linear) (variable: d)

Fig9 A prismatic joint

Spherical Joint

3 DOF (variables: θ1, θ2, θ3)

Fig10 A spherical joint


3.3.4) Actuator Types

• Electrical
• Hydraulic
• Pneumatic
• Others

Electrical actuators are the best of all:
• easy to control
• from mW to MW
• normally high velocities, 1000-10000 rpm
• several types
• accurate servo control
• ideal torque for driving
• excellent efficiency
• autonomous power system difficult

CONCLUSIONS:-

With the help of neural networks, smart robots can be built that can help us in many ways: in homes, industries, hospitals, astronomy, the military, etc. Robots are very good friends that make our lives easier and happier. They can do work that would never be possible by human effort alone, as in space exploration or industry. Robots can reach many places that are dangerous for humans, such as close to a nuclear reactor chamber to check its safety. Robots are fast and accurate; these key properties reduce human effort to a great extent and increase the productivity and accuracy of the product. Making robots that do their own thinking is still a matter of research. Although we have made robots that can learn on their own, making a robot think for itself remains a topic of interest. Whatever the status of robots in our society, one thing is sure: in the near future we will be surrounded by robots.


REFERENCES

1) Valluru B. Rao, C++ Neural Networks and Fuzzy Logic, 3rd edition, M&T Books, IDG Books Worldwide, Inc., 1995.

2) Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd edition, Prentice Hall, Englewood Cliffs, New Jersey 07632.

3) CEG 4392 Computer Systems Design Project.

4) http://www.melbpc.org.au/pcupdate/2205/2205article10.htm (accessed 4 Nov 2012).

5) Dr Suprava Patnaik, Introduction to Robotics.