  • People's Democratic Republic of Algeria Ministry of Higher Education and Scientific Research

    University M'Hamed BOUGARA, Boumerdes

    Institute of Electrical and Electronic Engineering

    Department of Power and Control

    Final Year Project Report Presented in Partial Fulfilment of the Requirements for the Degree of

    MASTER In Electrical and Electronic Engineering

    Option: Control

    Title:

    Presented by: - ELGHERIBI Abdellah Nabih - BOUAMER Tarek

    Supervisor: Mr. R. GUERNANE

    Design Of A Digital Control System And Path Planning Module For The ED-7220C Robot Arm

    Registration Number: 2014/2015

  • To our families, and friends

  • ACKNOWLEDGEMENT

    At the very outset, all our prayers and thankfulness are to Allah the Almighty for facilitating this work and for granting us the opportunity to be surrounded by great and helpful people at the IGEE Institute.

    We would like to express our everlasting gratitude to our supervisor, Dr. Reda GUERNANE, for his valuable encouragement, guidance and monitoring, without which this work would not have reached fruition; we ask Allah to reward him on our behalf.

    Our fathers and mothers deserve all the credit here; they have been a source of inspiration to us for years. We will never forget their continuous prayers for our success.

    No acknowledgement would be complete without expressing our appreciation and thankfulness to the maintenance laboratory staff for their support and help.

  • Abstract

    This project has the objective of designing and implementing a graphical software interface to program typical tasks for a robot arm; these include pick-and-place, trajectory tracking and basic assembly.

    The project goes through the following steps:

    1. Implement the power drive (PWM or linear) electronics for the five axes of the ED-7220C arm plus the gripper.

    2. Design an embedded digital control system for robot positioning.

    3. Design a Graphical user interface running on a remote PC using the ROS platform to

    command the robot.

    4. Implement a motion planning algorithm to automatically generate robot trajectories that

    satisfy the given task while avoiding obstacles.

  • TABLE OF CONTENTS

    ACKNOWLEDGEMENT ... II
    ABSTRACT ... III
    TABLE OF CONTENTS ... IV
    LIST OF FIGURES ... V
    LIST OF TABLES ... VI
    INTRODUCTION ... 1
    1. MOTIVATION ... 2
    2. PROBLEM STATEMENT AND GOAL ... 2
    3. SYSTEM OVERVIEW ... 3
    4. ORGANIZATION OF THE REPORT ... 3
    CHAPTER I: ROBOT KINEMATICS ... 5
    1.1. INTRODUCTION ... 6
    1.2. DIRECT/FORWARD KINEMATICS ... 7
    1.2.1. Assigning the coordinate frames ... 9
    1.2.2. ED-7220C DH parameters ... 9
    1.3. INVERSE KINEMATICS ... 12
    1.3.1. Geometric approach ... 12
    1.3.2. Analytical/algebraic approach ... 14
    1.4. CONCLUSION ... 16
    CHAPTER II: DYNAMICS AND CONTROL ... 17
    2.1. CONTROL TECHNIQUE ... 18
    2.2. INDEPENDENT JOINT CONTROL ... 19
    2.2.1. Cascade PID compensation ... 20
    2.2.2. Feedforward disturbance torque cancellation ... 21
    2.3. CONCLUSION ... 23
    CHAPTER III: ROBOT TRAJECTORY & MOTION PLANNING ... 24
    3.1. INTRODUCTION ... 25
    3.2. LINEAR SEGMENT METHOD WITH PARABOLIC BLENDS (LSPB) ... 27
    3.3. ED-7220C MOTION PLANNING ... 30
    3.4. CONCLUSION ... 33
    CHAPTER IV: RESULTS AND DISCUSSION ... 34
    4.1. HARDWARE IMPLEMENTATION ... 35
    4.2. MODELING THE ED-7220C ROBOT IN ROS ... 36
    4.3. FORWARD KINEMATICS ... 36
    4.4. CONTROL DESIGN ... 39
    4.5. TRAJECTORY GENERATION ... 42
    4.6. MOTION PLANNING ... 48
    CONCLUSION ... 53
    REFERENCES ... 54
    APPENDIX A ... 56

  • LIST OF FIGURES

    Figure (I.1): System block diagram ... 3
    Figure (1.1): ED-7220C and its kinematics model ... 6
    Figure (1.2): Kinematics block diagram ... 7
    Figure (1.3): DH frame assignment ... 7
    Figure (1.4): ED-7220C robot arm frame assignment ... 8
    Figure (1.5): Top view of robot ... 13
    Figure (1.6): Planar view of ED-7220C robot arm ... 14
    Figure (2.1): Control techniques analysis of manipulators ... 18
    Figure (2.2): Control structure for a given robot joint ... 19
    Figure (2.3): Block diagram of closed-loop control system ... 21
    Figure (2.4): Estimator design ... 21
    Figure (2.5): 90° configuration of ED-7220C robot arm ... 21
    Figure (3.1): Trajectory planning block diagram ... 26
    Figure (3.2): Typical joint space trajectory ... 26
    Figure (3.3): Blend times for LSPB trajectory ... 27
    Figure (3.4): Trajectory using LSPB ... 30
    Figure (3.5): Trajectory generation and motion planning diagram ... 31
    Figure (3.6): RRT flowchart ... 32
    Figure (3.7): Basic RRT exploring phase of 5 DOF robot arm ... 33
    Figure (4.1): The overall system structure ... 35
    Figure (4.2): The 3D model of ED-7220C in ROS ... 36
    Figure (4.3): Forward Kinematic window for the initial configuration ... 37
    Figure (4.4): Initial configuration of ED-7220C ... 37
    Figure (4.5): FK window for the selected configuration at position (0.03, -41.58, 19.19) ... 38
    Figure (4.6): Robot position at (0.03, -41.58, 19.19) in ROS ... 38
    Figure (4.7): The actual response of the base joint to a 50° setpoint ... 40
    Figure (4.8): The actual response of the wrist joint to a 100° setpoint ... 40
    Figure (4.10): Shoulder torque rejection (case 1) from 110° to 90° ... 41
    Figure (4.11): Shoulder torque rejection (case 2) from 90° to 110° ... 41
    Figure (4.12): LSPB trajectory curves of the base ... 43
    Figure (4.13): LSPB trajectory curves of the shoulder ... 44
    Figure (4.14): LSPB trajectory curves of the elbow ... 45
    Figure (4.15): LSPB trajectory curves of the wrist ... 46
    Figure (4.16): End-effector generated path in XYZ plane ... 48
    Figure (4.17): Planned scene with obstacle ... 49
    Figure (4.18): Pick and place position in Rviz ... 49
    Figure (4.19): Waypoints of the planned path shown as a black trail (without obstacle) ... 49
    Figure (4.20): End-effector curve of the planned path ... 50
    Figure (4.21): The RRT planned path with obstacle avoidance ... 50
    Figure (4.22): End-effector curve of the planned path with obstacle avoidance ... 50
    Figure (A.1): L298 IC H-bridge ... 56
    Figure (A.2): H-bridge structure and its working principle ... 56
    Figure (A.3): The circuit design in Proteus and its 3D PCB ... 57
    Figure (A.4): Arduino Due microcontroller ... 58
    Figure (A.5): ED-7220C workspace ... 59
    Figure (A.7): Forward Kinematic window ... 60

  • LIST OF TABLES

    Table (1.1): DH parameters for ED-7220C robot arm ... 10
    Table (1.2): The link lengths ... 10
    Table (2.1): Shoulder position with corresponding estimated torque ... 22
    Table (2.2): Elbow position with corresponding estimated torque ... 23
    Table (4.1): Link colors reference ... 36
    Table (4.2): Initial configuration of ED-7220C ... 36
    Table (4.3): Computed and measured results ... 39
    Table (4.4): Motors dead zone ... 39
    Table (4.5): Generated trajectory waypoints ... 47
    Table (A.5): Readings of joint encoders ... 58
    Table (A.6): ED-7220C range of motion ... 59

  • Introduction

    1. Motivation

    In recent years, industrial and commercial systems have taken advantage of robot technology to achieve high efficiency and performance. A large number of control studies and applications presented in recent years have concentrated on the control of robotic systems. The robot manipulator is one of the fields of interest in industrial, educational and medical applications. It can work in unpredictable, hazardous and inhospitable environments that humans cannot reach. For example, working in chemical or nuclear reactors is very dangerous, whereas using a robot instead of a human involves no risk to human life. Therefore, modeling and analyzing robot manipulators and applying control techniques are very important before deploying them in such environments, so that they work with high accuracy.

    In Algeria, many industrial applications can make use of robot technology and robot manipulators; it is an attractive field to apply and develop for industry, and this thesis is meant to be suitable for such applications. On the other hand, some universities and colleges offer courses related to robotics. These courses mainly focus on theoretical concepts without giving much attention to the practical control of different robot manipulators. This thesis may therefore be considered a valuable educational tool for their laboratories.

    2. Problem statement and goal

    Robot manipulators are one of the key components of automation and industrial control. The main problem with commercial manipulator solutions is that they require (or come with) proprietary (black-box) and expensive control hardware (modules) and programming environments. Our goal is to design a robot manipulator system and demonstrate an easy-to-use graphical task-programming interface.

    To start, we need a robot arm, and since, as mentioned previously, industrial arms are beyond most universities' budgets, we resorted to the inexpensive ED-7220C 5-axis educational robot arm available at our institute. Although less precise and powerful than its professional counterparts, it can nevertheless demonstrate most industrial manipulator tasks.

    Notice that even though the ED-7220C is educational, its control hardware and software are unfortunately closed-source and thus cannot be modified or improved upon. Using the robot arm only, our goal is to replace its drive/control box and software with open ones; this requires going through the following development steps:

    - Mathematical analysis of the forward and inverse kinematics of the robot.
    - Design of the actuator power drive system.
    - Design of a digital position controller for the robot arm joints.


    - Design of an interface to generate and simulate robot trajectories for pick-and-place tasks in a cell environment with obstacles.

    - Integration of the interface with the robot control system.

    3. System overview

    The complete system block diagram shown in Figure (I.1) consists of:

    Personal computer: this is where the high-level task programming is performed (offline). It hosts the Graphical User Interface (GUI), which supplies the desired trajectory to the controller.

    Arduino Due microcontroller: this is where the robot arm controller is executed (online). It takes input from the PC and feedback signals from the arm, and issues the command signals to the arm.

    The ED-7220C Robot arm: reacts to the control signals and provides feedback to the Arduino controller.

    The Graphical User Interface (GUI), designed with the Processing software, consists of two parts: forward and inverse kinematics. Forward kinematics consists of finding the position of the end-effector in space knowing the movements of its joints; inverse kinematics consists of determining the joint variables corresponding to a given end-effector position and orientation. A path is defined as a sequence of robot configurations in a particular order without regard for the timing of these configurations; a trajectory is concerned with when each part of the path must be attained, and thus specifies timing.

    Figure (I.1): System block diagram

    Serial communication is the simplest way to communicate between two devices. A serial interface is established through a serial port object, which can be created using the SERIAL function. The main function of the Arduino Due microcontroller is to interface the PC with the ED-7220C robot arm: it receives data from the serial port, processes it, sends control signals to the arm's servo motors, and feeds the data from the servo motor encoders back to the controller.

    4. Organization of the report

    This report is organized as follows:


    Chapter 1 presents the analysis of the ED-7220C kinematics (forward and inverse), while Chapter 2 discusses the low-level problem of robot dynamics and the control approach used to position the arm. Chapter 3 moves to the higher-level problem of path and trajectory planning. Practical results and discussion appear in Chapter 4. We finish with a general conclusion, the appendices and the references used throughout our work.


  • ROBOT KINEMATICS

    1.1 Introduction

    Kinematics is the description of motion without regard to the forces that cause it. It deals with the study of position, velocity, acceleration, and higher derivatives of the position variables.

    Figure (1.1): ED-7220C and its kinematics model

    In robot simulation, a system analysis such as the kinematics analysis needs to be carried out; its purpose is to study the movements of each part of the robot mechanism and the relations between them. The kinematics analysis is divided into forward and inverse analysis. The forward kinematics consists of finding the position of the end-effector in space knowing the movements of its joints, as F(θ1, θ2, ..., θn) = [x, y, z, R], and the inverse kinematics consists of determining the joint variables corresponding to a given end-effector position and orientation, as F(x, y, z, R) = [θ1, θ2, ..., θn]. Figure (1.1) above shows the ED-7220C arm along with its abstract kinematics view, and Figure (1.2) below shows a simplified block diagram of kinematic modeling.


    Figure (1.2): Kinematics block diagram

    A commonly used convention for selecting frames of reference in robotic applications is the Denavit-Hartenberg or D-H convention, as shown in Figure (1.3). In this convention each homogeneous transformation T_i is represented as a product of four basic transformations:

        T_i^(i-1) = Rot(z, θi) Trans(z, di) Trans(x, ai) Rot(x, αi)    (1.1)

    where the four quantities θi, ai, di, αi are the parameters of link i and joint i.

    Figure (1.3): DH frame assignment

    Here the notation Rot(x, αi) stands for rotation about the xi axis by αi, Trans(x, ai) is translation along the xi axis by a distance ai, Rot(z, θi) stands for rotation about the zi axis by θi, and Trans(z, di) is translation along the zi axis by a distance di.

    1.2 Direct/Forward Kinematics

    The forward kinematics problem can be stated as follows: given the joint variables of the robot, determine the position and orientation of the end-effector in a workspace frame. Since each joint has a single degree of freedom, the action of each joint can be described by a single number, i.e. θi, the angle of rotation in the case of a revolute joint. The objective of forward kinematic analysis is to determine the cumulative effect of the joint variables.

    Suppose a robot has n+1 links numbered from zero to n, starting from the base of the robot, which is taken as link 0. The joints are numbered from 1 to n, and zi is a unit vector along the axis in space about which links i-1 and i are connected. The i-th joint variable is denoted by qi: in the case of a revolute joint, qi is the angle of rotation, while in the case of a prismatic joint qi is the joint translation. Next, a coordinate frame is attached rigidly to each link. To be specific, we choose frames 1 through n such that frame i is rigidly attached to link i. Figure (1.4) illustrates the idea of attaching frames rigidly to links in the case of the ED-7220C robot.

    Figure (1.4): ED-7220C robot arm frame assignment

    T_i^(i-1) is a homogeneous matrix defined to transform the coordinates of a point from frame i to frame i-1. The matrix T_i^(i-1) is not constant, but varies as the configuration of the robot is changed. However, the assumption that all joints are either revolute or prismatic means that T_i^(i-1) is a function of only a single joint variable, namely qi. In other words,

        T_i^(i-1) = T_i^(i-1)(qi)    (1.2)

    The homogeneous matrix that transforms the coordinates of a point from frame i to frame j is denoted by T_i^j (i > j). Denoting the position and orientation of the end-effector with respect to the inertial or base frame by a three-dimensional vector o_n^0 and a 3x3 rotation matrix R_n^0, respectively, we define the homogeneous matrix

        T_n^0 = [ R_n^0  o_n^0
                    0      1   ]    (1.3)

    Then the position and orientation of the end-effector in the inertial frame are given by

        T_n^0(q1, q2, ..., qn) = T_1^0(q1) T_2^1(q2) ... T_n^(n-1)(qn)    (1.4)

    Each homogeneous transformation T_i^(i-1) is of the form

        T_i^(i-1) = [ R_i^(i-1)  o_i^(i-1)
                          0          1     ]    (1.5)

    Hence

        T_i^j = T_(j+1)^j ... T_i^(i-1) = [ R_i^j  o_i^j
                                              0      1   ]    (1.6)

    The matrix R_i^j expresses the orientation of frame i relative to frame j (i > j) and is given by the rotational parts of the intermediate matrices as

        R_i^j = R_(j+1)^j ... R_i^(i-1)    (1.7)

    The vectors o_i^j (i > j) are given recursively by the formula

        o_i^j = o_(i-1)^j + R_(i-1)^j o_i^(i-1)    (1.8)

    1.2.1. Assigning the Coordinate Frames

    The ED-7220C has five rotational joints and a moving gripper, as shown in Figure (1.4). Joint 1 represents the base and its axis of motion is z1; it provides a rotational angular motion θ1 around the z1 axis in the x1y1 plane. Joint 2 is identified as the shoulder and its axis is perpendicular to the joint 1 axis; it provides a rotational angular motion θ2 around the z2 axis in the x2y2 plane. The z axes of joint 3 (elbow) and joint 4 (wrist) are parallel to the joint 2 z axis; they provide the θ3 and θ4 angular motions in the x3y3 and x4y4 planes respectively. Joint 5 is identified as the grip rotation: its z5 axis is perpendicular to the z4 axis and it provides the θ5 angular motion in the x5y5 plane.

    1.2.2. ED-7220C DH Parameters

    The Denavit-Hartenberg analysis is one of the most widely used methods; in it, the direct kinematics is determined from a set of parameters that have to be defined for each mechanism. The homogeneous transformation matrix was chosen for this analysis: once a coordinate transformation between two frames whose relative position and orientation are fixed is defined, it is possible to work with elementary homogeneous transformation operations. The D-H parameters of the ED-7220C for the assigned frames are given in Table (1.1).

    Table (1.1): DH parameters for the ED-7220C robot arm

        i    α (deg)    a           d     θ
        1    0          0           d1    θ1
        2    90         a1 = 2.5    0     θ2
        3    0          a2          0     θ3
        4    0          a3          0     θ4
        5    90         0           d5    θ5
        6    0          0           0     θ6

    Table (1.2): The link lengths

        Joint       Symbol    Link length (cm)
        Waist       d1        37.0
        Shoulder    a2        22.0
        Elbow       a3        22.0
        Wrist       d5        15.0

    Using the parameters of Table (1.1), the transformation matrices T_1^0 to T_6^5 can be obtained as shown below (writing ci = cos θi and si = sin θi). For example, T_1^0 is the transformation between frames 0 and 1.

        T_1^0 = [ c1   0    s1   a1 c1
                  s1   0   -c1   a1 s1
                   0   1     0      d1
                   0   0     0       1 ]    (1.9)

        T_2^1 = [ c2  -s2    0   a2 c2
                  s2   c2    0   a2 s2
                   0    0    1       0
                   0    0    0       1 ]    (1.10)

        T_3^2 = [ c3  -s3    0   a3 c3
                  s3   c3    0   a3 s3
                   0    0    1       0
                   0    0    0       1 ]    (1.11)


        T_4^3 = [ c4   0    s4   0
                  s4   0   -c4   0
                   0   1     0   0
                   0   0     0   1 ]    (1.12)

        T_5^4 = [ c5  -s5    0    0
                  s5   c5    0    0
                   0    0    1   d5
                   0    0    0    1 ]    (1.13)

    Using the above values of the transformation matrices; the link transformations can be concatenated (multiplied together) to find the single transformation that relates frame (5) to frame (0):

        T_5^0 = T_1^0 T_2^1 T_3^2 T_4^3 T_5^4 = [ nx  ox  ax  px
                                                  ny  oy  ay  py
                                                  nz  oz  az  pz
                                                   0   0   0   1 ]    (1.14)

    The transformation given by equation (1.14) is a function of all five joint variables. From the robot's joint positions, the Cartesian position and orientation of the last link may be computed using equation (1.14).

    The first three columns of the matrix represent the orientation of the end-effector, whereas the last column represents its position. The orientation and position of the end-effector can be expressed in terms of the joint angles as:

        nx = c1 c234 c5 + s1 s5
        ny = s1 c234 c5 - c1 s5    (1.15)
        nz = s234 c5

        ox = -c1 c234 s5 + s1 c5
        oy = -s1 c234 s5 - c1 c5    (1.16)
        oz = -s234 s5

        ax = c1 s234
        ay = s1 s234    (1.17)
        az = -c234

        px = c1 (c234 d5 + c23 a3 + c2 a2 + a1)
        py = s1 (c234 d5 + c23 a3 + c2 a2 + a1)    (1.18)
        pz = s234 d5 + s23 a3 + s2 a2 + d1

    where cij = cos(θi + θj), sij = sin(θi + θj), and so on.


    1.3 Inverse kinematics

    Inverse kinematics (IK) analysis determines the joint angles for a desired position and orientation in Cartesian space. The total transformation matrix in equation (1.14) will be used to derive the inverse kinematics equations. IK is a more difficult problem than forward kinematics.

    The solution of the inverse kinematics is more complex than the direct kinematics and there is no general analytical solution method; each manipulator needs a particular method that takes the system structure and restrictions into account. Two solution approaches, geometric and algebraic, are used here for deriving the inverse kinematics solution. Let us start with the geometric approach.

    1.3.1. Geometric Approach

    Using the IK-Cartesian mode, the user specifies the desired target position of the gripper in Cartesian space as (x, y, z), where z is the height, and the angle ψ of the gripper relative to the ground (see Figure 1.6) is held constant. This constant allows users to move objects without changing the object's orientation (the "holding a cup of liquid" scenario). In addition, by either keeping ψ fixed in position mode or keeping the wrist fixed relative to the rest of the arm, the inverse kinematic equations can be solved in closed form, as we now show for the case of a fixed ψ.

    The lengths d1, a2, a3 and d5 correspond to the base height, shoulder length, elbow length and gripper length, respectively, and are constant. The angles θ1, θ2, θ3, θ4 and θ5 correspond to the base rotation, shoulder, elbow, wrist, and end-effector, respectively. These angles are updated as the specified position in space changes. We solve for the joint angles of the arm, θ1 to θ4, given the desired position (x, y, z) and ψ, which are entered by the user. From Figure (1.5) we clearly see that θ1 = atan2(y, x), and the specified radial distance from the base, d, is related to x and y by:

        d = sqrt(x² + y²),    x = d cos θ1,    y = d sin θ1    (1.19)


    Figure (1.5): Top view of robot

    Moving now to the planar view in Figure (1.6), we find a relationship between the joint angles θ2, θ3, θ4 and ψ as follows:

        ψ = θ2 + θ3 + θ4    (1.20)

    Since ψ is given, we can calculate the radial distance and height of the wrist joint:

        d4 = d - d5 cos ψ
        z4 = z - d5 sin ψ

    or, equivalently,

        d4 = a2 cos θ2 + a3 cos(θ2 + θ3)
        z4 = a2 sin θ2 + a3 sin(θ2 + θ3) + d1    (1.21)

    4 = 2 sin(2) + 3 sin(2 + 3) + 1 Now we want to determine 2 and 3. We first solve for , and s (from Figure 1.6) uses the law of cosines as: = 2(2+22 32, 22s)

    = 2(4 1, 4) (1.22) = (4 1)2 + 42

    With these intermediate values, we can now find the remaining angle values as:

    2 = 3 = 2(2 22 32, 223) (1.23)

    4 = 2 3


    Figure (1.6): Planar view of ED-7220C robot arm

    1.3.2. Analytical (Algebraic) Approach

    Using the x, y and z components obtained from the direct kinematics:

        x = c1 [a2 c2 + a3 c23 + d5 c234 + a1]    (1.24)
        y = s1 [a2 c2 + a3 c23 + d5 c234 + a1]    (1.25)
        z = [d5 s234 + a3 s23 + a2 s2] + d1       (1.26)

    Taking the square root of (1.24)² + (1.25)², the simplified equation is obtained:

        a2 c2 + a3 c23 = sqrt((x - a1 c1)² + (y - a1 s1)²) - d5 c234    (1.27)

    The first joint movement, defined by θ1, can be calculated from geometric parameters only:

        θ1 = atan2(y, x)

    Now we can calculate θ3. From equation (1.26):

        a2 s2 + a3 s23 = z - d5 s234 - d1    (1.28)

    Combining (1.27)² + (1.28)²:

        c3 = [(z - d5 s234 - d1)² + (sqrt((x - a1 c1)² + (y - a1 s1)²) - d5 c234)² - a2² - a3²] / (2 a2 a3)    (1.29)

    where c234 = cos(θ2 + θ3 + θ4) and s234 = sin(θ2 + θ3 + θ4).


        c3 = [(z - d5 sin ψ - d1)² + (sqrt((x - a1 c1)² + (y - a1 s1)²) - d5 cos ψ)² - a2² - a3²] / (2 a2 a3)    (1.30)
        s3 = sqrt(1 - c3²)

        θ3 = atan2(s3, c3)    (1.31)

    After calculating θ3, we can find θ2 from:

        β = atan2(z - d5 sin ψ - d1, sqrt((x - a1 c1)² + (y - a1 s1)²) - d5 cos ψ)    (1.32)
        γ = atan2(a3 s3, a2 + a3 c3)

        θ2 = β - γ = atan2(z - d5 sin ψ - d1, sqrt((x - a1 c1)² + (y - a1 s1)²) - d5 cos ψ) - atan2(a3 s3, a2 + a3 c3)    (1.33)

        ψ = θ2 + θ3 + θ4    (1.34)
        θ4 = ψ - θ2 - θ3    (1.35)

    We can find θ5 by using the total transformation matrix of equation (1.14):

        s5 = s1 r11 - c1 r21
        c5 = s1 r12 - c1 r22

    where r11, r12, r21 and r22 are the corresponding rotation elements of T_5^0:

        r11 = c1 c234 c5 + s1 s5
        r12 = -c1 c234 s5 + s1 c5    (1.36)
        r21 = s1 c234 c5 - c1 s5
        r22 = -s1 c234 s5 - c1 c5

        θ5 = atan2(s5, c5)    (1.37)
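    A minimal Python sketch of the closed-form inverse kinematics derived above is given below (illustrative only, not the thesis software). The link constants are those of Table (1.2); the elbow_up flag, the clamping of the cosine, and the sign convention for the two elbow solutions are implementation assumptions.

```python
# Illustrative sketch: closed-form IK of Sections 1.3.1-1.3.2 for a target (x, y, z) in cm
# and a fixed gripper pitch psi (angle of the gripper relative to the ground).
import math

D1, A1, A2, A3, D5 = 37.0, 2.5, 22.0, 22.0, 15.0   # cm, Table (1.2)

def inverse_kinematics(x, y, z, psi, elbow_up=False):
    theta1 = math.atan2(y, x)                          # base rotation, eq. (1.19)
    d = math.hypot(x, y) - A1                          # radial distance beyond the base offset
    d4 = d - D5 * math.cos(psi)                        # wrist centre, eq. (1.21)
    z4 = z - D5 * math.sin(psi)
    s = math.hypot(d4, z4 - D1)                        # shoulder-to-wrist distance, eq. (1.22)
    c3 = (s**2 - A2**2 - A3**2) / (2.0 * A2 * A3)      # law of cosines, eq. (1.23)
    c3 = max(-1.0, min(1.0, c3))                       # clamp numerical noise
    theta3 = -math.acos(c3) if elbow_up else math.acos(c3)
    beta = math.atan2(z4 - D1, d4)
    theta2 = beta - math.atan2(A3 * math.sin(theta3), A2 + A3 * c3)   # eq. (1.33)
    theta4 = psi - theta2 - theta3                     # eq. (1.35)
    return theta1, theta2, theta3, theta4

if __name__ == "__main__":
    # Should recover approximately (0, pi/2, 0, 0) for the straight-up pose (2.5, 0, 96) cm.
    print(inverse_kinematics(2.5, 0.0, 96.0, psi=math.pi / 2))
```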


    1.4 Conclusion

    This chapter described the kinematic model of the ED-7220C robot arm by deriving its DH parameters. This model makes it possible to control the manipulator in joint space or Cartesian space to achieve any reachable position and orientation. However, the inverse kinematics is a tedious task because it results in nonlinear equations that may have no solution, a unique solution, or multiple solutions.


  • ROBOT DYNAMICS AND CONTROL

    2.1 Control Technique

    There are two well-known control techniques and methodologies that can be applied to the control of manipulators, coupled and decoupled control techniques. The particular control method chosen as well as the manner in which it is implemented can have a significant impact on the performance of the manipulator and consequently on the range of its possible applications.

    The coupled control technique, or multivariable control, treats the system as one interconnected chain of ideal rigid bodies with generalized forces acting at the joints. The decoupled control technique, or independent joint control, controls each axis of the manipulator as a single-input/single-output (SISO) system; any coupling effect due to the motion of the other links is treated as a disturbance.

    The motion of an n-degree-of-freedom robot manipulator under the coupled control technique can be described by the following set of differential equations [1]:

        M(q) q̈ + C(q, q̇) q̇ + g(q) = τ    (2.1)

    where M(q) is the inertia matrix, a symmetric positive-definite matrix, g(q) is the vector accounting for gravity in the inertial frame, and C(q, q̇) is the matrix built from the Christoffel symbols of the first kind. However, equation (2.1) is extremely complicated, and it is difficult to determine its matrices. Figure (2.1) illustrates the control techniques for manipulators.

    Figure (2.1): Control techniques analysis of manipulators.

    The disadvantages of the coupled control technique are:

    - It is extremely complicated, and the parameters of equation (2.1) are difficult to determine.
    - A number of dynamic effects are not included in the equation (e.g. friction at the joints).


    - The equation does not capture effects such as elastic deformation of bearings and gears, deflection of the links under load, and vibration.

    In this thesis we consider the more practical type of control strategy, namely decoupled or independent joint control.

    2.2 Independent joint control

    Various controllers have been designed and applied to robot manipulators. Feedback control has been the most widely used approach in industrial and commercial applications for decades, due to its simplicity and low cost of implementation.

    However, feedback control alone is in general not sufficient in the presence of varying disturbances; this is particularly the case in independent robot joint control. Here, the load on a controlled joint is affected by:

    - Gravity, which opposes ascending motion and helps descending motion, and is thus seen as varying.
    - The configuration and motion of the rest of the arm, which can also be seen as a varying disturbance.

    As a result, in addition to feedback, we need a second degree-of-freedom (DOF) of control to compensate for the disturbances; this second element is referred to as feedforward. In summary, our independent joint control is a 2-DOF structure (Figure (2.2)), composed of:

    1. Cascade compensator: an in-loop compensator used to achieve adequate control performance (steady-state error, transient error). A suitable type is a PID controller.
    2. Feedforward compensator: an out-of-loop compensator used to compensate for (cancel) the undesirable disturbances; a particular type is demonstrated in the corresponding section.

    Figure (2.2): Control structure for a given robot joint


    2.2.1 Cascade PID compensation

    PID controllers are the most widely used control technique in control and robotics applications. PID control offers easy ad hoc methods for tuning its parameters without precise knowledge of the plant model. The algorithm is explained below.

    As shown in Figure 2.3, the error signal e(t) is the difference between the set point r(t) and the process output y(t). If the error between the output and input values is large, a large input signal u(t) is applied to the physical system; if the error is small, a small input signal is used. As its name suggests, the change in the control signal u(t) is directly proportional to the error signal for a given proportional gain Kp. Mathematically, the output of the proportional controller is given as:

        u(t) = Kp e(t)    (2.2)

    where e(t) is the error signal and Kp is the proportional gain.

    A proportional term alone is not sufficient in practical cases to meet a specified requirement (e.g. small overshoot, good transient response), because a large proportional gain gives a fast rise time with large overshoot and an oscillatory response. Therefore, a derivative term is added to form a PD controller, which tends to adjust the response as the process approaches the set point. The output of the PD controller is calculated from the sum of the current error and the rate of change of the error with respect to time. The PD action gives a slower response with less overshoot than a proportional controller alone. Mathematically, the PD controller is represented as:

        u(t) = Kp e(t) + Kd de(t)/dt    (2.3)

    The main function of the third, integral term is to reduce the steady-state error that remains with proportional gain alone; a smaller integration time results in a faster change of the controller output. The general form of the PID controller in continuous time is:

        u(t) = Kp e(t) + Kd de(t)/dt + Ki ∫₀ᵗ e(τ) dτ    (2.4)

    Each of the three components of the PID controller is amplified by an individual gain, and the sum of the three terms is applied as an input to the plant to adjust the process. Note that the purely derivative or integral-plus-derivative variations are never used. In all cases except proportional control, the PID compensator contributes at least one pole and one zero.
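    A minimal discrete-time sketch of equation (2.4) is given below (illustrative only, not the firmware deployed on the Arduino). The sample time and output limits are assumptions; with Kd = 0 this reduces to the PI form actually used for the ED-7220C joints in Section 4.4.

```python
# Illustrative sketch: discrete PID compensator (Euler discretization of eq. 2.4).
class PID:
    def __init__(self, kp, ki, kd, dt, out_min=-255.0, out_max=255.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max   # assumed actuator (PWM) limits
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                # integral term accumulation
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, u))  # saturate the command

# Example: PI gains reported for the base joint in Section 4.4 (Kp = 3, Ki = 0.05).
base_pi = PID(kp=3.0, ki=0.05, kd=0.0, dt=0.01)
print(base_pi.update(setpoint=50.0, measurement=0.0))
```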


    Figure (2.3): Block diagram of the closed-loop control system

    2.2.2 Feedforward disturbance torque cancellation

    The notion of a feedforward estimator has been introduced as a method to reject time-varying but partially known disturbances (here a torque). Suppose that r(t) is an arbitrary reference input.

    The main role of the compensator is to estimate and anticipate the torque at different arm positions in order to operate the arm. In this thesis an estimator has been applied only to the joints where the effect of the torque is considerable: the effect of the torque on the base and wrist is not considerable, but on the shoulder and elbow it is.

    The way the feedforward compensator is designed is illustrated for the shoulder joint of the ED-7220C; the same approach was used for the elbow. First, consider Figure (2.5).

    Figure (2.4): Estimator design

    Figure (2.5): 90° configuration of the ED-7220C robot arm

    To identify the estimator, the system in Figure (2.4) was used to control the shoulder and the following steps were followed:

    At the 90° shoulder position, as in Figure (2.5), we look for the required torque τ1(90°) that just moves the shoulder.


    Choosing another position between 90 and 35 (35<

    Identifying the estimator of the elbow follows the same procedure as for the shoulder, taking into consideration the position of the elbow with respect to the shoulder and the elbow's position limits.

    Table (2.2) shows different elbow positions with the corresponding estimated torque.

    Table (2.2): Elbow position with corresponding estimated torque

        Elbow position            Torque estimator
        from 90° to -10°          τ = 0.12 (θ - 90) + 50
        from -10° to 90°          τ = (59 (θ - 90)² + 8061 (θ - 90) - 225000) / 4500
        from -10° to -65°         τ = 6 (θ - 90) + 190513
        from -65° to -10°         τ = 7 (θ - 90) + 184060
        from 90° to 150°          τ = (θ - 90) + 2505
        from 150° to 90°          τ = 0.75 (θ - 90) + 50
        from 150° to 175°         τ = 2 (θ - 90) + 107025
        from 175° to 150°         τ = (θ - 90) + 4155
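    The estimator is therefore a direction-dependent, piecewise function of the joint angle that is added to the cascade compensator output. A minimal Python sketch of this lookup structure is shown below (illustrative only, not the thesis firmware); it uses just two of the identified segments from Tables (2.1)-(2.2) as examples, and the segment selection logic is an assumption.

```python
# Illustrative sketch: piecewise, direction-dependent feedforward torque term (Figure 2.2).
def feedforward_torque(theta_deg, decreasing):
    """Return a feedforward command for the joint angle (deg) and direction of motion.
    Only two sample segments are shown; the full identification uses eight segments."""
    offset = theta_deg - 90.0
    if decreasing and -10.0 <= theta_deg <= 90.0:
        return 0.12 * offset + 50.0    # segment identified for motion from 90 deg toward -10 deg
    if decreasing and 90.0 <= theta_deg <= 150.0:
        return 0.75 * offset + 50.0    # segment identified for motion from 150 deg toward 90 deg
    return 0.0                         # remaining segments omitted in this sketch

def joint_command(u_cascade, theta_deg, decreasing):
    """Total command = cascade (PI) output + feedforward cancellation term."""
    return u_cascade + feedforward_torque(theta_deg, decreasing)
```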

    2.3 Conclusion

    In this chapter we introduced a 2-DOF control structure to compensate for disturbances. The first element is the cascade compensator, an in-loop compensator used to achieve adequate control performance; the second is the feedforward compensator, an out-of-loop compensator used to cancel undesirable disturbances.


  • ROBOT TRAJECTORY & MOTION PLANNING


    3.1 Introduction

    During robot motion, the robot controller is provided with a steady stream of goal positions and velocities to track. This specification of the robot position as a function of time is called a trajectory. In some cases the trajectory is completely specified by the task; in other cases, for example when the task is simply to move the end-effector from one position to another in a given time, we have freedom to design the trajectory to meet these constraints. This is the domain of trajectory planning. The trajectory should be a sufficiently smooth function of time, and it should respect any given limits on joint velocities, accelerations, or torques.

    The goal of this chapter is to find a trajectory that connects an initial to a final configuration. Without loss of generality, we will consider planning the trajectory for a single joint, since the trajectories for the remaining joints are created independently and in exactly the same way. Thus, we will concern ourselves with the problem of determining q(t), where q(t) is a scalar joint variable.

    We suppose that at time t0 the joint variable satisfies

        q(t0) = q0    (3.1)
        q̇(t0) = v0    (3.2)

    and we wish to attain at tf the values

        q(tf) = qf    (3.3)
        q̇(tf) = vf    (3.4)

    Figure (3.2) shows a suitable trajectory for this motion. In addition, we may wish to specify constraints on the initial and final accelerations. In this case, we have two additional equations:

        q̈(t0) = α0    (3.5)
        q̈(tf) = αf    (3.6)

    The desired path is approximated by a class of polynomial functions, which generate a sequence of time-based control set points for driving the manipulator from its initial configuration to its destination. Figure (3.1) shows the trajectory planning block diagram.


    Figure (3.1): Trajectory planning block diagram

    A smooth trajectory such as that shown in Figure (3.2) can be generated by a polynomial function of t; different trajectory planning methods require polynomials with different numbers of independent coefficients. These methods are explained briefly as follows:

    - Cubic polynomial trajectories: this method uses a polynomial with four independent coefficients that can be chosen to satisfy the four constraints (3.1)-(3.4). Thus, we consider a cubic trajectory of the form

          q(t) = a0 + a1 t + a2 t² + a3 t³    (3.7)

      A cubic trajectory gives continuous positions and velocities at the start and finish times, but discontinuities in the acceleration. The derivative of acceleration is called the jerk. A discontinuity in acceleration leads to an impulsive jerk, which may excite vibration modes in the manipulator and reduce tracking accuracy.

    - Quintic polynomial trajectories: in this method we have six constraints (one each for the initial and final configurations, the initial and final velocities, and the initial and final accelerations). Therefore, we require a fifth-order polynomial

          q(t) = a0 + a1 t + a2 t² + a3 t³ + a4 t⁴ + a5 t⁵    (3.8)

    Figure (3.2): Typical joint space trajectory


    - Linear segment method with parabolic blends (LSPB): in this project the LSPB method was used, as it is suitable when a constant velocity is desired along a portion of the path. This method is explained in detail in the next section.

    3.2 Linear segment method with parabolic blends (LSPB)

    The LSPB trajectory is such that the velocity is initially ramped up to its desired value and then ramped down when it approaches the goal position. To achieve this, we specify the desired trajectory in three parts. The first part, from time t0 to time tb, is a quadratic polynomial; this results in a linear ramp velocity. At time tb, called the blend time, the trajectory switches to a linear function, which corresponds to a constant velocity. Finally, at time tf - tb the trajectory switches once again, this time to a quadratic polynomial, so that the velocity is again linear.

    Figure (3.3): Blend times for LSPB trajectory

    We choose the blend time tb so that the position curve is symmetric, as shown in Figure (3.3). For convenience, suppose that t0 = 0 and q̇(tf) = 0 = q̇(0). Then between times 0 and tb we have

        q(t) = a0 + a1 t + a2 t²    (3.9)

    so that the velocity is

        q̇(t) = a1 + 2 a2 t    (3.10)

    The constraints q(0) = q0 and q̇(0) = 0 imply that


        a0 = q0    (3.11)
        a1 = 0     (3.12)

    At time tb we want the velocity to equal a given constant, say V. Thus, we have

        q̇(tb) = 2 a2 tb = V    (3.13)

    which implies that

        a2 = V / (2 tb)    (3.14)

    Therefore, the required trajectory between 0 and tb, with α = V / tb, is given as:

        q(t) = q0 + (V / (2 tb)) t² = q0 + (α / 2) t²    (3.15)
        q̇(t) = (V / tb) t = α t                          (3.16)
        q̈(t) = V / tb = α                                (3.17)

    where α denotes the acceleration. Now, between time tb and tf - tb, the trajectory is a linear segment (corresponding to a constant velocity V):

        q(t) = a0 + V t    (3.18)

    Since, by symmetry,

        q(tf / 2) = (q0 + qf) / 2    (3.19)

    we have

        (q0 + qf) / 2 = a0 + V tf / 2    (3.20)

    which yields

        a0 = (q0 + qf - V tf) / 2    (3.21)

    Since the two segments must blend at time tb we require


        q0 + (V / 2) tb = (q0 + qf - V tf) / 2 + V tb    (3.22)

    which gives, upon solving for the blend time tb,

        tb = (q0 - qf + V tf) / V    (3.23)

    Note that we have the constraint 0 < tb ≤ tf / 2. This leads to the inequality

        (qf - q0) / V < tf ≤ 2 (qf - q0) / V    (3.24)

    The inequality can be written in another way as

        (qf - q0) / tf < V ≤ 2 (qf - q0) / tf    (3.25)

    Thus, the specified velocity must be between these limits or the motion is not possible. The portion of the trajectory between tf - tb and tf is found by symmetry considerations. The complete LSPB trajectory is given by

        q(t) = q0 + (α / 2) t²,                          0 ≤ t ≤ tb
        q(t) = (qf + q0 - V tf) / 2 + V t,               tb < t ≤ tf - tb      (3.26)
        q(t) = qf - (α tf²) / 2 + α tf t - (α / 2) t²,   tf - tb < t ≤ tf

    Figure (3.4) shows such an LSPB trajectory with q0 = 0, qf = 40, t0 = 0, tf = 1, and maximum velocity V = 60. In this case tb = 1/3. The velocity and acceleration curves are given in the same figure.


    Figure (3.4): Trajectory using LSPB
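    A minimal Python sketch of equations (3.23) and (3.26) for a single joint is given below (illustrative only, not the thesis software); the sign handling for motions in the negative direction is an implementation assumption, and the validity condition (3.24)-(3.25) on V and tf is assumed to hold.

```python
# Illustrative sketch: LSPB position profile of eq. (3.26) with blend time from eq. (3.23).
import numpy as np

def lspb(q0, qf, tf, V, t):
    """Position at time t for an LSPB profile from q0 to qf in tf seconds at cruise speed V."""
    sign = 1.0 if qf >= q0 else -1.0
    V = sign * abs(V)                               # cruise velocity in the direction of motion
    tb = (q0 - qf + V * tf) / V                     # blend time, eq. (3.23)
    a = V / tb                                      # blend acceleration, eq. (3.17)
    if t < tb:
        return q0 + 0.5 * a * t**2                  # parabolic ramp-up
    if t <= tf - tb:
        return (qf + q0 - V * tf) / 2.0 + V * t     # constant-velocity linear segment
    return qf - 0.5 * a * tf**2 + a * tf * t - 0.5 * a * t**2   # parabolic ramp-down

# Textbook example of Section 3.2: q0 = 0, qf = 40, tf = 1, V = 60  ->  tb = 1/3.
ts = np.linspace(0.0, 1.0, 11)
print([round(lspb(0.0, 40.0, 1.0, 60.0, t), 2) for t in ts])
```

    The same function applies joint by joint, e.g. with the base-joint constraints of Section 4.5 (q0 = 0 rad, qf = -1.6 rad, tf = 2.77 s, V = 1 rad/s).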

    3.3 ED-7220C motion planning

    The main task of this project is to send the robot end-effector from an initial position to a goal position along a minimum path while avoiding obstacles in the environment and respecting joint and torque limits. To achieve this, the RRT (Rapidly-exploring Random Tree) method has been used. The RRT method relies on a random or deterministic function to choose a sample from the workspace of the ED-7220C; this sample represents the position of the end-effector in xyz space, known as the C-space or state space. RRT evaluates whether a sample is in free space (no obstacle), determines nearby previously added free-space samples, and uses a simple local planner to try to connect to, or move toward, the new sample. This process builds up a graph or tree representing feasible motions of the robot.

    Sampling methods generally give up the resolution-optimal solutions of a grid search in exchange for the ability to find satisficing solutions quickly in high-dimensional state spaces. The samples are chosen to form a roadmap or search tree that quickly approximates the free space using fewer samples than would typically be required by a fixed high-resolution grid, where the number of grid points increases exponentially with the dimension of the search space. Most sampling methods are probabilistically complete: the probability of finding a solution, when one exists, approaches 100% as the number of samples goes to infinity.

    The RRT algorithm searches for a collision-free motion from an initial position of the end-effector to a goal position. Figure (3.6) shows the RRT algorithm applied to the ED-7220C, and Figure (3.7) shows the exploring phase of the basic RRT.


    An overview of how the ED-7220C end-effector moves from the initial position to the goal position along a minimum path while avoiding obstacles, and how the corresponding trajectory is generated, is shown in Figure (3.5).

    Figure (3.5): Trajectory generation and motion planning diagram (blocks: environment geometry, robot geometry, initial and goal end-effector positions, collision detection / obstacle distance computation, RRT search, solution path, and the LSPB trajectory planner producing the timed joint variables qi sent to the robot)


    Figure (3.6): RRT flowchart. The flowchart proceeds as follows: initialize the search tree T with the initial end-effector position Pinit; while the tree size is below the maximum Tmax, draw a position sample Psample from the workspace, look for the nearest position Pnearest in T to Psample, and employ a local planner to find a motion from Pnearest toward Psample to a new position Pnew. If the motion is collision-free, add Pnew to T with an edge from Pnearest to Pnew; if Pnew is a goal, return SUCCESS and the motion to Pnew, otherwise continue sampling. If the tree reaches Tmax without reaching a goal, return FAILURE.


    Figure (3.7): Basic RRT exploring phase of a planar 5 DOF robot arm
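    A minimal Python sketch of the basic RRT loop of Figure (3.6) over end-effector positions is given below (illustrative only; the planning actually used in this project is performed by MoveIt/OMPL, see Section 4.6). The step size, workspace bounds and the collision_free callback are assumptions of this sketch.

```python
# Illustrative sketch: basic RRT over xyz end-effector positions (Figure 3.6).
import math
import random

def rrt(p_init, p_goal, collision_free, step=2.0, max_nodes=2000,
        workspace=((-60.0, 60.0), (-60.0, 60.0), (0.0, 100.0))):
    """Return a list of xyz waypoints from p_init to (near) p_goal, or None on failure."""
    tree = {0: (tuple(p_init), None)}               # node id -> (position, parent id)
    for _ in range(max_nodes):
        p_sample = tuple(random.uniform(lo, hi) for lo, hi in workspace)
        # nearest node in the tree to the sample
        near_id, (p_near, _) = min(tree.items(),
                                   key=lambda item: math.dist(item[1][0], p_sample))
        dist = math.dist(p_near, p_sample)
        if dist == 0.0:
            continue
        # local planner: take a bounded step from p_near toward p_sample
        p_new = tuple(pn + step * (ps - pn) / dist for pn, ps in zip(p_near, p_sample))
        if not collision_free(p_near, p_new):
            continue
        new_id = len(tree)
        tree[new_id] = (p_new, near_id)
        if math.dist(p_new, p_goal) < step:         # goal region reached: backtrack the path
            path, node = [], new_id
            while node is not None:
                path.append(tree[node][0])
                node = tree[node][1]
            return list(reversed(path))
    return None                                     # FAILURE: tree reached its maximum size

# Example with a trivial collision checker (all of space free), using the initial and
# final end-effector positions reported in Section 4.5:
path = rrt((2.5, 0.0, 96.0), (0.0, -29.9, 14.7), collision_free=lambda a, b: True)
```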

    3.4 Conclusion

    The goal of this chapter was to choose a suitable trajectory planning method for our robot arm, namely the LSPB (linear segment with parabolic blends) method. We then introduced the RRT (rapidly-exploring random tree) method and applied it to our robot to achieve the desired motion of the arm while avoiding obstacles.

  • RESULTS & DISCUSSION

    In this chapter, the details of the practical and simulation results for the hardware design, 3D modeling, kinematics, and trajectory planning are presented and discussed.

    4.1 Hardware implementation

    To control the robot arm from a personal computer, a connection must be made between the robot and the PC. This interface connection is made through a microcontroller; however, the Arduino microcontroller alone is unable to drive the six beefy 24 V motors of the ED-7220C. For this reason, a power drive circuit has been designed and implemented to control the six motors. To drive the motors in both directions, three H-bridges were used: the first H-bridge controls the base and gripper, the second controls the shoulder and elbow, and the third is reserved for the wrist. In addition, three power supplies were designed to deliver 24 V to the motors using a variable power supply (LM-380).

    The Arduino Due digital controller was used to read the pulses from the encoder channels through its interrupt pins. The interrupt pins are faster than the normal digital pins and able to detect the rising edges of the channels, so no data is missed.

    The serial port of the Arduino was used to receive the joint position data coming from the graphical user interface (GUI), which is designed in Processing. Six PI digital controllers are programmed inside the Arduino to assert six control signals to the H-bridges using the PWM pins (Vss(max) = 3.3 V). A bridge of diodes was used to protect the controller from the reverse current from the motors.

    The overall system is introduced in the following figures:

    Figure (4.1): The overall system structure.


    4.2 Modeling the ED-7220C robot in ROS

    The ED-7220C robot arm was modeled in 3D in ROS using the Unified Robot Description Format (URDF). The URDF describes the robot as follows:

    - Geometry and inertia of the links, including the DH parameters.
    - Joint types (revolute, prismatic), maximum velocities, and upper and lower limits.
    - Collision detection and 3D visualization.

    The final 3D model is shown in Figure (4.2) and the link color identifiers are given in Table (4.1).

    Figure (4.2): The 3D model of ED-7220C in ROS.

    Table (4.1): Link colors reference

        Link     Base                  Shoulder    Elbow    Wrist     Gripper
        Color    Black1+Blue1+Green    Orange      Red      Black2    Blue2

    4.3 Forward kinematics

    Mathematical modeling and kinematic analysis of the ED-7220C robot arm were carried out in this study. The ED-7220C was mathematically modeled with the Denavit-Hartenberg (DH) method, and both the forward and inverse kinematics were applied.

    An initial configuration of the robot arm is given in Table (4.2); this configuration was obtained in the Forward Kinematic window (GUI) after moving the angle sliders. The GUI window is shown in Figure (4.3).

    Table (4.2): Initial configuration of ED-7220C.

        Angles (degrees)

        θ1     θ2      θ3     θ4      θ5
        0.0    90.0    0.0    90.0    0.0


    Figure (4.3): Forward Kinematic window for the initial configuration.

    The total transformation matrix of the initial configuration is of the form

        T_5^0 = [ 0    1    0    2.50
                  1    0    0    0.00
                  0    0    1    96.0
                  0    0    0    1    ]    (4.1)

    This matrix gives the initial configuration and orientation of the robot arm. The initial configuration is shown in the figure (4.4).

    Figure (4.4): Initial configuration of ED-7220C.

    From the matrix (4.1), we find that the (x, y, and z) position of the end-effector is equal to

    (2.50, 0.00, and 96.0) (cm).

    Changing the joint positions in the GUI window by moving the angle sliders gives a configuration other than the initial one; the GUI for the selected joint positions is shown in Figure (4.5).


    Figure (4.5): Forward Kinematic window for the selected configuration at position (0.03, -41.58, 19.19).

    The total transformation matrix T_5^0 between the base of the robot arm and the end-effector is of the form

        T_5^0(final) = [ 0        1    0.00     0.03
                         0.822    0    0.569   -41.58
                         0.569    0    0.822    19.19
                         0        0    0        1     ]    (4.2)

    From the matrix (4.2), we find that the position of the end-effector equals (0.03, -41.58, 19.19) cm. The selected configuration after moving the angle sliders is shown in Figure (4.6).

    Figure (4.6): Robot position at (0.03, -41.58, 19.19) in ROS.


    The results of the total transformation matrix T_5^0 were computed and compared with the measured ones. Table (4.3) summarizes the computed and measured results.

    Table (4.3): Computed and measured results

        Coordinate    Computed value (cm)    Measured value (cm)    Absolute error (cm)
        x             0.03                   1.00                   0.97
        y             -41.58                 -42.5                  0.92
        z             19.19                  20.5                   1.31

    The measured (x, y, z) coordinates of the end-effector position are compared with the computed results in Table (4.3). The error in the z direction is larger than the error in the x or y direction. This error is due to two factors:

    - The weight of the arm, which appears as an external torque disturbance on the shoulder, elbow and wrist.
    - The gearing system of the shoulder, elbow and wrist, which introduces backlash that affects the precision of the arm.

    From equation (2.17), it is noticed that the sum of these errors yields a large error in the height (z axis). The dead zones found by experiment are summarized in the following table:

    Table (4.4): Motors dead zone.

        Link        Dead zone (°)
        Shoulder    0.7
        Elbow       0.5
        Wrist       0.5

    4.4 Control design:

    In this work, we have opted for a Proportional plus Integral controller (PI) as cascade compensation. PI controllers are tuned to control the position of the each joint independently following this experimental iterative technique:

    1. Set the Integral and derivative gains to zero. 2. Tune the Proportional gain (Kp) to improve the transient response, neglecting the steady state

    error. 3. Adjust the integral gain (KI) to remove the steady state error which may slightly deteriorate

    the transient performance 4. Redo the previous steps(2-4) until reaching an satisfactory response.

    The satisfactory performance is a zero steady state position error (to the limit of the

    systems characteristics) and small percentage overshoot (critically damped) response from the robot arm controllers. The figure (4.7) and (4.8) show the practical (actual) response of the BASE and WRIST joints acquired from Arduino.
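    A small Python helper of the kind that could be used to judge each tuning iteration is sketched below (illustrative only; the thesis evaluated the responses directly from the logged Arduino data, and the short record used here is made up for the example).

```python
# Illustrative helper: percentage overshoot and steady-state error of a logged step response.
def step_response_metrics(times, positions, setpoint):
    peak = max(positions)
    overshoot_pct = max(0.0, (peak - setpoint) / abs(setpoint) * 100.0)
    steady_state_error = abs(setpoint - positions[-1])   # error at the end of the record
    return overshoot_pct, steady_state_error

# Example with an invented short record for a 50-degree setpoint:
t = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
y = [0.0, 20.0, 45.0, 52.0, 50.5, 50.0]
print(step_response_metrics(t, y, setpoint=50.0))   # -> (4.0, 0.0)
```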


    Setpoint = 50°: (Kp = 3; Ki = 0.05)

    Figure (4.7): The actual response of the base joint to a 50° setpoint.

    Setpoint = 100°: (Kp = 2; Ki = 0.1)

    Figure (4.8): The actual response of the wrist joint to a 100° setpoint.

    The shoulder and wrist PI controllers operate alongside a torque disturbance estimator to reject the disturbance torque on the joints. Figures (4.10) and (4.11) illustrate the change of the disturbance as a function of the link position.


    Figure (4.10): Shoulder torque rejection (case 1), from 110° to 90°.

    Figure (4.11): Shoulder torque rejection (case 2), from 90° to 110°.

    For the same operating region (90°-110°), two estimators and torque rejecters are used, since the disturbance also depends on the direction of movement. For θ2 = 100°:

    - In Figure (4.10), the torque disturbance is found to be T1 = 0.3108, corresponding to the (110° to 90°) direction.
    - In Figure (4.11), the torque disturbance is found to be T2 = 0.1471, corresponding to the (90° to 110°) direction.

    Cancelling the forces acting on the arm by introducing torque cancellation reduces the complexity of the controllers, and classical PI controllers become sufficient. T1 and T2 are normalized quantities.


    4.5 Trajectory generation

    Straight-line motions are the most common in industrial applications; however, movement along a line is usually obtained by specifying discrete-time joint displacements at a constant time rate. The velocity and acceleration of the points can then be calculated from numerical approximations of the time derivatives. Several methods can be used to compactly describe the trajectory data, such as cubic, quintic and LSPB trajectories.

    The Linear Segment Parabolic Blend is appropriate when a constant velocity is desired along a portion of the path. The LSPB trajectory is such that the velocity is initially ramped up to its desired value and then ramped down when it approaches the goal position.

    The robot arm will be moved to the final position and when that happens we see the LSPB trajectory curves as shown in Figure (4.12). The figure (4.13) is divided into three parts (A, B, and C) and shows the relation between the angle, velocity and acceleration with time.

    A) LSPB trajectory curve of the Base joint position.

    B) LSPB trajectory curve of the Base joint velocity.


    C) LSPB trajectory curves of the Base joint acceleration.

    Figure (4.12): LSPB trajectory curves of the base

    The LSPB of the base joint is constrained as follows: the initial position q0 = 0.0 rad corresponds to t0 = 0.0 s; the final position is qf = -1.6 rad at tf = 2.77 s; the maximum velocity is V = 1 rad/s; the blend time is tb = 1.15 s.

    The acceleration in the ramp-up is found to be:

        α = V / tb = 1 / 1.15 = 0.869 rad/s²    (4.3)

    The complete LSPB trajectory of the base joint is given by:

        q(t) = 0.4345 t²,                         0 ≤ t ≤ 1.15 s
        q(t) = -2.185 + t,                        1.15 s < t ≤ 1.62 s    (4.4)
        q(t) = -4.933 + 2.407 t - 0.4345 t²,      1.62 s < t ≤ 2.77 s

    LSPB is applied to the other joints of the robotic arm as illustrated below.


    Shoulder joint

    A) LSPB trajectory curve of the shoulder joint position.

    B) The LSPB trajectory curve of the shoulder joint velocity.

    C) LSPB trajectory curve of the shoulder joint acceleration.

    Figure (4.13): LSPB trajectory curves of the shoulder.


    Elbow joint

    A) LSPB trajectory curve of the elbow joint position.

    B) LSPB trajectory curve of the elbow joint velocity.

    C) LSPB trajectory curve of the elbow joint acceleration.

    Figure (4.14): LSPB trajectory curves of the elbow.


    Wrist joint

    A) LSPB trajectory curve of the Wrist joint position.

    B) LSPB trajectory curve of the Wrist joint velocity.

    C) LSPB trajectory curve of the Wrist joint acceleration.

    Figure (4.15): LSPB trajectory curves of the wrist.


    From the graphs, all the joint trajectories share the same final time tf = 2.77 s and blend time tb = 1.15 s.

    The blend time is set by the joint that has the largest position error (initial angle minus final angle); this can be noticed in the elbow joint, where the error is found to be Δq = 1.745 rad.

    The waypoints marked in the joint trajectory graphs are summarized in the table below; they are needed to execute the generated path on the real robot.

    Table (4.5): Generated trajectory waypoints

        Waypoint    θ1      θ2      θ3      θ4      θ5      X        Y        Z
                    (rad)   (rad)   (rad)   (rad)   (rad)   (cm)     (cm)     (cm)
        1            0.00    0.00    0.00    0.00    0.00    2.50     0.00    96.00
        2           -0.07   -0.04   -0.08   -0.02   -0.07    8.21    -0.58    95.67
        3           -0.14   -0.09   -0.16   -0.04   -0.14   13.81    -1.99    94.66
        4           -0.22   -0.13   -0.24   -0.06   -0.22   19.00    -4.15    93.01
        5           -0.29   -0.17   -0.32   -0.08   -0.29   23.75    -7.01    90.71
        6           -0.36   -0.22   -0.40   -0.10   -0.36   27.84   -10.45    87.82
        7           -0.43   -0.26   -0.48   -0.12   -0.43   31.14   -14.28    84.42
        8           -0.50   -0.31   -0.56   -0.14   -0.50   33.64   -18.47    80.48
        9           -0.57   -0.35   -0.63   -0.16   -0.57   35.20   -22.76    76.18
        10          -0.65   -0.39   -0.71   -0.18   -0.65   35.88   -27.05    71.48
        11          -0.72   -0.44   -0.79   -0.20   -0.72   35.67   -31.16    66.48
        12          -0.79   -0.48   -0.87   -0.22   -0.79   34.60   -34.85    61.38
        13          -0.86   -0.52   -0.95   -0.24   -0.86   32.74   -38.11    56.10
        14          -0.93   -0.57   -1.03   -0.26   -0.93   30.19   -40.73    50.82
        15          -1.01   -0.61   -1.11   -0.28   -1.01   27.09   -42.65    45.59
        16          -1.08   -0.65   -1.19   -0.30   -1.08   23.57   -43.78    40.49
        17          -1.15   -0.70   -1.27   -0.32   -1.15   19.82   -44.05    35.62
        18          -1.22   -0.74   -1.35   -0.33   -1.22   15.92   -43.52    31.04
        19          -1.29   -0.79   -1.43   -0.35   -1.29   12.07   -42.15    26.82
        20          -1.36   -0.83   -1.51   -0.37   -1.36    8.39   -40.00    23.04
        21          -1.44   -0.87   -1.59   -0.39   -1.44    5.04   -37.18    19.72
        22          -1.51   -0.92   -1.67   -0.41   -1.51    2.16   -33.80    16.96
        23          -1.58   -0.96   -1.75   -0.43   -1.58   -0.25   -29.90    14.71

    Using the robot total transformation (forward kinematics), the waypoints are mapped from the joint space (θ) to the Cartesian space (XYZ) for 3D visualization. Figure (4.16) shows the resulting path of the arm end-effector.


    Figure (4.16): End effector generated path in XYZ plane.

    The first point in the path corresponds to the initial position of the robotic arm (X = 2.5 cm, Y = 0.0 cm and Z = 96 cm), and the final position is (X = 0.0 cm, Y = -29.9 cm and Z = 14.71 cm).

    Executing the trajectory point-to-point on the real robot with a large number of waypoints reduces the deviation from the generated path.

    5.6 Motion planning

    In this part, the Robot Operating System (ROS), Rviz and the MoveIt planning package (OMPL) are used to perform the motion planning task as follows:

    Obstacle description

    ROS uses the Unified Robot Description Format (URDF) to introduce a 3D environment with obstacles.

    Rviz package

    Rviz is a 3D visualization tool integrated in ROS; it uses the robot description in the URDF to build and display the 3D model of the robot and the planning scene. Physical parameters such as gravity and friction are not handled by Rviz itself but by a physics simulator such as Gazebo.

    Moveit package

    MoveIt is a planning package that provides different planning algorithms, such as RRT and PRM. MoveIt treats the robotic arm as a single move group from the base to the end-effector and avoids link self-collision by defining an allowed collision matrix. MoveIt has to be connected to Rviz so that the selected planning algorithm can detect the obstacles in the scene and generate the path.
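    As an illustration of how such a plan can be requested programmatically, the following minimal Python sketch uses the moveit_commander interface. The move group name "arm" and the joint-space goal are assumptions made for illustration and are not taken from the project's actual configuration.

    ```python
    import sys
    import rospy
    import moveit_commander

    # Minimal MoveIt planning sketch (assumes a configured MoveIt setup
    # with a move group named "arm" for the ED-7220C model).
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("ed7220c_planning_demo")

    group = moveit_commander.MoveGroupCommander("arm")   # assumed group name
    group.set_planning_time(5.0)

    # Request a plan from the current state to an assumed joint-space goal (rad).
    group.set_joint_value_target([-1.6, -0.96, -1.75, -0.43, -1.58])
    plan = group.plan()          # compute a plan (kept here for inspection)

    group.go(wait=True)          # plan again and execute on the connected driver
    group.stop()
    moveit_commander.roscpp_shutdown()
    ```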

    Figure (4.17) shows the 3D model of the robotic arm in black; the white box is treated as an obstacle.


    Figure (4.17): Planned scene with obstacle.

    A pick-and-place task without obstacles is launched in this section. In Figure (4.18), the pick position is shown in green and the place position in orange.

    Figure (4.18): Pick and place position in Rviz.

    The planned path is shown in Figure (4.19); the black trails are the waypoints of the plan.

    Figure (4.19): Waypoints of the planned path shown as black trails (without obstacle).


    In this pick-and-place application, as shown in Figure (4.19), only the base joint moves while the other joints remain fixed, since the pick height equals the place height and no obstacles are present in the environment.

    Figure (4.20): End-effector curve of the planned path.

    Pick and place task with obstacle

    In the same pick-and-place configuration, a box of dimensions (x = 46.0 cm, y = 18.0 cm, z = 46.0 cm) is introduced as an obstacle at the position (X = -42 cm, Y = 0.0 cm, Z = 23 cm) of its center of mass. Using the RRT algorithm, the collision with the obstacle is detected and a minimum path is constructed from the pick position to the place position, as shown in Figure (4.21), where the black trails are the waypoints of the RRT algorithm.
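    Continuing the previous sketch (ROS node already initialized), the obstacle and an RRT-based OMPL planner could be set up as follows; the planning frame "base_link" and the object name are illustrative assumptions rather than the project's actual configuration.

    ```python
    import rospy
    import moveit_commander
    from geometry_msgs.msg import PoseStamped

    scene = moveit_commander.PlanningSceneInterface()
    rospy.sleep(1.0)                       # give the planning scene time to connect

    # Obstacle box from the text: 46 x 18 x 46 cm centered at (-0.42, 0.0, 0.23) m.
    box = PoseStamped()
    box.header.frame_id = "base_link"      # assumed planning frame
    box.pose.position.x = -0.42
    box.pose.position.y = 0.0
    box.pose.position.z = 0.23
    box.pose.orientation.w = 1.0
    scene.add_box("obstacle_box", box, size=(0.46, 0.18, 0.46))

    # Select an RRT-based planner from OMPL before planning the place motion.
    group = moveit_commander.MoveGroupCommander("arm")   # assumed group name
    group.set_planner_id("RRTConnectkConfigDefault")     # OMPL RRT-Connect
    ```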

    Figure (4.21): The RRT planned path with obstacle avoidance


    The waypoints generated by the RRT algorithm are used by the LSPB to generate the joint trajectories, as described in the trajectory generation and motion planning diagram in Figure (4.5). The end-effector generated path is visualized in 3D in Figure (4.22).

    Figure (4.22): End-effector curve of the planned path with obstacle avoidance.

    The generated path is executed successfully on the real robot using the point-to-point technique, where the waypoints are sent over the serial port to the Arduino. Between two consecutive waypoints, a time delay (Td) is selected experimentally to obtain a smooth path and avoid discontinuous motion.
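    A minimal sketch of this point-to-point execution, written here with the pyserial library, is shown below. The serial port, baud rate, delay value and waypoint message format are assumptions, since the actual protocol is defined by the Arduino firmware and the Processing GUI.

    ```python
    import time
    import serial  # pyserial

    # Assumed port, baud rate and message format; adapt to the Arduino firmware.
    PORT, BAUD = "/dev/ttyACM0", 115200
    TD = 0.25  # experimentally tuned delay Td between waypoints (s)

    waypoints = [
        (0.00,  0.00,  0.00,  0.00,  0.00),
        (-0.79, -0.48, -0.87, -0.22, -0.79),
        (-1.58, -0.96, -1.75, -0.43, -1.58),
    ]

    with serial.Serial(PORT, BAUD, timeout=1) as link:
        time.sleep(2.0)                       # wait for the Arduino to reset
        for wp in waypoints:
            msg = ",".join(f"{q:.3f}" for q in wp) + "\n"
            link.write(msg.encode("ascii"))   # send one joint-space waypoint
            time.sleep(TD)                    # delay Td for smooth motion
    ```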


  • Conclusion

    In this project report, the ED-7220C robot arm was studied. A complete mathematical model of the ED-7220C robot was developed, including a full kinematic analysis of the arm: the forward and inverse kinematics equations were derived using the Denavit-Hartenberg notation. A manipulator position control system was designed based on independent joint control; each joint control system consists of a cascade PI compensator plus a feedforward disturbance-cancellation controller where necessary. Using the Robot Operating System, the ED-7220C robot was modeled in 3D. The 3D model was then used to perform path planning for the pick-and-place task: the paths were generated by sampling-based algorithms in environments with obstacles and were then converted into smooth trajectories using the Linear Segment with Parabolic Blend. A Graphical User Interface (GUI) was developed for testing the motion characteristics of the robot arm, and a physical interface between the ED-7220C robot arm and the GUI was designed and built. A comparison between the kinematic solutions of the virtual arm and the physical motion of the real arm was carried out. The execution of the planned trajectories on the real robot was also tested, and the arm followed the defined path between the pick and place positions in the presence of obstacles. Future work could focus on different topics, such as the development of other types of controllers, for example coupled control based on a study of the dynamics of the ED-7220C.



  • Appendix


    APPENDIX A: ROBOT HARDWARE AND SOFTWARE

    A.1 Hardware

    A.1.1 H-bridge/Drive circuit

    The L298 H-bridge is an integrated circuit used for electronic control of high-current loads of around 4 amperes. Such circuits are often used in robotics and other applications to allow DC motors to run forwards and backwards.

    Figure (A.1): L298 IC H-bridge

    Most IC H-bridges can control two motors. The H-bridge arrangement is generally used to reverse the polarity/direction of the motor, but it can also be used to 'brake' the motor, where the motor comes to a sudden stop because its terminals are shorted, or to let the motor 'free run' to a stop, where the motor is effectively disconnected from the circuit. The following figure summarizes this operation, with S1-S4 corresponding to the switches in the diagram.

    Figure (A.2): H-bridge structure and its working principle
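    The switching combinations can be summarized in a short sketch; the S1-S4 roles assumed here (S1/S2 on the high side, S3/S4 on the low side) are only an assumption about the labeling in Figure (A.2) and should be checked against the actual diagram.

    ```python
    # H-bridge switching sketch.  Assumed layout: S1/S2 are the high-side
    # switches and S3/S4 the low-side switches (must match Figure (A.2)).
    # 1 = switch closed, 0 = switch open.
    H_BRIDGE_STATES = {
        "forward":  (1, 0, 0, 1),   # current flows one way through the motor
        "reverse":  (0, 1, 1, 0),   # current flows the opposite way
        "brake":    (0, 0, 1, 1),   # motor terminals shorted through the low side
        "free_run": (0, 0, 0, 0),   # motor disconnected, coasts to a stop
    }

    def switch_states(command):
        """Return the (S1, S2, S3, S4) states for a motor command."""
        return H_BRIDGE_STATES[command]

    print(switch_states("forward"))   # (1, 0, 0, 1)
    ```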

    In our drive circuit, three H-bridge ICs were used to control the six motors of the ED-7220C. The drive circuit was designed using the Proteus software and then printed as a double-sided board because of the complexity of the circuit. Figure (A.3) illustrates the circuit design in Proteus and its 3D PCB.


    Figure (A.3): The circuit design in Proteus and its 3D PCB.

    A.1.2 Arduino Due Microcontroller

    Microcontrollers are designed to interface and interact with electrical/electronic devices, sensors, actuators and other hardware in order to automate systems, and they are generally embedded directly into the product or process for automated decision-making. The choice of the microcontroller is one of the important steps in this project; the Arduino Due was used and found to be well suited to the task.


    Figure (A.4): Arduino Due microcontroller

    A.1.2.1 Arduino Due Summary

    Microcontroller                              AT91SAM3X8E
    Operating Voltage                            3.3 V
    Input Voltage (recommended)                  7-12 V
    Input Voltage (limits)                       6-16 V
    Digital I/O Pins                             54 (of which 12 provide PWM output)
    Analog Input Pins                            12
    Analog Output Pins                           2 (DAC)
    Total DC Output Current on all I/O lines     130 mA
    DC Current for 3.3V Pin                      800 mA
    DC Current for 5V Pin                        800 mA
    Flash Memory                                 512 KB, all available for user applications
    SRAM                                         96 KB (two banks: 64 KB and 32 KB)
    Clock Speed                                  84 MHz
    Length                                       101.52 mm
    Width                                        53.3 mm
    Weight                                       36 g

    A.1.3 ED-7220C Robot Arm

    The ED-7220C has six DC motors of type DME38B50G rated at 24 V and equipped with a gearbox with a ratio of 50:1. The motors have quadrature encoders that are used as feedback sensors to the Arduino, updating the controller with the current position. Table (A.5) lists the encoder readings of the joints when moved from limit to limit.

    Table (A.5): Readings of the joint encoders

    Joint           Base       Shoulder   Elbow      Wrist      Gripper
    Encoder counts  0-15000    0-3700     0-3400     0-4500     0-1500


    The workspace of the ED-7220C arm expresses its ability to reach a given region; it is defined as the space that the robot can reach. Given the range of motion of each joint of the robot, the workspace can be determined. The workspace of the ED-7220C is illustrated in Figure (A.5) and the range of motion in Table (A.6).

    Figure (A.5): a. Workspace in XY b. Workspace in XY c. Workspace in 3D

    Table (A.6): ED-7220C range of motion (degrees)

    Joint   Base          Shoulder   Elbow       Wrist pitch   Wrist roll
    Range   -150 / +150   35 / 110   -10 / +150  -130 / +130   -180 / +180
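    Combining Table (A.5) and Table (A.6), an encoder reading can be converted to a joint angle by linear interpolation, as in the minimal sketch below; it assumes the count increases linearly from the lower to the upper joint limit, which is an assumption about the encoder calibration rather than something stated in the report.

    ```python
    # Linear mapping from encoder counts to joint angle (degrees),
    # using the count ranges of Table (A.5) and the limits of Table (A.6).
    # Assumes counts grow linearly from the lower to the upper limit.
    JOINTS = {
        #  name:     (count_max, angle_min, angle_max)
        "base":     (15000, -150.0, 150.0),
        "shoulder": (3700,    35.0, 110.0),
        "elbow":    (3400,   -10.0, 150.0),
        "wrist":    (4500,  -130.0, 130.0),
    }

    def counts_to_degrees(joint, counts):
        """Convert an encoder reading to a joint angle in degrees."""
        count_max, angle_min, angle_max = JOINTS[joint]
        return angle_min + (counts / count_max) * (angle_max - angle_min)

    print(counts_to_degrees("base", 7500))   # mid-travel, about 0 degrees
    ```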

    A.2 Software

    The software environment can be divided into two parts: the Arduino microcontroller program and the Processing program. In the Arduino program, we wrote the code that interfaces with and controls the ED-7220C arm. The Processing program consists of the serial communication code and the graphical user interface (GUI) used to enter commands. The GUI for ED-7220C robotic arm control was written in Processing, a powerful software package that allows interactive programs with 2D, 3D or PDF output. The program consists of a forward kinematics window (GUI), as shown in Figure (A.7).


    Figure (A.7): Forward Kinematic Window

