Configuration of Slave and Endoscope in Surgical Robot based on Brain Activity Measurement

Satoshi Miura, Yo Kobayashi, Masatoshi Seki, Yasutaka Nakashima, Takehiko Noguchi, Yuki Yokoo, and Masakatsu G. Fujie
Faculty of Science and Engineering, Waseda University, Tokyo, Japan
[email protected]

Kazuya Kawamura
Graduate School and Faculty of Engineering, Chiba University, Tokyo, Japan
[email protected]

Abstract—Surgical robots have undergone considerable improvement in recent years, but their intuitive operability, which represents user interoperability, has yet to be quantitatively evaluated. We therefore propose a method that measures brain activity to determine intuitive operability, so that robots can be designed to be intuitively operable. The objective of this paper is to determine the angle and radius between the endoscope and the manipulator that allow users to perceive the manipulator as part of their body. In the experiments, subjects moved a hand controller to position the tip of a virtual slave manipulator on a target in a surgical simulator while their brain activity was measured with a brain imaging device. The experiment was carried out a number of times with the virtual slave manipulator configured in a variety of ways. The results show that brain activation is significantly greater for a particular slave manipulator configuration. We conclude that the hand–eye coordination of the body image and that of the robot should be closely matched when designing a robot with intuitive operability.

Keywords—medical robotics; surgical robot; intuitive operability; user interface; brain activity

I. INTRODUCTION

A. Background

Robotic surgery has undergone considerable improvement in recent years [1]. Such surgery has the potential to reduce scarring and shorten recovery times [2].

The method of operation in surgical robots often comprises a master–slave system in which the surgeon controls the master manipulator while the slave manipulator performs the operation instead of the surgeon's own arm. It has often been stated that robotic surgery is more difficult than open surgery because the surgeon needs to ascertain the internal condition and operate indirectly using a monitor [1]. Therefore, robotic surgical machines need intuitive operability, as if the surgeon were operating directly as in open surgery. This sense of intuition within an indirect surgical approach is known as "tele-presence" [3].

In related work, intuitive operability has been evaluated using a working score, such as the time to complete a task, average speed, movement curvature, or a combination of these under test conditions [4]. However, the quantitative conditions that allow a user to control a robot as expected have not been clarified, because there is no defined method for measuring user interoperability. As such, developers have created surgical robots based on vague motivations, thereby overlooking the possible realization of intuitive operability.

B. Motivation

We propose a method that measures user interoperability via brain activity recorded with a brain imaging device [5]. The resulting brain activation can be compared with the recognition functions reported to reflect user introspection. We focus on medical robotics because greater "tele-presence" is required owing to the need for dexterous manipulation in surgery.

Our goal is to develop a surgical robot system that allows the surgeon to perceive the manipulator as part of his body, evaluated by measuring brain activity as shown in Fig. 1. A master–slave system is too indirect to allow users to move as expected. The most direct feedback system is one in which the robot is an integral part of the surgeon's body. We therefore define intuitive operability by how strongly the user perceives the manipulator to be part of his body. We consider the brain measurement data to be useful in the design of a robot with intuitive operability. The originality of this study lies in the proposed method of using brain imaging techniques to determine the parameters for the design of a robot with intuitive operability.

This work was supported in part by the Global COE (Centers of Excellence) Program "Global Robot Academia" from the Ministry of Education, Culture, Sports, Science and Technology in Japan, and in part by Grant Scientific Research (A) (23240081) and Grant Scientific Research (B) (22360108), Japan.

Figure 1. Proposed method. Brain activity of the subject is measured as he controls the hand-controller to position the tip of the virtual arm on the target. Surgical robot design is then evaluated using this brain activity measurement.



C. Objective

In this study, we investigate hand–eye coordination that allows the user to regard the manipulator as part of his own body. Typically, hand–eye coordination is influenced by the difference between the hand positions of the human and robot relative to the eye. In endoscopic surgery, the hand–eye position is referred to as “triangulation” [6]. Moreover, as indicated by earlier reports, hand–eye coordination has an effect on the difference between visual and somatic feedback [7] [8] [9].

The objective of this study is to clarify the appropriate hand-to-eye position based on the user's recognition function. We define the hand-to-eye position by the angle and radius between the endoscope and the position of the arm (Fig. 2). The angle determines the point from which the arm is viewed, while the radius represents how much of the arm is shown. Because the user substitutes the endoscope and robot arm for his own eyes and arm, it is the angle and radius between the endoscope and robot arm, rather than between his eyes and arm, that determine to what extent the human and robot match.

More specifically, we propose an angle and radius that make it easier to perceive the manipulator as being part of the user’s body, based on the brain activity measured by the brain imaging device while the user controls the hand-controller to position the tip of the virtual arm on the target in the robotic surgical simulator. Brain activity increases as the user’s perception that the manipulator is part of his body grows, and this can be used to determine whether the user can control the robot intuitively.

II. METHOD

A. Experimental Setup

We used a brain imaging device to measure the brain activity of the subjects while they controlled the virtual arm via a hand controller in the surgical simulator, as shown in Fig. 1.

1) Brain Imaging Device – Optical Topography

For the brain imaging device, we used optical topography (OT) because OT equipment is reasonably compact and its measurement is relatively non-invasive, which enables the monitoring of human cortical activity in a variety of experimental tasks, such as those requiring body movement [10]. OT is a relatively new brain imaging technique that measures the relative change in hemoglobin concentration [11]. Moreover, changes in the concentrations of oxygenated hemoglobin (oxy-Hb) and deoxygenated hemoglobin (deoxy-Hb) can be measured separately. The more activated a brain region is, the higher the oxy-Hb concentration it shows.

A 22-channel OT instrument (ETG-4000, Hitachi Medical Co.) generating two wavelengths of OT light (635 and 830 nm) was used to measure temporal changes in the concentrations of oxy-Hb and deoxy-Hb with a temporal resolution of 0.1 s.

We used a 3 × 5 matrix of photodiodes consisting of eight light transmitters and seven receivers for the measurement, as shown in Fig. 3. The blood oxygen level was measured in a 30-mm area between each transmitter and receiver pair. Thus, the 15 photodiodes formed 22 measurement channels. These photodiodes were attached to a flexible silicon frame and placed over the subject’s parietal area. The photodiodes covered a 9 × 15 cm area of the scalp, including Pz, according to the international 10/20 system [12].
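To illustrate how the 22 channels arise from the 3 × 5 optode grid, the short C sketch below enumerates the adjacent transmitter–receiver pairs of an alternating (checkerboard) layout; the checkerboard assignment is an assumption made for illustration and is not necessarily the exact ETG-4000 arrangement.

/* Sketch: count measurement channels formed by adjacent optode pairs
 * in a 3 x 5 alternating transmitter/receiver grid (assumed layout). */
#include <stdio.h>

#define ROWS 3
#define COLS 5

int main(void)
{
    /* Assume a checkerboard assignment: (row + col) even -> transmitter,
     * odd -> receiver. This yields 8 transmitters and 7 receivers, and
     * every horizontally or vertically adjacent pair mixes one of each. */
    int channels = 0;
    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++) {
            /* Each adjacent transmitter-receiver pair (30 mm apart)
             * forms one measurement channel; count each pair once. */
            if (c + 1 < COLS) channels++;   /* neighbour to the right */
            if (r + 1 < ROWS) channels++;   /* neighbour below        */
        }
    }
    printf("optodes: %d, channels: %d\n", ROWS * COLS, channels); /* 15, 22 */
    return 0;
}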

The brain area we measured was the intraparietal sulcus (IPS), which is located on the lateral surface of the parietal lobe and consists of an oblique and a horizontal section. The IPS contains a series of functionally distinct subregions that have been intensively investigated using both single-cell neurophysiology in primates [13] [14] and human functional neuroimaging [15]. The principal functions of this area are related to perceptual-motor coordination (for directing eye movements and reaching) and visual attention [16]. To identify which channel on the subject's head corresponds to this brain area, we used a 3D digitizer and the NIRS-SPM software, which revealed that the IPS lies beneath channel 6. The 3D digitizer was used to measure where each probe point is located on the head, while the NIRS-SPM software for MATLAB registered the digitizer data to a standard brain model. Our evaluation yielded channel 6, and this channel was not affected by the other channels. That is, the oxy-Hb concentration in channel 6 reflects activity in the IPS, which indicates how easily the user perceives the virtual arm as part of his body.

2) Robotic Surgery Simulator

The robotic surgery simulation was presented on a 19-inch LCD monitor (Dell 1905FPe) with a pixel resolution of 1280 × 1024 and a vertical refresh rate of 60 Hz.

Figure 2. Objective parameters. The hand-to-eye position is determined by the angle and radius between the endoscope and position of the arm, which are the objective parameters in this study.

Figure 3. Cap position and arrangement of the 15 photodiodes and location of the 22 measurement channels (right). The red and blue squares show the OT light transmitters and receivers, respectively. The numbers indicate the channels. We evaluated the oxy-Hb concentration for channel 6 during a single session.



The monitor could not display 3D images because wearing 3D glasses would have annoyed the subjects, causing their brain activity to be influenced by other factors. The time course of the stimulus presentation was controlled by a PC. In the experiment, the subject observed the monitor from a viewing distance of 600 mm with his head resting on a chin-rest.

The simulator displays a green cube and the slave manipulator coupled with the hand controller against a black background. Black was chosen for the background because this color places the least strain on human eyes. The green cube is placed randomly in the same plane when touched by the tip of the virtual slave manipulator. Green is used for the cube because this color stands out best against the black background.

We implemented the robotic surgical simulator (Fig. 4) in C using the Open Graphics Library (OpenGL). The virtual slave manipulator is controlled by the hand controller, a PHANTOM-Omni® [18]. The virtual slave manipulator has 3-DOF: rotation with 2-DOF (yaw and roll) and direct action with 1-DOF (z-axis). The tip of the virtual slave manipulator corresponds with the tip of the hand controller. The mechanism for the virtual manipulator is illustrated in Fig. 4. No haptic information was fed back to the subjects during the experiments.
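As a rough illustration of the manipulator kinematics, the following C sketch computes a plausible tip position from the yaw, roll, and translation values; the axis assignments and the role of the link length l are assumptions, since the exact DH parameters are not specified in the text.

/* Plausible forward-kinematics sketch for the 3-DOF virtual slave arm
 * (yaw psi, roll phi, translation d along the shaft), assuming the shaft
 * pivots about the base and a distal link of length l rolls about the
 * shaft axis. Not the paper's actual DH parameters. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

vec3 slave_tip(double psi, double phi, double d, double l)
{
    /* Tip in the shaft frame: advance d along the shaft (+z), with the
     * distal link of length l rotated by roll phi about the shaft axis. */
    double sx = l * cos(phi);
    double sy = l * sin(phi);
    double sz = d;

    /* Yaw the whole shaft about the base's vertical (y) axis. */
    vec3 tip = {
        .x =  cos(psi) * sx + sin(psi) * sz,
        .y =  sy,
        .z = -sin(psi) * sx + cos(psi) * sz,
    };
    return tip;
}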

During the experiments involving all the arm configurations, the hand controller was positioned at the same location. In addition, the subjects used the same grip, similar to holding a pen, to control the hand controller. By varying only the arm position, we were able to evaluate the relation between hand–eye coordination and brain activity.

B. Experimental Conditions

1) Subjects

Four healthy adults (three males and one female; mean age 23.25 years, range 22-25 years; three right-handed and one left-handed) participated in the experiment. All had normal or corrected-to-normal vision. The subjects were informed about the measurement of their brain activity and the purpose of the experiment, and informed consent was obtained from all of them. The experiments were conducted in accordance with the Declaration of Helsinki.

2) Visuomotor Task

In the experiment, the subject controlled the hand controller, a PHANTOM-Omni®, to position the tip of the virtual manipulator repeatedly on the target in the surgical simulator. The target was a green box that reappeared randomly in the same plane when touched by the tip of the manipulator. The target appeared in the same plane because the monitor could not display 3D images. The experiments thus involved the subjects repeatedly positioning the tip on a randomly placed target.
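The target-respawn rule can be sketched in C as follows; the structure names, touch threshold, and workspace bounds are illustrative assumptions rather than values from the actual simulator.

/* Sketch of the target-respawn rule: when the manipulator tip touches the
 * green box, the box reappears at a random position in the same plane.
 * Names and thresholds are illustrative assumptions. */
#include <math.h>
#include <stdlib.h>

typedef struct { double x, y, z; } vec3;

static double frand(double lo, double hi)
{
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

/* Reposition the box within an assumed square workspace at constant z. */
static void respawn_target(vec3 *box, double half_extent, double plane_z)
{
    box->x = frand(-half_extent, half_extent);
    box->y = frand(-half_extent, half_extent);
    box->z = plane_z;                 /* target stays in the same plane */
}

/* Called every simulation frame with the current tip position. */
void update_target(const vec3 *tip, vec3 *box,
                   double touch_radius, double half_extent)
{
    double dx = tip->x - box->x;
    double dy = tip->y - box->y;
    double dz = tip->z - box->z;
    if (sqrt(dx * dx + dy * dy + dz * dz) < touch_radius)
        respawn_target(box, half_extent, box->z);
}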

At the beginning of a single measurement session, the tip of the hand-controller's stylus was used to set the base position and was synchronized with the tip of the virtual arm, which was placed behind the green box.

The visuomotor task in which a subject positions the arm tip on the target is appropriate for this study because touch allows a user to perceive the boundary between what is internal and external to the body, providing the greatest opportunity to perceive his own body [16] [19] [20]. Making the users repeatedly touch the target stimulates perception of their bodies, and we can measure how strongly a user perceives the manipulator as being part of his body.

3) Virtual Arm Setting

We conducted tests using a total of nine different virtual arm configurations, consisting of combinations of three angles and three radius scales. The three angles were -90°, 0°, and 90°, and the three radius scales were ×0.75, ×1.00, and ×1.25, where a radius scale of ×1.00 is defined as the human hand-to-eye distance. The matrix in Fig. 5 shows the nine virtual arm configurations used to determine the best combination of angle and radius scale from the three options given for each.
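The nine configurations can be enumerated as in the C sketch below, which places the endoscope on a circle around the eye-focus position; the nominal ×1.00 distance and the horizontal sweep plane are placeholder assumptions, not values from the paper.

/* Sketch: enumerate the nine endoscope configurations (3 angles x 3 radius
 * scales). The nominal eye-to-hand distance is a placeholder assumption. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NOMINAL_RADIUS_MM 600.0   /* placeholder for the x1.00 distance */

typedef struct { double x, y, z; } vec3;

/* Endoscope position on a circle around the focus point, assuming the
 * angle sweeps in the horizontal (x-z) plane around the arm tip. */
static vec3 endoscope_position(vec3 focus, double angle_deg, double scale)
{
    double a = angle_deg * M_PI / 180.0;
    double r = NOMINAL_RADIUS_MM * scale;
    vec3 p = { focus.x + r * sin(a), focus.y, focus.z + r * cos(a) };
    return p;
}

int main(void)
{
    const double angles[3] = { -90.0, 0.0, 90.0 };
    const double scales[3] = { 0.75, 1.00, 1.25 };
    vec3 focus = { 0.0, 0.0, 0.0 };

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            vec3 cam = endoscope_position(focus, angles[i], scales[j]);
            printf("angle %+6.1f deg, scale x%.2f -> eye (%.0f, %.0f, %.0f)\n",
                   angles[i], scales[j], cam.x, cam.y, cam.z);
        }
    return 0;
}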

4) Stimulation Measurement Mode

We measured brain activity in a stimulation measurement mode. The measurement interval was divided into two types of period, task and rest periods, to ensure high-precision analysis by integrating the brain activity observed during each task period.

During the rest period, only the green box was visible. However, subjects were instructed to remain focused from the beginning to the end of each session.

During the task periods, subjects controlled the hand-controller to position the tip of the virtual arm on the target in the simulator under the nine virtual arm configurations (three angles × three radius scales), with task periods interspersed between rest periods. The subjects continually attempted to position the tip on the target, a green box that was randomly repositioned when touched by the tip of the virtual slave manipulator.

Figure 4. DH skeleton [17]. The virtual manipulator has 3-DOF: rotation with 2-DOF (yaw and roll) and direct action with 1-DOF (z-axis).

Figure 5. Variations in virtual arm configuration. The subjects controlled the virtual arm under nine different variations.



A single measurement session consisted of an initial 30-s rest period followed by a repetition of nine stimulus sequences, consisting of a 30-s task period followed by a 40-s rest period.

During the measurement session, subjects tried to maintain their posture by minimizing body movement. The initial rest period was sufficient to let brain activity settle. The order of stimulation was randomized for each subject based on Fisher's three principles of experimental design [21].
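For reference, the session timing and per-subject randomization can be sketched as follows; the rand()-based shuffle is an illustrative stand-in for the actual randomization procedure.

/* Sketch of the session schedule: an initial 30-s rest, then nine
 * stimulus sequences of a 30-s task followed by a 40-s rest, with the
 * order of the nine configurations shuffled per subject. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N_CONF 9
#define INITIAL_REST_S 30
#define TASK_S 30
#define REST_S 40

static void shuffle(int *a, int n)
{
    for (int i = n - 1; i > 0; i--) {   /* Fisher-Yates shuffle */
        int j = rand() % (i + 1);
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}

int main(void)
{
    int order[N_CONF];
    srand((unsigned)time(NULL));        /* illustrative seeding */
    for (int i = 0; i < N_CONF; i++) order[i] = i;
    shuffle(order, N_CONF);

    int t = INITIAL_REST_S;
    printf("0-%d s: initial rest\n", t);
    for (int i = 0; i < N_CONF; i++) {
        printf("%d-%d s: task, configuration %d\n", t, t + TASK_S, order[i]);
        t += TASK_S;
        printf("%d-%d s: rest\n", t, t + REST_S);
        t += REST_S;
    }
    return 0;   /* total session length: 30 + 9 * (30 + 40) = 660 s */
}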

5) Procedure

The investigator set up the OT probe positions and postures, and then continually adjusted each probe until all the OT channels could be measured. After completing the probe settings, the experiment started. Each subject participated in a single measurement session.

In the experiment, the subjects used the hand controller, a PHANTOM-Omni®, to position the tip of the virtual manipulator in the surgical simulator during both the task and rest periods. The two periods differed with respect to the angle and radius scale between the eye and virtual arm because, in our opinion, the most valuable measurement is the difference in success rate between the task and rest periods.

At the beginning of each period, the cube was positioned in the center of the display and the tip of the virtual arm was synchronized with the tip of the stylus. The subjects kept hold of the stylus continuously so as not to allow other factors to influence brain activity.

C. Data Analysis

We corrected the raw data using the following procedures. First, the raw data were digitally low-pass filtered at 0.1 Hz to remove measurement noise. Next, a baseline correction was performed to remove the linear trend in hemoglobin concentration. We fitted a linear function to the data points sampled in the 5-s interval before and after the onset of each task period. We then performed a baseline correction using the mean values from half of the rest period as the baseline.
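A minimal sketch of this preprocessing chain is given below, assuming a first-order IIR filter for the 0.1-Hz low-pass step and a simple two-point linear baseline interpolated between a pre-onset and a post-offset window; the exact filter type and baseline windows used in the analysis may differ.

/* Sketch of the preprocessing chain (assumptions noted above).
 * Sampling interval is 0.1 s (10 Hz), matching the OT temporal resolution. */
#include <stddef.h>

#define FS_HZ 10.0

/* First-order IIR low-pass filter with cutoff fc [Hz], applied in place. */
void lowpass(double *x, size_t n, double fc)
{
    double rc = 1.0 / (2.0 * 3.14159265358979323846 * fc);
    double a = (1.0 / FS_HZ) / (rc + 1.0 / FS_HZ);
    for (size_t i = 1; i < n; i++)
        x[i] = x[i - 1] + a * (x[i] - x[i - 1]);
}

static double mean(const double *x, size_t lo, size_t hi)
{
    double s = 0.0;
    for (size_t i = lo; i < hi; i++) s += x[i];
    return s / (double)(hi - lo);
}

/* Remove a linear trend interpolated between the mean of the 5-s window
 * before task onset and the mean of the 5-s window after task offset.
 * Assumes onset >= 50 samples and offset + 50 samples within the record. */
void baseline_correct(double *x, size_t onset, size_t offset)
{
    size_t w = (size_t)(5.0 * FS_HZ);
    double b0 = mean(x, onset - w, onset);     /* pre-task baseline  */
    double b1 = mean(x, offset, offset + w);   /* post-task baseline */
    for (size_t i = onset; i < offset; i++) {
        double t = (double)(i - onset) / (double)(offset - onset);
        x[i] -= (1.0 - t) * b0 + t * b1;
    }
}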

As the raw NIRS data consist of relative values, we cannot compare the data of subjects or channels directly. Therefore, the data were normalized by calculating effect sizes for each channel within the subjects [10]. The effect sizes (d) were calculated by the following equation:

d = (m1 - m2) / s    (1)

where m1 is the mean value for a stimulation period, and m2 and s are, respectively, the mean and standard deviation of the values sampled during the 10-s period before stimulation. We used the effect size in all subsequent analyses.
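Equation (1) maps directly onto a short helper function; the 10-s pre-stimulation window corresponds to 100 samples at the 0.1-s sampling interval, and a contiguous per-channel sample array is assumed.

/* Effect size d = (m1 - m2) / s from (1): m1 is the mean over the task
 * period, m2 and s the mean and standard deviation over the 10-s
 * (100-sample) window immediately before stimulation.
 * Assumes task_start >= 100. */
#include <math.h>
#include <stddef.h>

double effect_size(const double *oxyhb, size_t task_start, size_t task_len)
{
    const size_t pre = 100;               /* 10 s at 0.1-s resolution */
    double m1 = 0.0, m2 = 0.0, s2 = 0.0;

    for (size_t i = 0; i < task_len; i++)
        m1 += oxyhb[task_start + i];
    m1 /= (double)task_len;

    for (size_t i = task_start - pre; i < task_start; i++)
        m2 += oxyhb[i];
    m2 /= (double)pre;

    for (size_t i = task_start - pre; i < task_start; i++)
        s2 += (oxyhb[i] - m2) * (oxyhb[i] - m2);

    return (m1 - m2) / sqrt(s2 / (double)pre);
}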

III. RESULTS

A. Consideration of Evaluation Indices

To validate a measurement of the user's recognition, we first investigated quantitative indices for evaluating the ease with which the manipulator is perceived as being part of the user's body. The activation map during a task period is shown on the left in Fig. 6, while an example of the changes in oxy-Hb concentration over time during a task period is shown on the right of the figure. The vertical axis shows the oxy-Hb concentration [mM/mm], and the horizontal axis shows time [s]. The results confirm that a significant increase in oxy-Hb concentration occurs in the IPS area during a task period, whereas a decrease in oxy-Hb concentration is measured during the rest periods.

We used the oxy-Hb concentration as a quantitative value indicating how easy it is for the user to perceive the virtual arm as part of his body. We subsequently used the oxy-Hb values to validate the angle and radius scale between the eyes and arm that induces users to perceive the virtual arm as being part of their body.

B. Results for all Angles and Radius Scales between the Eyes and Virtual Arm Position

We evaluated the effect sizes of brain activation against the manipulator position using the varying angles and radius scales. According to the results, all four subjects showed significant peaks, as illustrated in Fig. 7. In Fig. 7, the vertical axis shows the effect size of oxy-Hb, while the horizontal plane shows the slave configuration relative to the endoscope in terms of angle and radius scale. The effect-size scale differs among subjects because oxy-Hb is a relative value within each subject. We then examined the head-count distribution of these peaks by manipulator position to confirm where the significant peaks were located, as illustrated in Fig. 7.
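The head-count tally can be sketched as follows, assuming each subject's effect sizes are arranged in a 3 × 3 grid indexed by angle and radius scale and that a subject's peak is the cell with the maximum effect size.

/* Sketch: for each subject, find the (angle, radius-scale) cell with the
 * largest effect size and tally a head-count distribution over the nine
 * configurations. The 3 x 3 per-subject layout is an assumption. */
#define N_SUBJ 4
#define N_ANGLE 3
#define N_SCALE 3

void peak_headcount(const double es[N_SUBJ][N_ANGLE][N_SCALE],
                    int count[N_ANGLE][N_SCALE])
{
    for (int a = 0; a < N_ANGLE; a++)
        for (int r = 0; r < N_SCALE; r++)
            count[a][r] = 0;

    for (int s = 0; s < N_SUBJ; s++) {
        int best_a = 0, best_r = 0;
        for (int a = 0; a < N_ANGLE; a++)
            for (int r = 0; r < N_SCALE; r++)
                if (es[s][a][r] > es[s][best_a][best_r]) {
                    best_a = a;
                    best_r = r;
                }
        count[best_a][best_r]++;   /* one vote per subject's peak */
    }
}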

IV. DISCUSSION

A. Validation of the Obtained Angle and Radius

The results of the activation map in Fig. 6 indicate that channel 6 is the correct position for the IPS activation measurement because activation of channel 6 was significant during the task period. The trend in Fig. 6 suggests that IPS can be used to reflect the user’s introspection for physical human–robot correspondence because the activity is significant during the task period, whereas it is small in rest periods. As such, the activation in the task period is more significant than that in rest periods because users perceive the virtual arm to be part of their body in task periods and not in rest periods, thus indicating the user’s introspection.

The effect sizes for the brain activity show various significant peaks as illustrated in Fig. 7. A significant peak in brain activity indicates the hand–eye position that makes it easier to perceive the manipulator as part of the user’s body. In other words, the hand–eye position yielding the most significant peak in brain activity suggests the most intuitive operability.

Figure 6. Activation map showing that activity in channel 6 is significant (left) and changes over time in the oxy-Hb concentrations in channel 6 during a task (right).



We examined where the manipulator was located when each of the four subjects showed a peak in brain activity (Fig. 7). Three subjects' brain activity peaked at the manipulator position with angle = 0° and radius scale = ×0.75. In this case, the arm position is similar to that of a human closely examining a near object. A possible interpretation is that these subjects associated the manipulator with their own hand being observed close up, as illustrated in Fig. 8. Brain activity was thus significant when controlling the manipulator positioned nearby at an angle of 0°.

Moreover, all the subjects rotated their wrists to correspond to changes in the arm position because the posture of the stylus matches the posture of the virtual arm. For example, with the arm facing upwards towards the center of the display, e.g., at an angle of 45°, the subject holds the stylus like a pen. On the other hand, with the arm positioned downwards towards the center of the display, e.g., at an angle of -45°, the subject tends to bend his wrist to adapt the stylus posture to fit the virtual arm posture. Thus, wrist rotation is one of the best methods for adapting to the robot control.

B. Body Image

Our investigation of hand–eye coordination in this research clarified the angle and radius between the eye and hand that best induce users to perceive the manipulator as part of their body. Hand–eye coordination represents visual and somatic sense feedback. It has been stated that the correspondence between visual and somatic sense feedback involves the human body image, which is the spatial symbol of the body in the mind [16] [22] [23] [24]. Body image is one of the internal models [22].

When a person uses a tool, he makes use of body image by perceiving the tool as a part of his body [16]. If a person has no such image, he cannot use the tool correctly because the tool is not part of his body. By visualizing the tool to be part of his body, the user derives intuitive operability and can master the tool.

We believe that body image, including hand–eye coordination, most strongly affects intuitive operability because the essence of intuitive operability is the interface between the human and the robot, as shown in Fig. 9. It would thus be beneficial to match the robot with the human body image when designing a robot with intuitive operability. To clarify the quantitative conditions that most induce users to perceive the manipulator as part of their body, we should investigate further factors of the body image, such as a mismatch in the posture of the master–slave manipulator.

V. CONCLUSION

In this paper, we aimed to clarify the angle and radius between the endoscope and the manipulator that best induce users to perceive the manipulator as part of their body, based on brain activity measurement. In the experiment, users moved the hand controller, a PHANTOM-Omni®, to position the tip of the virtual arm on the target in the surgical simulator, while their brain activity was measured with an OT brain imaging device.

Figure 8. Match between the robot–human bodies. The manipulator position with angle = 0° and radius scale = ×0.75 corresponds with that of a human looking closely at an object nearby.

Figure 7. Results of brain activity for each slave configuration (angle, radius scale) against the endoscope.

Figure 9. Ideal interface that allows users to perceive the surgical robot as part of their body.



We used nine measurement scenarios for the arm position by combining three angles, rotated 90° apart around the eye-focus position, with three radius scales between the eye focus and the manipulator position, based on the human eye-to-hand distance. We evaluated each case according to the measured brain activity. From the results, brain activity was significant when the manipulator was positioned close by or immediately to the right. A possible interpretation of these results is that the subjects associated the manipulator with their own arm being observed close by. The results indicate that body image affects hand–eye coordination, which is related to visual and somatic sense feedback. In the future, we intend to investigate various factors related to body image, such as a mismatch in the posture of the master–slave manipulator.

REFERENCES

[1] J. Leven, D. Burschka, R. Kumar, M. Choti, C. Hasser, and R. H. Taylor, Da Vinci Canvas: a telerobotic surgical system with integrated, robot-assisted, laparoscopic ultrasound capability, Medical Image Computing and Computer-Assisted Intervention (MICCAI) 3749, 811-818 (2005)

[2] T. Osa, C. Staub, and A. Knoll, Framework of automatic robotic surgery system using visual servoing, in Proc. 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan (2010)

[3] G. H. Ballantyne, Robotic surgery, telerobotic surgery, telepresence, and telementoring - review of early clinical results, Surg. Endosc. 16, 1389-1402 (2002)

[4] I. Suh, M. Mukherjee, D. Oleynikov, and K-C. Siu, Training program for fundamental surgical skill in robotic laparoscopic surgery, Int. J. Med. Robotics Computer Assist. Surg. 7, 327-333 (2011)

[5] Satoshi Miura, Yo Kobayashi, Masatoshi Seki, Takehiko Noguchi, Masahiro Kasuya, Yuki Yokoo and Masakatsu G. Fujie, “Intuitive Operability Evaluation of Robotic Surgery Using Brain Activity Measurement to Identify Hand-Eye Coordination”, in Proceedings of 2012 IEEE International Conference on Robotics and Automation (ICRA’12), 4546-4552, St. Paul, MN, USA, 14-18 May 2012

[6] A. N. Fader and P. F. Escobar, Laparoendoscopic single-site surgery (LESS) in gynecologic oncology: technique and initial report, Gynecologic Oncology 114, (2009)

[7] M. Miyazaki, M. Hiroshima, and D. Nozaki, The Cutaneous Rabbit Hopping out of the Body, J. Neurosci. 30(5), 1856-1860 (2010)

[8] H. Imamizu, S. Miyauchi, T. Tamada, Y. Sasaki, R. Takino, B. Pütz, T. Yoshioka, and M. Kawato, Human Cerebellar Activity Reflecting an Acquired Internal Model of a New Tool, Nature 403, 192-195 (2000)

[9] D. M. Clower and D. Boussaoud, Selective Use of Perceptual Recalibration versus Visuomotor Skill Acquisition, J. Neurophysiol. 84, 2703-2708 (2000)

[10] S. Taya, G. Maehara, and H. Kojima, Hemodynamic changes in response to the stimulated visual quadrants: a study with 24-channel near-infrared spectroscopy, Jpn. J. Psychonomic Sci. (2009)

[11] G. Maehara, S. Taya, and H. Kojima, Changes in hemoglobin concentration in the lateral occipital regions during shape recognition: a near-infrared spectroscopy study, Journal of Biomedical Optics 12(6), 062109 (2007)

[12] R. W. Homan, J. Herman, and P. Purdy, Cerebral location of international 10-20 system electrode placement, Electroen. Clin. Neurophysiol. 66, 376-382 (1987)

[13] C. E. Colby and M. E. Goldberg, Space and attention in parietal cortex, Annu. Rev. Neurosci. 12, 319-349 (1999)

[14] R. A. Andersen, Visual and eye movement functions of the posterior parietal cortex, Annu. Rev. Neurosci. 12, 377-403 (1989)

[15] J. C. Culham and N. G. Kanwisher, Neuroimaging of cognitive functions in human parietal cortex, Current Opinion in Neurobiology 11, 157-163 (2001)

[16] A. Maravita and A. Iriki, Tools for the body, Trends in Cognitive Sciences 8(2), 79-86 (2004)

[17] V. J. Santos, and F. J. Valero-Cuevas, Reported Anatomical Variability Naturally Leads to Multimodal Distributions of Denavit-Hartenberg Parameters for the Human Thumb, IEEE Trans. Bio-med. Eng. 53(2), 155-163 (2006)

[18] SensAble Technology, Inc. Available: http://www.sensable.com/haptic-phantom-omni.htm

[19] H. Head and G. Holmes, Sensory disturbances from cerebral lesions, Brain 34, 102-245 (1911)

[20] J. Paillard, The Use of Tools by Human and Non-human Primates, Oxford University Press, New York (1993)

[21] R. A. Fisher; J. H. Bennett (ed.), “Statistical methods, experimental design, and scientific inference”, Oxford Univ. Press, (1990).

[22] C. Nabeshima, Y. Kuniyoshi, and M. Lungarella, Adaptive Body Schema for Robotic Tool-Use, Advanced Robotics 20(10), 1105-1126 (2006)

[23] M. Hoffmann, H. G. Marques, A. H. Arieta, H. Sumioka, M. Lungarella, and R. Pfeifer, Body Schema in Robotics: a Review, IEEE Trans. Autonomous Mental Development 2(4), 304-324 (2010)

[24] E. Cassirer, Philosophie der symbolischen Formen, 1923-1929 (1923)