A laparoscopic surgery training interface

Sandro F. Queirós1, João L. Vilaça1,2, Nuno F. Rodrigues2,3, Sara C. Neves1, Pedro M. Teixeira3 and Jorge Correia-Pinto1

1Life and Health Sciences Research Institute, University of Minho. 4710-057 Braga, Portugal; 2DIGARC, Polytechnic Institute of Cávado and Ave. 4750-810 Barcelos, Portugal;

3DI-CCTC, University of Minho. 4710-057 Braga, Portugal.

Abstract - Laparoscopy is a surgical procedure in which operations in the abdomen are performed through small incisions using several specialized instruments. The success of laparoscopic surgery greatly depends on the surgeon's skills and training. To achieve these high technical standards, different apprenticeship methods have been developed, many based on in vivo training, an approach that involves high costs and complex setup procedures. This paper explores Virtual Reality (VR) simulation as an alternative for training novice surgeons. Even though several simulators are available on the market claiming successful training experiences, their use is extremely limited due to the economic costs involved. In this work, we present a low-cost laparoscopy simulator able to monitor and assist the trainee's surgical movements. The developed prototype consists of a set of inexpensive sensors, namely an accelerometer, a gyroscope, a magnetometer and a flex sensor, attached to specific laparoscopic instruments. Our approach allows repeated assisted training of an exercise, without time constraints or additional costs, since no artificial human model is needed. A case study of our simulator applied to instrument manipulation practice (hand-eye coordination) is also presented.

Keywords-surgical training; laparoscopy; serious games

I. INTRODUCTION

Laparoscopy is a minimally invasive surgery (MIS) technique performed in the abdomen or pelvis through small incisions (usually between 0.5 and 1.5 cm), through which a camera and surgical instruments are inserted in order to inspect and diagnose the patient's medical condition or to perform surgical procedures. When compared to open surgical procedures, laparoscopic approaches offer several advantages, such as (a) reduced hospitalization time, (b) minimal postoperative pain, which leads to a decreased demand for painkillers, (c) reduced exposure of internal organs to possible external contaminants, thereby reducing the risk of infection, and (d) minimal functional and aesthetic damage [1,2]. However, indirect access to some internal structures makes this kind of procedure complex to perform, requiring surgeons to develop quite accurate technical skills, such as camera navigation, hand-eye coordination, gesture accuracy, precision and speed. Mastering these techniques is essential for laparoscopic surgery practice, even for performing simple surgical operations such as translocation, cutting, dissection, grasping and suturing.

In the early days of laparoscopic surgery, the acquisition of these skills was mainly achieved by repeatedly practicing within the operating theatre, under the guidance of an experienced surgeon. As a result, laparoscopy was associated with high rates of complications, particularly during surgeons' early experience [3]. To mitigate these consequences, animal models, such as pigs and rabbits, started to be used, but not without introducing further problems, such as concerns about the transmission of infectious diseases [4] and a significant cost increase due to the number of animals needed for practice.

All the above-mentioned constraints in laparoscopy training, together with ethical, medico-legal and educational considerations, spurred the search for better approaches to accommodate the need for more and improved training, ultimately leading to the appearance of simulators. For many years, virtual reality (VR) technology has been successfully used for training in the aviation industry, as well as in other high-reliability work environments [5,6]. Medical practice is an example of such an environment where VR is being applied with increasingly positive results. Several studies have already demonstrated the benefits of using VR simulators in laparoscopic surgery [7-10], contributing to an improvement in surgeons' surgical performance. Several simulators are currently commercialized, such as Lap Mentor, ProMIS, LapSim, SimSurgery, LAGB or LTS3e. Abstracting from the technical details that differentiate these simulators, they all provide VR frameworks that allow the training of skills, knowledge and judgment for laparoscopic surgery [9,11-13]. Additionally, these simulators present a haptic interface, offering tactile and force feedback, which is important to mimic the surgical scenario [14]. As a downside, such interfaces are usually expensive, making them unsuitable for small and medium-sized hospitals or for private use. To overcome these economic limitations, others claim to have achieved low-cost systems [15]. However, these are not truly VR solutions, since there is little or no virtual simulation involved, and they imply continuous additional costs, in particular for purchasing artificial physical human models for surgical training purposes.

In [16], Jambon et al. presented a low-cost alternative, using potentiometer sensors for motion detection. Although it presents some similarities to our work, the authors focused their study on gynecologic laparoscopy, and the system is technologically outdated regarding VR simulation and graphical realism.

Sokollik et al. [17] presented a 3D ultrasound measurement system for motion capture, capable of determining the laparoscopic instruments' position from the spatial coordinates of small ultrasound transmitters. Using an active tracking system, Chmarra et al. [18] presented a low-cost device, with four degrees of freedom (DOF), to track MIS instruments using optical computer-mouse sensors and a gimbal mechanism, allowing the use of this device in training setups with or without a VR environment. Similarly, in [19] the authors proposed a performance assessment system using a motion tracking device based on sensor fusion to combine data from disparate sources: a laparoscopic camera, magnetic kinematic sensors and reference information. Although the instruments' movements are captured with relatively high precision, the cost of such a prototype is also relatively high, due to its magnetic sensors. In [20], the authors presented the ADEPT (Advanced Dundee Endoscopic Psychomotor Tester), consisting of a gimbal mechanism able to track and record the positioning and rotational movements of the instruments. Polhemus (Polhemus, Colchester, USA) produced the Patriot, consisting of a combination of an electromagnetic transmitter and receiver, with high precision. In sum, despite the utility of these works for experience assessment and movement tracking, they lack VR integration, usually delegating to others the task of adapting their solutions to the tracking of real laparoscopic instruments.

Our work aims to develop a low-cost laparoscopy simulator that presents a realistic simulation environment, like the one provided by some of the aforementioned works, while allowing the training of several specific laparoscopic skills. The final goal is the development of a laparoscopic surgery training prototype, able to provide ongoing practice in a private setting, encouraging the maintenance and improvement of the fine motor skills that are crucial for surgical practice. A set of inexpensive sensors, namely an accelerometer, a gyroscope, a magnetometer and a flex sensor, is used to acquire the laparoscopic instruments' movement through a microcontroller-based circuit interface. Concerning VR simulation, the XNA game development framework is used, combined with Blender 3D models. For prototype testing, we present "Wire Loop", a digital game specifically developed for training hand-eye coordination, one of the most important techniques to master in laparoscopic surgery.

The remainder of this paper is organized as follows. Section II covers the methods, namely the requirements for the prototype, the hardware solution, data acquisition and communication, and system calibration. The section concludes with the presentation of the application case study. In Section III, the results achieved with our prototype are described, oriented towards evaluating the prototype's accuracy and sensitivity. Section IV presents the concluding remarks and future work.

II. METHODS

A. Requirements

During laparoscopic procedures, surgical instruments are controlled by the surgeon, whose movements are transmitted through the incision point, via a trocar (instrument shaft), to the tip of the instrument. As a result of this motion constraint, instead of six DOFs, the laparoscopic surgical tool presents only four fundamental DOFs: translation along the instrument axis (insertion/withdrawal, 1st DOF); rotation around the instrument axis (roll, 2nd DOF); and left/right and forward/backward rotation around the incision point (yaw and pitch, 3rd and 4th DOFs, respectively). Additionally, to track the forceps movements, one has to consider the rotation around the forceps axis (5th DOF), as well as its open/close behavior (6th DOF) (Fig. 1).
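For reference, the tracked state could be represented by a simple structure such as the following C sketch; the struct and field names are ours and purely illustrative, and they include DOFs that the current prototype deliberately leaves out.

/* Hypothetical representation of the instrument state tracked by the
 * prototype: the four fundamental DOFs plus the forceps opening (6th DOF).
 * Field names and the inclusion of the 1st/5th DOFs are illustrative only. */
typedef struct {
    float insertion_mm;   /* 1st DOF: translation along the instrument axis */
    float roll_deg;       /* 2nd DOF: rotation around the instrument axis */
    float yaw_deg;        /* 3rd DOF: left/right rotation around the incision point */
    float pitch_deg;      /* 4th DOF: forward/backward rotation around the incision point */
    float forceps_deg;    /* 6th DOF: forceps opening angle (from the flex sensor) */
} InstrumentState;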

According to the work of Chmarra et al. [18], a device that can detect and measure MIS instrument motion reliably enough for a realistic simulator has to fulfill the following set of requirements:

1. Ability to detect the four fundamental DOFs of the instrument: translation along and rotation around the instrument axis, as well as yaw and pitch around the incision point;

2. Allow the use of real surgical instruments in a box-trainer, with or without a combined VR environment;

3. Present appropriate accuracy and sensitivity, as close as possible to reality;

4. Low-cost and easy to produce, in order to make it affordable for every medical facility and private use;

5. PC “Plug and Play” feature – ready to use;

6. Small size, for easy carrying and mounting.

B. Hardware Solution

In order to meet the first requirement, the motion tracking system was conceived using an appropriately combined set of sensors (Fig. 2). Since it is fundamental to detect the instrument's spatial orientation, associated with the 2nd, 3rd and 4th DOFs, a six-DOF Inertial Measurement Unit (IMU) was combined with a magnetometer (HMC5843). The IMU integrates (a) a tri-axis accelerometer (ADXL335), which allows the measurement of acceleration in any direction in space, and (b) a tri-axis gyroscope (LPR530AL for pitch and roll, and LY530ALH for yaw), which consolidates and corrects the accelerometer data while providing additional information about the three spatial axes. The magnetometer is used to correct the gyroscope and accelerometer data, serving primarily as a reference for yaw measurement. A simplified filtering algorithm inspired by the Kalman filter [21] was implemented to combine the accelerometer and gyroscope outputs and obtain accurate information about the spatial orientation.

Figure 1. Degrees of freedom of a laparoscopic instrument.

Figure 2. Sensors positioning on the MIS instrument.

Since there is a linear relationship between the handle opening and the forceps opening at the tip of the instrument, a flex sensor (FSL0093103ST) was used to assess the instrument's handle opening. The flex sensor allows the determination of the forceps opening angle through the conversion of a variable resistance into a voltage output (Fig. 3). This allows the measurement of the 6th DOF.
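As an illustration of this resistance-to-voltage conversion, a generic voltage-divider model can be written as the sketch below; which resistor sits on the output side and the supply voltage are assumptions on our part, not read from the schematic of Fig. 3.

/* Illustrative voltage-divider readout for the flex sensor: its variable
 * resistance and a fixed resistor (10 kOhm in Fig. 3) divide the supply
 * voltage, so the output voltage tracks the bending of the handle.
 * The divider orientation and supply value are assumptions. */
static float flex_output_mv(float supply_mv, float r_fixed_ohm, float r_flex_ohm)
{
    return supply_mv * r_fixed_ohm / (r_fixed_ohm + r_flex_ohm);
}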

Even though a solution for capturing the 1st DOF movement is crucial for a complete prototype, we deliberately decided not to address this movement in the current prototype; therefore, the developed digital game does not depend on it to achieve its training results.

The 5th DOF of the instrument is rarely used in practice, because during surgery both of the surgeon's hands are used to manipulate different instruments, making it very difficult to perform such a movement. Given the reduced importance of this DOF in training, we preferred to simplify the current prototype and not consider this aspect.

Figure 3. Flex Sensor schematic (C = 100nF and R = 10kΩ).

With the proposed set of sensors, the second requirement (the use of real surgical instruments) is fulfilled, which is an advantage compared to other existing simulators, which only allow altered or unrealistic MIS instruments for simulation.

The low-cost requirement is also met, as a set of inexpensive and very common sensors is employed. In addition, the sensors can be removed from the laparoscopic instrument using a Velcro system. This feature allows the instrument to be sterilized, a very important requirement, given that real operating-room instruments are used and, in most cases, shared by several surgeons. Moreover, it allows the prototype to be sold separately, without the need to include the actual surgical instruments, thus lowering the price associated with a possible commercial solution. The small size of the sensors satisfies the sixth requirement.

In the final envisioned commercial product, a box-trainer will be developed to simulate the patient's body and to mimic the incision point with several perforated ball joints, providing a complete training set (Fig. 4).

C. Data Acquisition and Communication

The conversion and transmission of the data obtained by the sensors to the computer are essential to integrate the system into a virtual environment. To accomplish this, an ATmega164 microcontroller was used as the physical interface.

Since the magnetometer communicates through I2C (requiring two pins) and seven other analog inputs must be read at each update (one from the flex sensor and three from each of the remaining sensors, the accelerometer and the gyroscope), nine microcontroller pins are used to acquire the sensor data.

Figure 4. Complete training set for the laparoscopic simulator environment.

The microcontroller has an integrated 10-bit ADC module, which converts each analog voltage input into an output value in the range of 0 to 1023. Therefore, the conversion of the analog-to-digital readings into physical units (ix) (g-units for the accelerometer and degrees/second for the gyroscope) is achieved through the following formula:

ix = (ax × Vdd / 1023 − zl) / ss    (1)

(ax – analog input; Vdd – voltage supply (mV); zl – voltage zero level (mV); ss – sensitivity (mV/g))
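As a minimal sketch of formula (1), the following C function converts a raw 10-bit reading into physical units; the numeric constants in the example call are typical datasheet figures used only as an illustration, not the prototype's calibrated values.

#include <stdio.h>

/* Eq. (1): raw ADC count -> millivolts -> physical units (g or deg/s). */
static float adc_to_units(int ax, float vdd_mv, float zero_mv, float sens_mv)
{
    float level_mv = (ax * vdd_mv) / 1023.0f;  /* raw reading to millivolts */
    return (level_mv - zero_mv) / sens_mv;     /* remove zero level, scale  */
}

int main(void)
{
    /* Example: accelerometer axis with 3300 mV supply, 1650 mV zero-g level
     * and 330 mV/g sensitivity (illustrative values). */
    float accel_g = adc_to_units(612, 3300.0f, 1650.0f, 330.0f);
    printf("acceleration = %.2f g\n", accel_g);
    return 0;
}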

Inspired by the Kalman filter, a simplified algorithm combining data from the accelerometer and the gyroscope was used to achieve accurate values (Vest), based on a gyroscope trust factor and past estimated data, according to the following formula:

Vest = (Racc + gF × Rgyro) / (1 + gF)    (2)

(Racc – current accelerometer axis data; gF – gyroscope trust factor; Rgyro – previous estimation plus current gyroscope axis data)

The employed algorithm uses a fixed weight factor to perform the correction, whereas in the original Kalman filter the weight factor is updated based on measured noise from accelerometer data by an iterative process.
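A per-axis implementation of this fixed-weight fusion could look like the following C sketch; the sampling period and the explicit gyroscope integration step are assumptions on our part.

/* Sketch of the fixed-weight, Kalman-inspired fusion of Eq. (2): the new
 * estimate blends the accelerometer angle with the previous estimate
 * propagated by the gyroscope rate. The trust factor and the dt term are
 * illustrative assumptions. */
static float fuse_axis(float prev_est_deg, float acc_deg,
                       float gyro_rate_dps, float dt_s, float gyro_trust)
{
    float r_gyro = prev_est_deg + gyro_rate_dps * dt_s; /* gyroscope propagation */
    return (acc_deg + gyro_trust * r_gyro) / (1.0f + gyro_trust);
}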

Regarding the gyroscope data, a correction based on the magnetometer data is applied to estimate the correct gyroscope values, which allows accurate yaw measurements.

After applying the data correction methods, the resulting values must be transmitted to the computer by means of a communication protocol between the microcontroller and the user's computer. The RS-232 protocol was chosen to exchange frames that encapsulate a set of commands specifically designed to easily allow data transmission and configuration setup.

Since different computers have different processing capabilities, an initial configuration is required to establish the data update rate (from three possible choices), which governs the continuous mode output during gameplay. After the continuous mode output is activated, a sleep mode command must be sent to interrupt the data updates. Furthermore, the computer software must be able to request single data outputs in order to perform the initial sensor calibration. For both kinds of output requests, the microcontroller responds by sending the current sensor data. The syntax of the above-mentioned commands, from the PC to the microcontroller and vice-versa, is summarized in Table I.

As shown in Table I, every command reserves two bytes for error detection, which allows the integrity of the received frame to be confirmed. Thus, in each communication between devices, the sender includes the sum of the byte values sent in the frame, so that the receiver may compare this number with the sum of the bytes it actually receives. In case of disparity, the receiver discards the data, assuming it is corrupted. Otherwise, the frame is considered valid and is processed accordingly.
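Anticipating the frame layout summarized in Table I below, a PC-side command could be assembled as in this C sketch; whether the type and data fields are raw binary values or ASCII characters, and the checksum byte order, are assumptions on our part.

#include <stdint.h>

/* Sketch of a PC-to-microcontroller command, following our reading of
 * Table I: '@', two frame-type bytes, two data bytes, a two-byte checksum
 * and '#'. Field encoding and checksum byte order are assumptions. */
static void build_command(uint8_t out[8], uint8_t type_hi, uint8_t type_lo,
                          uint8_t data_hi, uint8_t data_lo)
{
    out[0] = '@';
    out[1] = type_hi;  out[2] = type_lo;   /* e.g. 0,0 = data transmitting velocity */
    out[3] = data_hi;  out[4] = data_lo;   /* e.g. 0,1 = 60 frames per second       */
    uint16_t sum = (uint16_t)(out[1] + out[2] + out[3] + out[4]);
    out[5] = (uint8_t)(sum >> 8);          /* checksum over the sent bytes */
    out[6] = (uint8_t)(sum & 0xFF);
    out[7] = '#';
}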

TABLE I - COMMAND SYNTAX BETWEEN PC AND MICROCONTROLLER

PC to Microcontroller
Frame layout: @ (Begin) | bytes 1-2 (Frame Type) | bytes 3-4 (Data) | bytes 5-6 (Checksum) | # (End)

Command                       Frame Type    Data
Data transmitting velocity    0 0           0 0, 0 1 or 1 0 (30, 60 or 90 frames per second)
Single Data Output            0 1           - -
Continuous Mode Output        1 0           - -
Sleep Mode                    1 1           - -
(the two checksum bytes are computed for each frame and omitted from the rows)

Microcontroller to PC
Frame layout: @ (Begin) | Flex Sensor (4 bytes) | Acc X (4 bytes) | Acc Y (4 bytes) | Acc Z (4 bytes) | Gyro X (4 bytes) | Gyro Y (4 bytes) | Gyro Z (4 bytes) | Checksum (2 bytes) | # (End)

Acc – accelerometer; Gyro – gyroscope
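As an illustration of the byte-sum error check applied to the microcontroller-to-PC frame of Table I, a receiver-side validation could look like the following C sketch; the checksum byte order and the function names are assumptions.

#include <stdint.h>
#include <stdbool.h>

#define FRAME_LEN 32  /* '@' + 7 fields of 4 bytes + 2 checksum bytes + '#' */

/* Sum of the payload bytes (the seven 4-byte fields between '@' and the checksum). */
static uint16_t frame_checksum(const uint8_t *frame)
{
    uint16_t sum = 0;
    for (int i = 1; i <= 28; i++)
        sum += frame[i];
    return sum;
}

/* The receiver recomputes the sum and compares it with the transmitted one;
 * on mismatch the frame is discarded as corrupted. */
static bool frame_is_valid(const uint8_t *frame)
{
    if (frame[0] != '@' || frame[FRAME_LEN - 1] != '#')
        return false;
    uint16_t received = (uint16_t)((frame[29] << 8) | frame[30]);
    return frame_checksum(frame) == received;
}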

D. System Calibration

The "Plug and Play" and removable features of our solution demand a configuration process before each virtual training session. This is very common in recent digital games that employ sensor interfaces (e.g., Wii games typically include a calibration phase for the attached motion tracking sensor). As in most games, this stage precedes the game start and comprises two phases. The first phase deals with the acquisition of the maximum and minimum forceps opening, given by the flex sensor, in order to obtain the relationship between the forceps opening and the sensor's output for its specific placement. The second phase aims at the 3D reference acquisition for the IMU/magnetometer, in order to calibrate its spatial orientation. To successfully accomplish the device calibration procedure, the following steps must be performed (Fig. 5):

1. Maximum voltage output reading from flex sensor associated with maximum forceps opening;

2. Minimum voltage output reading from flex sensor associated with forceps closing;

3. The three sensors’ output readings (accelerometer, gyroscope and magnetometer) associated with an initial horizontal instrument positioning.

After this phase, one is able to initiate the training game with full and accurate tracking of movements performed.
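As a sketch of how the first calibration phase could be used at runtime, the recorded flex-sensor extremes can be mapped linearly to a forceps opening angle; the structure, the names and the angular range are illustrative assumptions.

/* Sketch of the flex-sensor calibration mapping: the voltages recorded at
 * maximum and minimum forceps opening are mapped linearly to an opening
 * angle. The assumed maximum angle is an example, not a measured value. */
typedef struct {
    float v_open;    /* voltage at maximum forceps opening */
    float v_closed;  /* voltage with the forceps closed    */
    float max_deg;   /* assumed opening angle at v_open    */
} FlexCalibration;

static float forceps_angle(const FlexCalibration *cal, float v_now)
{
    float t = (v_now - cal->v_closed) / (cal->v_open - cal->v_closed);
    if (t < 0.0f) t = 0.0f;          /* clamp to the calibrated range */
    if (t > 1.0f) t = 1.0f;
    return t * cal->max_deg;         /* linear map, per the paper's assumption */
}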

E. Application Case-Study

In order to test the previously described hardware, a virtual training game was developed. Among the different skills that a surgeon must develop to perform MIS, one of the most important is hand-eye coordination. This skill is important because MIS is performed in a three-dimensional space (e.g., the abdomen), but the surgeon can only visualize it on a flat (2D) screen, which makes an accurate and well-trained visual capability essential to correctly carry out the surgery. The lack of depth perception is one of the main drawbacks of using a screen as a surgical aid. In laparoscopic surgery, such a lack of instrument orientation awareness may lead to serious consequences, in particular lesions to organs and other tissues. Practice is the proven solution to overcome these difficulties, and the authors believe that the game solution presented in this work can play a major role in helping the surgeon achieve 3D spatial perception from a 2D screen.

In the developed digital game, the "wire loop game" principle is applied in a virtual environment, in order to train visual-motor coordination.

The game consists of moving a ring along and around a wire path, previously defined to train specific surgical skills. Each time the ring touches the wire, the player is penalized according to (3). The collision penalization refers to the distance between the ring and the wire surfaces; it is higher than zero when the models intersect each other or the wire model goes beyond the ring's external diameter. The level score (FP) is then calculated according to (4), taking the calculated penalization into account.

cp = |dc| − (ri,r − rw)    (3)

FP = 100 − (tt + Σ cp) / lp    (4)

(cp – collision penalization; dc – distance between ring center and wire center; ri,r – internal ring radius; rw – wire radius; lp – level ponderation; tt – total play time)
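A direct transcription of formulas (3) and (4), as reconstructed above, could look like the following C sketch; our reading of (4), in which both play time and accumulated penalization reduce the score and are weighted by the level ponderation, is an assumption.

#include <math.h>

/* Eq. (3): positive when the wire and ring surfaces intersect. */
static float collision_penalization(float dist_centers,
                                    float ring_inner_radius,
                                    float wire_radius)
{
    return fabsf(dist_centers) - (ring_inner_radius - wire_radius);
}

/* Eq. (4), as we read the reconstructed formula: the level score decreases
 * with total play time and accumulated penalization, scaled by the level
 * ponderation. */
static float level_score(float total_time_s, float penalization_sum,
                         float level_ponderation)
{
    return 100.0f - (total_time_s + penalization_sum) / level_ponderation;
}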

To play the game, the player uses the physical surgical instrument, with all the sensors attached, to control the virtual one, manipulating the virtual ring's 3D position and moving it along the wire while avoiding touching it (Fig. 6).

One important characteristic of the game is the possibility to generate different training levels, in order to improve specific skills and keep players motivated. The customization of the game levels and their difficulty is achieved by selecting different wire circuits, with different spatial configurations, and by changing the ring's diameter. When the trainee finishes a level, a more complex one is presented, by changing one or both parameters, allowing the enhancement of the trainee's skills. At the end of a training session, a final score is given according to the sum of the points achieved in all levels played. The XNA Framework (Microsoft) was the primary technology used for the game development, while the three virtual models were created using the Blender 3D modeling software.

Even though the visual realism of the objects is an important aspect, especially for promoting a good gaming experience, virtual interaction assumes the main role in this kind of training game. To achieve high-quality virtual interaction, one needs accurate and efficient collision detection algorithms to mimic realistic environments, as well as to simulate the interaction between the instrument and the ring. However, the implementation of these algorithms can become computationally heavy, in particular when not optimized, since millions of vertices may have to be tested in a short period of time. Therefore, it is essential to implement an efficient collision detection strategy, able to reduce the computational processing. A possible way of doing this is to divide the collision algorithm into detection layers. In this serious game, as only rigid bodies are involved, the detection algorithm was simplified to two general layers (Fig. 7).

First, a collision test is performed between the global bounding structures of each model, so that if no collision exists at this level, no collision can occur between the models' contours. This greatly reduces the number of polygons that have to be inspected for collision. On the other hand, if an intersection exists between the global collision bounds, a second detection layer is applied, which performs collision detection based on the models' geometric definition and the data concerning the relative positioning of each model. This process is divided into different steps, according to the model geometry.
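A simplified sketch of this two-layer strategy in C is shown below; bounding spheres stand in for the global bounding structures, and the narrow-phase test is left as a placeholder for the geometry-specific step.

#include <stdbool.h>

typedef struct { float x, y, z, radius; } BoundingSphere;

/* Broad phase: cheap sphere-versus-sphere overlap test (squared distances, no sqrt). */
static bool spheres_overlap(const BoundingSphere *a, const BoundingSphere *b)
{
    float dx = a->x - b->x, dy = a->y - b->y, dz = a->z - b->z;
    float d2 = dx * dx + dy * dy + dz * dz;
    float r  = a->radius + b->radius;
    return d2 <= r * r;
}

/* Two-layer test: only when the global bounds overlap is the exact,
 * geometry-specific narrow phase evaluated. */
static bool models_collide(const BoundingSphere *ring, const BoundingSphere *wire,
                           bool (*narrow_phase)(void))
{
    if (!spheres_overlap(ring, wire))
        return false;                /* most frames stop at the broad phase */
    return narrow_phase();           /* second layer: exact per-polygon test */
}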

Figure 6. Objects interaction during gameplay.

Figure 5. Visual configuration interface.

Figure 7. (a) Models bounding structures; (b) collision detection algorithm.

This layered collision detection algorithm allows a significant reduction in the computational effort involved since there is no need for the whole model to be tested. As the graphics can make the application heavier, limiting the number of models or reducing their definition can optimize and accelerate the simulation process.

To meet this requirement, an important feature of the XNA framework was used: the BoundingFrustum class. This consists of not rendering graphical components when they are not captured by the camera.

Based on the camera's position and orientation, this is a simple method to reduce the number of objects that need to be drawn, thus diminishing the resources allocated to each frame. This is a very important feature of the implementation, especially for broadening its use to computers with lower processing capabilities.
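XNA's BoundingFrustum encapsulates this test; for illustration only, a generic sphere-versus-frustum visibility check could be sketched as follows in C, with the plane representation and the extraction of the six planes from the camera matrices assumed and left outside the snippet.

#include <stdbool.h>

typedef struct { float nx, ny, nz, d; } Plane;   /* n·p + d = 0, normal pointing inward */
typedef struct { float x, y, z, radius; } Sphere;

/* A model is culled when its bounding sphere lies entirely behind any of
 * the six camera planes; otherwise it is considered potentially visible. */
static bool sphere_visible(const Plane planes[6], const Sphere *s)
{
    for (int i = 0; i < 6; i++) {
        float dist = planes[i].nx * s->x + planes[i].ny * s->y
                   + planes[i].nz * s->z + planes[i].d;
        if (dist < -s->radius)
            return false;            /* completely outside one plane: skip drawing */
    }
    return true;
}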

III. RESULTS AND DISCUSSION

The main goal of this work is to implement a sensor-based monitoring system able to fulfill the different requirements associated with the medical scenario, while presenting the accuracy needed to meet medical high standards.

In the current development stage of this project, the authors have not yet been able to conduct wide-range trials with controlled groups of clinicians. However, some preliminary results from laboratory testing of the developed prototype have already been obtained.

The preliminary tests were performed in order to understand the behavior of each DOF individually, by means of its associated sensors. To this end, a software program specifically dedicated to simulating and inspecting test data was developed for representing the movements and measuring the associated errors.

The flex sensor capabilities were tested, and it was concluded that the forceps movements were detectable over the whole movement range, with a resulting sensitivity below 1°. However, it was found that the calibration performed by the user may be subject to additional errors, as the laparoscopic instrument handle presents a slight looseness. In particular, this looseness allows the handle to be moved even when the forceps are closed or at their maximum opening, which introduces errors into the registration of the maximum and minimum forceps opening. This can be attenuated, and even completely avoided, if the user performs the calibration correctly, registering the position where the maximum forceps opening actually occurs instead of the position where the handle is at its maximum opening. A similar reasoning applies to the closing case.

Regarding the instrument’s orientation, the individual DOF testing was performed for the IMU and magnetometer set.

For the 2nd DOF testing, the instrument was rotated around its axis and the maximum angle that could be measured without interfering with the remaining DOFs was recorded. The same procedure was used for the 3rd DOF (left/right rotation) and for the 4th DOF (forward/backward rotation). From these tests, an approximate maximum rotation angle of 70° was registered in each direction. Once again, this value can drop to approximately 65° when the calibration is not performed correctly, since the initial reference becomes biased.

A global test of the instrument was performed using the developed game (Fig. 6). A good correlation between the real and virtual movements was observed. However, further tests are needed, using different environments and a wider movement range, in order to obtain more precise, concrete and consistent conclusions about the utility of this prototype for its intended future application. Nevertheless, the obtained preliminary results are very promising, leading one to believe that this approach can indeed provide a low-cost and accurate training environment for laparoscopic surgery.

IV. CONCLUDING REMARKS AND FUTURE WORK

In this work, a motion tracking system for laparoscopic instruments was developed using accurate, low-cost sensors. The developed prototype targets MIS simulation, providing unique features, such as the removability of the sensors attached to the surgical instruments and a "plug and play" logic. The combination of these characteristics makes this solution ideal for a wide range of situations, in particular for private use, while fulfilling the main requirements identified by trainer clinicians for this kind of device. Additionally, the prototype was tested for surgeons' hand-eye coordination training, through the implementation of a wire-loop-based game, demonstrating that the prototype can be used in a realistic simulator.

As future work, several improvements are already under development: (a) the overall prototype optimization, mainly of the sensors' data processing; (b) an increase in the number and type of sensors, especially for tracking the 1st and 5th DOFs; and (c) the addressing of other surgical skills to be trained, along with their surgical instruments.

To achieve more realistic virtual environments, one needs to extend the collision detection from rigid bodies to soft bodies (e.g. organs), cloth (e.g. membranous tissues) and fluids (e.g. blood). In this case, the collision process becomes heavier and more complex, and the need for a physics engine becomes obvious.

Another important aspect of future work is to perform a human-independent sensitivity test for each of the captured DOFs. Moreover, the use of more complex methods for error elimination in the data processing stage will also be a topic of future work, in order to improve the sensitivity and reliability of the movement representation.

Finally, one would like to emphasize that this work is a first step in a much more ambitious project that targets the development of a realistic, low-cost, multi laparoscopic surgery simulator.

ACKNOWLEDGMENT

This work was supported by ‘‘Fundação para a Ciência e a Tecnologia”, Portugal (FCT) through the Postdoc grant referenced SFRH/BPD/46851/2008 and R&D project referenced PTDC/SAU-BEB/103368/2008.

REFERENCES

[1] J. M. Michael, "Minimally invasive and robotic surgery," The Journal of the American Medical Association, vol. 285, Feb. 2001, pp. 568-572, doi:10.1001/jama.285.5.568.

[2] S. M. B. I. Botden and J. J. Jakimowicz, "What is going on in augmented reality simulation in laparoscopic surgery?," Surgical Endoscopy and Other Interventional Techniques, vol. 23, Aug. 2009, pp. 1693-1700, doi:10.1007/s00464-008-0144-1.

[3] D. J. Deziel, et al., "Complications of Laparoscopic Cholecystectomy - a National Survey of 4,292 Hospitals and an Analysis of 77,604 Cases," American Journal of Surgery, vol. 165, Jan. 1993, pp. 9-14.

[4] J. Shah and A. Darzi, "Simulation and skills assessment," Proc. International Workshop on Medical Imaging and Augmented Reality 2001, Jun. 2001, pp. 5-9, doi:10.1109/MIAR.2001.930256.

[5] R. Flin, P. O'Connor, and K. Mearns, "Crew resource management: improving team work in high reliability industries," Team Performance Management, vol. 8, 2002, pp. 68-78, doi:10.1108/13527590210433366.

[6] K. Maschuw, I. Hassan, and D. Bartsch, "Surgical training using simulator. Virtual reality," Der Chirurg; Zeitschrift für alle Gebiete der operativen Medizen, vol. 81, Jan. 2010, pp. 19-24, doi:10.1007/s00104-009-1757-1.

[7] C. R. Larsen, et al., "Effect of virtual reality training on laparoscopic surgery: randomised controlled trial," BMJ, vol. 338, May 2009, p. b1802, doi:10.1136/bmj.b1802.

[8] K. S. Gurusamy, R. Aggarwal, L. Palanivelu, and B. R. Davidson, "Virtual reality training for surgical trainees in laparoscopic surgery," Cochrane Database of Systematic Reviews, vol. 21, Jan. 2009, p. CD006575, doi:10.1002/14651858.CD006575.pub2.

[9] S. Shrestha, et al., "The role of simulator Promis2 in learning laparoscopic skill," JNMA, Journal of the Nepal Medical Association, vol. 48, Jul.-Sep. 2009, pp. 221-225.

[10] D. M. Gaba, "The future vision of simulation in health care," Quality and Safety in Health Care, vol. 13, Apr. 2004, pp. i2-i10, doi:10.1136/qshc.2004.009878.

[11] I. Ayodeji, M. Schijven, J. Jakimowicz, and J. Greve, "Face validation of the Simbionix LAP Mentor virtual reality training module and its applicability in the surgical curriculum," Surgical Endoscopy, vol. 21, Sep. 2007, pp. 1641-1649, doi:10.1007/s00464-007-9219-7.

[12] S. N. Buzink, R. H. M. Goossens, H. D. Ridder, and J. J. Jakimowicz, "Training of basic laparoscopy skills on SimSurgery SEP," Minimally Invasive Therapy & Allied Technologies, vol. 19, Feb. 2010, pp. 35-41, doi:10.3109/13645700903384468.

[13] J. S. Zhang, et al., "A novel laparoscopic surgery simulator: system and evaluation," International Conference on Information Technology and Applications in Biomedicine (ITAB 2008), May 2008, pp. 467-470, doi:10.1109/ITAB.2008.4570601.

[14] L. Panait, et al., "The role of haptic feedback in laparoscopic simulation training," Journal of Surgical Research, vol. 156, Oct. 2009, pp. 312-316, doi:10.1016/j.jss.2009.04.018.

[15] A. B. Dayan, A. Ziv, H. Berkenstadt, and Y. Munz, "A simple, low-cost platform for basic laparoscopic skills training," Surgical Innovation, vol. 15, Jun. 2008, pp. 136-142, doi:10.1177/1553350608318142.

[16] A. C. Jambon, P. Dubois, and S. Karpf, "A low-cost training simulator for initial formation in gynecologic laparoscopy," Lecture Notes in Computer Science - CVRMed-MRCAS'97, vol. 1205, Mar. 1997, pp. 347-356, doi:10.1007/BFb0029256.

[17] C. Sokollik, J. Gross, and G. Buess, "New model for skills assessment and training progress in minimally invasive surgery," Surgical Endoscopy, vol. 18, Mar. 2004, pp. 495-500, doi:10.1007/s00464-003-9065-1.

[18] M. K. Chmarra, N. H. Bakker, C. A. Grimbergen, and J. Dankelman, "TrEndo, a device for tracking minimally invasive surgical instruments in training setups," Sensors and Actuators A: Physical, vol. 126, Feb. 2006, pp. 328-334, doi:10.1016/j.sna.2005.10.040.

[19] C. Feng, et al., "Surgical training and performance assessment using a motion tracking system," Proc. 2nd European Modeling and Simulation Symposium (EMSS 2006), Oct. 2006, pp. 647-652.

[20] G. B. Hanna, T. Drew, P. Clinch, B. Hunter, and A. Cuschieri, "Computer-controlled endoscopic performance assessment system," Surgical Endoscopy, vol. 12, Jul. 1998, pp. 997-1000, doi:10.1007/s004649900765.

[21] R. J. Meinhold and N. D. Singpurwalla, "Understanding the Kalman filter," The American Statistician, vol. 37, May 1983, pp. 123-127, doi:10.2307/2685871.