Paper:
Object Grasping Instructions to Support Robot by Laser Beam One Drag Operations

Momonosuke Shintani∗, Yuta Fukui∗, Kosuke Morioka∗, Kenji Ishihata∗, Satoshi Iwaki∗, Tetsushi Ikeda∗, and Tim C. Lüth∗∗

∗Hiroshima City University
3-4-1 Ozukahigashi, Asaminami, Hiroshima, Hiroshima 731-3194, Japan
E-mail: {shintani, fukui, morioka, ishihata, iwaki, ikeda}@robotics.info.hiroshima-cu.ac.jp
∗∗Technical University of Munich (TUM)
We excluded one subject’s instruction-time data for A (coffee cup) for the following reason. When we interviewed this subject about his behavior in that experiment, he responded, “I repeatedly practiced hitting the object accurately with the laser by trial and error, regardless of the instruction operations and without caring about the operation time.” We therefore judged that he had deviated significantly from the action instructed by the experiment planner, and that his data should be treated as an outlier and excluded from the instruction-time evaluation. The overall grasping success rate was 78.1%. Fig. 16 shows the ratios of the drag modes selected by the test subjects in the real experiment, together with the numbers of successes and failures. Below, we discuss the considerations drawn from the experimental results.
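As a concrete illustration of the box-plot statistics reported in Fig. 15, the short sketch below (our own illustration, not from the paper; the sample values are hypothetical) computes the median and interquartile range of a set of instruction times and shows the effect of removing one aberrant trial. Note that the exclusion described above was based on the subject interview, not on a statistical rule; the 1.5×IQR screen here is only the conventional box-plot criterion.

```python
import numpy as np

# Hypothetical per-object instruction times in seconds; Fig. 15 reports
# such data as box plots (median and interquartile range).
times = np.array([14.2, 15.8, 16.5, 17.1, 18.0, 19.4, 21.3, 55.0])

q1, median, q3 = np.percentile(times, [25, 50, 75])
iqr = q3 - q1
print(f"median = {median:.2f} s, IQR = {iqr:.2f} s")

# Excluding one aberrant trial (here via the conventional 1.5*IQR screen;
# the paper's exclusion was based on an interview instead) lowers the
# median and greatly narrows the overall spread of the data.
kept = times[times <= q3 + 1.5 * iqr]
print(f"after exclusion: median = {np.median(kept):.2f} s, "
      f"IQR = {np.subtract(*np.percentile(kept, [75, 25])):.2f} s")
```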
Fig. 15. Measured instruction times for grasping each object by the eight test subjects.

Fig. 16. Drag modes selected in the real experiment, with the numbers of successes and failures.

The success rate for grasping C (PET bottle) in the real experiment was 87.5%. Six test subjects selected the right drag mode to give the grasping instruction, although the instruction could have been given equally easily with either the right or the left drag mode. This seems attributable to the fact that the PET bottle trial came immediately after the practice session, so the subjects may have been strongly influenced by the successes they had just experienced in practice. The mean instruction time for C in the real experiment was the shortest, and its interquartile range the narrowest, among all the objects, so we consider that the users were able to plan their grasping strategies and drag the laser without hesitation.
The success rate for grasping A (coffee cup) in the real experiment was 62.5%. Five test subjects were able to give instructions easily and grasp A with the right drag mode, as in A of Fig. 14, in the same way as for C. Nevertheless, among the six types of objects, only the instruction time for A increased in the real experiment compared with the practice experiment. This is attributable to the fact that three subjects deliberately tried to grasp A at a grasp part that was considered difficult to grasp: the part is nearly semi-ring shaped, with a maximum width of only about 1 cm, so accurately irradiating such a narrow part with the laser takes a very long time. In fact, their instructions did not turn out as intended, and all of those grasping attempts failed.
The success rate for grasping F (plastic plate) in the real experiment was 75%. We expected the subjects to grasp it either at its edge from above, using the drag trajectory shown in Fig. 14, or sideways at the part protruding from the base, using the right drag mode. In practice, the right drag was used more often. Moreover, because the area over which the laser can be dragged is much smaller than that of A, the grasping instruction time was longer. The interquartile range for F, the widest among the grasped objects, also seems to reflect the difficulty of providing grasping instructions for it.
The success rate for grasping G (sponge) in the real experiment was 100%. With its simple shape, flexibility, and moderate friction, it was expected to be the easiest object to grasp, even for beginners. Compared with the practice experiment, however, the grasping instruction times in the real experiment showed wider maximum-to-minimum and interquartile ranges. This seems attributable to the fact that the instructions for grasping G, which use its corners, differed from those for C, A, and F. Furthermore, because a grasping strategy could be planned easily with either drag mode, the subjects tended to hesitate over which mode to select, which lengthened the instruction time. Even so, the instruction time for G was relatively short compared with the other objects, so its difficulty appears to be as low as expected.
The success rate for grasping K (food bag) in the real experiment was 87.5%. Seven of the eight test subjects selected the left drag mode to grasp it. Because the bag has no simple convex shape, only side faces of small area, it seems to have been difficult to plan any strategy other than the left drag. As the subjects selected the left drag without hesitation in the real experiment, both the mean and the median instruction times were lower there.
The success rate for grasping J (wrapping container) in the real experiment was 75%. As this was the sixth grasping trial and the subjects had gradually become familiar with the system, they dared to grasp it in ways different from the one they had used in the practice experiment; accordingly, they selected the right drag mode relatively more often than the left. The relatively short grasping instruction time also seems attributable to this familiarization effect.
From the above considerations for each object, we can make the following overall observations.
For objects with geometrically simple shapes, where a sufficiently wide grasp part’s plane region can be generated, grasping instructions are relatively easy to give with either drag mode, and the grasping success rates are high.
For objects with complex shapes, however, the user needs to consider in detail how to generate a grasp part’s plane region suited to the shape of the grasp part. Efficient use of the system therefore requires sufficient familiarization.
To minimize the number of instructions in this study, we adopted a one-drag approach and restricted the system to making the x_G z_G plane of the gripper frame Σ_G parallel to the grasp part’s plane; as a result, the two planes cannot be set at right angles to each other merely by choosing between the left and right drag modes. In the future, we may therefore need to increase the number of selectable modes by assigning additional geometric meaning to the one-drag trajectory.
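To make this geometric restriction concrete, the following minimal sketch (our own illustration, not the authors’ implementation; the function name and the use of the drag direction are assumptions) constructs a gripper rotation in which y_G, the normal of the x_G z_G plane of Σ_G, is aligned with the grasp plane’s normal. The sign choice stands in for the left/right drag modes, and the drag direction fixes the one remaining rotational degree of freedom. Because y_G is always parallel or antiparallel to the plane normal, the x_G z_G plane can never be made perpendicular to the grasp plane, which is exactly the limitation noted above.

```python
import numpy as np

def gripper_rotation(plane_normal, drag_dir, right_mode=True):
    """Rotation matrix with columns [x_G, y_G, z_G], y_G || plane_normal.

    Hypothetical helper: keeps the gripper's x_G z_G plane parallel to the
    grasp part's plane; assumes drag_dir is not parallel to plane_normal.
    """
    y = plane_normal / np.linalg.norm(plane_normal)
    if not right_mode:               # left drag: approach from the other side
        y = -y
    # Project the drag direction into the grasp plane to fix the last DOF.
    x = drag_dir - np.dot(drag_dir, y) * y
    x /= np.linalg.norm(x)
    z = np.cross(x, y)               # completes a right-handed frame
    return np.column_stack([x, y, z])

# Example: horizontal grasp plane (normal +z), hypothetical drag direction.
R = gripper_rotation(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.2]))
print(np.round(R, 3))
```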
The causes of grasping failure fall roughly into two classes: inaccurate instructions and low accuracy in robot control. The former is mainly attributable to the fact that when the grasp part is relatively small, the reliability of the point-cloud data obtained by the real-world drag is degraded, making it difficult to compute a numerically stable grasp part’s plane region. The latter seems mainly attributable to the experimental setup, in which the mobile base’s degrees of freedom, which are less accurate than the robot arm’s five DOF, had to be used to realize arbitrary gripper attitudes.
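The point-cloud issue can be illustrated with a standard least-squares plane fit; the sketch below (an assumption for illustration, not the paper’s actual algorithm) fits a plane via SVD to points sampled along a drag. The smallest singular value measures out-of-plane noise; when the dragged region is narrow, the second-smallest singular value approaches it, and the fitted normal, and hence the grasp part’s plane region, becomes numerically unstable.

```python
import numpy as np

def fit_plane(points):
    """SVD plane fit to an (N, 3) array; returns centroid, normal, stability."""
    centroid = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                   # direction of least variance
    stability = s[-2] / s[-1]         # near 1 means an ill-defined normal
    return centroid, normal, stability

rng = np.random.default_rng(0)
# Hypothetical drag: 40 samples over a 10 cm x 3 cm patch, 3 mm depth noise.
wide = np.column_stack([rng.uniform(0, 0.10, 40),
                        rng.uniform(0, 0.03, 40),
                        rng.normal(0, 0.003, 40)])
# Same noise over a ~1 cm grasp part, like the coffee-cup handle above:
# in-plane extent now comparable to the noise, so the fit degenerates.
narrow = wide * np.array([0.1, 0.3, 1.0])

for name, pts in [("wide patch", wide), ("narrow part", narrow)]:
    _, n, k = fit_plane(pts)
    print(f"{name}: normal ~ {np.round(n, 2)}, stability = {k:.1f}")
```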
We also found that the smaller the grasp part and the more distant the object, the more difficult it is to secure a stable grasp part’s plane region. In the future, we may therefore need to consider a system in which, for example, only the grasping point is specified by RWC while the grasping attitude is instructed by some other means.
5. Conclusion
We have proposed an object-grasping instruction system that, by extending our earlier RWC-based system, can intuitively instruct not only the grasping point of an object but also the grasping attitude with a single laser drag. We demonstrated the basic validity of the proposed system through instruction and grasping experiments using many different types of daily-use items. We also evaluated the proposed system’s usability as an instruction system and the grasping performance of the experimental system through experiments with multiple test subjects. As a major result, the proposed system achieved an overall success rate of over 70% in experiments in which eight test subjects attempted to grasp six types of objects, as shown in Fig. 14. In the future, we will conduct more detailed evaluation experiments and address the issues with the current system described in Section 4.3.2. Building on this development, we aim to put the proposed system into practical use as soon as possible as an object-grasping technology for nursing-care and life-support robots, mainly for people with lower-limb mobility impairments.
Acknowledgements
This study was supported in part by JSPS Grant-in-Aid for Scientific Research JP18K12151 and in part by the Suzuken Memorial Foundation.
Name: Momonosuke Shintani
Affiliation: Hiroshima City University
Address: 3-4-1 Ozukahigashi, Asaminami, Hiroshima, Hiroshima 731-3194, Japan
Brief Biographical History:
2020 Graduated from Hiroshima City University

Name: Yuta Fukui
Affiliation: Hiroshima City University
Address: 3-4-1 Ozukahigashi, Asaminami, Hiroshima, Hiroshima 731-3194, Japan
Brief Biographical History:
2019 Graduated from Hiroshima City University
2019-2021 Master’s Course Student, Hiroshima City University

Name: Kosuke Morioka
Affiliation: Hiroshima City University
Address: 3-4-1 Ozukahigashi, Asaminami, Hiroshima, Hiroshima 731-3194, Japan
Brief Biographical History:
2019 Graduated from Hiroshima City University

Name: Kenji Ishihata
Affiliation: Hiroshima City University
Address: 3-4-1 Ozukahigashi, Asaminami, Hiroshima, Hiroshima 731-3194, Japan
Brief Biographical History:
2018 Graduated from Hiroshima City University
2018-2020 Master’s Course Student, Hiroshima City University

Name: Satoshi Iwaki
Affiliation: Hiroshima City University
Address: 3-4-1 Ozukahigashi, Asaminami, Hiroshima, Hiroshima 731-3194, Japan
Brief Biographical History:
1984 Received M.E. from Hokkaido University
1984- Nippon Telegraph and Telephone Corp.
2007- Professor, Graduate School of Informatics, Hiroshima City University
Membership in Academic Societies:
• The Japan Society of Mechanical Engineers (JSME)
• The Society of Instrument and Control Engineers (SICE)
• The Robotics Society of Japan (RSJ)
• The Institute of Electrical and Electronics Engineers (IEEE)

Name: Tetsushi Ikeda
Affiliation: Hiroshima City University
Address: 3-4-1 Ozukahigashi, Asaminami, Hiroshima, Hiroshima 731-3194, Japan
Brief Biographical History:
1997 Received M.E. from Kyoto University
1997- Mitsubishi Electric Corp.
2016- Lecturer, Graduate School of Informatics, Hiroshima City University
Membership in Academic Societies:
• The Institute of Electronics, Information and Communication Engineers (IEICE)
• The Society of Instrument and Control Engineers (SICE)