Quadrotor Vision-based Localization for Amphibious
Robots in Amphibious Area
Abstract – Considering imaging quality and air-water medium changes, the localization of multiple amphibious robots in GPS-denied outdoor environments is a great challenge. This paper presents a vision-based localization approach for multiple amphibious robots in an amphibious environment, using a quadrotor hovering over the robots. Based on the circular shape the quadrotor observes on land, a shape- and color-based detection method is designed to identify the robots: an improved Hough transform speeds up the shape detection, and color information is then used to distinguish individual robots. In water, the ASRobot can perform multiple motions with different leg configurations. Therefore, in view of the different shapes generated by these configurations, a size-varying multi-template matching method is used to recognize the robots in water. To account for the refraction of light rays, a vision-based localization model is built for the amphibious environment. Finally, localization experiments were conducted, and the results verified the feasibility of the proposed vision-based localization approach for ASRobots.
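The circle-detection step summarized above can be illustrated with a minimal fixed-radius Hough voting scheme (a plain sketch on assumed synthetic edge points, not the paper's improved variant): each edge pixel votes for all candidate centers lying at the known radius, and the accumulator peak gives the circle center.

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Vote for circle centers at a known radius (2-D Hough accumulator)."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for (y, x) in edge_points:
        # candidate centers lie on a circle of the same radius around the edge point
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# synthetic edge points on a circle centered at (30, 40) with radius 10
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
pts = np.stack([30 + 10 * np.sin(t), 40 + 10 * np.cos(t)], axis=1)
center = hough_circle_center(pts, radius=10, shape=(100, 100))
```

The paper's improved transform additionally prunes the parameter search to speed up detection; this sketch only shows the underlying voting principle.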
Index Terms – Amphibious Spherical Robots; vision-based
localization; quadrotor-based localization
I. INTRODUCTION
In recent years, with the increasing demand for exploring coastal and amphibious environments, exploration and research in these areas have become crucial activities. Due to the limitations of a single robot, many researchers focus on the cooperation of multiple amphibious robots. Realizing such cooperation requires location information, so the localization of amphibious robots on land and in water needs to be settled first.
The most widely known method is the Global Navigation Satellite System (GNSS) [1], which includes the commercially used Global Positioning System (GPS). However, like wireless-based localization [xx], GNSS signals are easily disturbed, for example by severe weather, and a robot submerged merely 20 cm beneath the surface loses the signal altogether. Besides, the accuracy is too low for close-range localization of robots. Nevertheless, various approaches for underwater localization have been widely proposed, such as acoustic localization methods [2] and inertial navigation methods [3]. Because high-frequency signals attenuate rapidly in water, the devices needed to reach remote places, such as acoustic modems, DVLs, INSs and sonars, are large, and these methods are therefore not suitable for small-scale robots.
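The rapid attenuation claim can be checked with the standard plane-wave attenuation constant for a lossy dielectric. The sketch below evaluates it for the GPS L1 frequency in seawater; the material constants (relative permittivity ~81, conductivity ~4 S/m) are assumed textbook values, not figures from this paper.

```python
import math

def attenuation_dB_per_m(freq_hz, eps_r, sigma):
    """Plane-wave attenuation constant in a lossy dielectric, in dB/m."""
    omega = 2 * math.pi * freq_hz
    eps = eps_r * 8.854e-12        # permittivity (F/m)
    mu = 4e-7 * math.pi            # permeability of non-magnetic medium (H/m)
    loss_tan = sigma / (omega * eps)
    alpha = omega * math.sqrt(mu * eps / 2 * (math.sqrt(1 + loss_tan**2) - 1))
    return 8.686 * alpha           # Np/m -> dB/m

# GPS L1 (1.575 GHz) in seawater, over a 20 cm submersion depth
loss = attenuation_dB_per_m(1.575e9, 81, 4.0) * 0.2
```

The resulting loss over 20 cm is on the order of a hundred decibels, far beyond the link margin of a GPS receiver, which is consistent with the statement that a shallowly submerged robot loses the signal.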
Besides these mainstream methods of underwater localization, cameras are very useful for localization in short-range tasks. Kim et al. proposed a vision-based localization method [4] that uses a forward-looking camera and artificial landmarks in structured underwater environments. Unlike the forward-looking camera, Carreras et al. proposed a down-looking camera-based localization approach [5,6] to estimate the position and orientation of an underwater robot using a coded pattern placed on the bottom of a water tank. To realize the automated self-assembly of large maritime structures, four overhead cameras were used to detect and position structures marked with AprilTags [7,8] via the cv2cg package. However, the elaborate arrangement of the structured 3D environment or of the robots limits the application of such marker-based localization.
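Common to these overhead and down-looking systems is the back-projection of a detected marker's pixel coordinates to a world position. A minimal sketch under a pinhole model with a camera looking straight down from a known height (all parameter values here are illustrative assumptions, not calibration data from the cited systems):

```python
import numpy as np

def pixel_to_ground(u, v, fx, fy, cx, cy, height):
    """Back-project pixel (u, v) to ground-plane coordinates (meters)
    for a down-looking pinhole camera at a known height."""
    X = (u - cx) * height / fx
    Y = (v - cy) * height / fy
    return X, Y

# assumed intrinsics: focal lengths 800 px, principal point (320, 320),
# camera hovering 2 m above the ground plane
X, Y = pixel_to_ground(720, 320, fx=800, fy=800, cx=320, cy=320, height=2.0)
```

Refraction at the air-water interface bends each ray, so this simple similar-triangles mapping no longer holds underwater; that is the motivation for the refraction-aware localization model mentioned in the abstract.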
Unlike the marker-based methods, Josep and Nuno et al. proposed a close-range tracking system [9,10] for autonomous underwater vehicles (AUVs) navigating in close formation, using active light beacons and computer vision. This system estimates the pose and location of a target vehicle at short range with an omnidirectional camera covering an extended field of view. Faessler et al. proposed a pose estimation system [11] that combines multiple infrared LEDs with cameras fitted with an infrared-pass filter; a quadrotor carrying the LEDs was positioned by an observing ground robot equipped with such a camera. The infrared LEDs can be detected by the vision system, so their positions on the target object can be precisely determined. However, if the infrared LEDs or active light beacons are occluded by the robot's body or other objects, the location and pose of the robot cannot be estimated, so these marker-based methods are also limited. Dispensing with markers, active LEDs and lights, Shao et al. proposed a double-template matching-based algorithm [12] to track a robotic fish using an overhead camera.
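Template matching of this kind scores a reference patch against every window of the overhead image and takes the best-scoring location. A minimal normalized cross-correlation sketch on assumed synthetic data (not the double-template or size-varying variants, which repeat this search with several templates or scales):

```python
import numpy as np

def match_template(image, template):
    """Exhaustive normalized cross-correlation; returns (row, col) of best match."""
    ih, iw = image.shape
    th, tw = template.shape
    tz = template - template.mean()
    tnorm = np.sqrt((tz**2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            win = image[r:r+th, c:c+tw]
            wz = win - win.mean()
            denom = np.sqrt((wz**2).sum()) * tnorm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * tz).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(0)
scene = rng.random((40, 40))
template = scene[12:20, 25:33].copy()   # plant the target at (12, 25)
pos = match_template(scene, template)
```

A size-varying scheme, as used for the ASRobot's changing underwater silhouette, would simply rerun this search with the template resampled to several candidate sizes and keep the globally best score.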
Huiming Xing 1,2, Shuxiang Guo 1,2,3*, Liwei Shi 1,2*, Xihuan Hou 1,2, Yu Liu 1,2, Yao Hu 1,2, Debin Xia 1,2, Zan Li 1,2

1 Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, the Ministry of Industry and Information Technology, School of Life Science, Beijing Institute of Technology, No.5, Zhongguancun South Street, Haidian District, Beijing 100081, China
2 Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing Institute of Technology, No.5, Zhongguancun South Street, Haidian District, Beijing 100081, China
3 Faculty of Engineering, Kagawa University, 2217-20 Hayashi-cho, Takamatsu, Kagawa, Japan

* Corresponding author