Qualitative Vision-Based Mobile Robot Navigation Zhichao Chen and Stanley T. Birchfield Dept. of Electrical and Computer Engineering Clemson University Clemson, South Carolina USA
Jan 04, 2016
Motivation
• Goal: Enable mobile robot to follow a desired trajectory in both indoor and outdoor environments
• Applications: courier, delivery, tour guide, scout robots
• Previous approaches:
  – Image Jacobian [Burschka and Hager 2001]
  – Homography [Sagues and Guerrero 2005]
  – Homography (flat ground plane) [Liang and Pears 2002]
  – Man-made environment [Guerrero and Sagues 2001]
  – Calibrated camera [Atiya and Hager 1993]
  – Stereo cameras [Shimizu and Sato 2000]
  – Omni-directional cameras [Adorni et al. 2003]
Our approach
• Key intuition: Vastly overdetermined system (dozens of feature points, one control decision)
• Key result: Simple control algorithm
  – Teach / replay approach using sparse feature points
  – Single, off-the-shelf camera
  – No calibration for camera or lens
  – Easy to implement (no homographies or Jacobians)
Preview of results
Tracking feature points
Kanade-Lucas-Tomasi (KLT) feature tracker
• Automatically selects features using eigenvalues of the 2x2 gradient covariance matrix
• Automatically tracks features by minimizing the sum of squared differences (SSD) between consecutive image frames
• Augmented with gain and bias to handle lighting changes
• Open-source implementation
The tracker finds the displacement d that minimizes the SSD residual over a window W:

    ε = ∫∫_W [ J(x + d/2) − I(x − d/2) ]² dx

where d is the unknown displacement and I, J are consecutive gray-level images. The minimization leads to solving Z d = e, where

    Z = ∫∫_W g(x) g(x)ᵀ dx

and g is the gradient of the image.

[http://www.ces.clemson.edu/~stb/klt]
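The feature-selection criterion above (eigenvalues of the 2x2 gradient covariance matrix) can be sketched in a few lines. This is a hypothetical NumPy illustration, not the KLT library's code; the function name and window half-size are assumptions.

```python
import numpy as np

def min_eig(image, y, x, half=3):
    """KLT-style feature quality at pixel (y, x): the minimum eigenvalue
    of the 2x2 gradient covariance matrix summed over a square window."""
    img = image.astype(float)
    gy, gx = np.gradient(img)            # image gradients g = (gx, gy)
    wy = slice(y - half, y + half + 1)
    wx = slice(x - half, x + half + 1)
    gxx = np.sum(gx[wy, wx] ** 2)        # entries of sum g(x) g(x)^T
    gyy = np.sum(gy[wy, wx] ** 2)        # over the window
    gxy = np.sum(gx[wy, wx] * gy[wy, wx])
    tr = gxx + gyy                       # closed-form eigenvalues of
    det = gxx * gyy - gxy ** 2           # [[gxx, gxy], [gxy, gyy]]
    disc = np.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 - disc
```

A corner, with strong gradients in two directions, yields a large minimum eigenvalue; edges and flat regions score near zero, which is why this measure selects reliably trackable features.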
Handling lighting changes
[Figure: tracking comparison, original vs. modified KLT tracker, under lighting changes caused by:]
• environmental conditions (clouds blocking the sun)
• automatic gain control of the camera
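The gain-and-bias augmentation can be illustrated with a closed-form patch comparison. This is a minimal sketch, assuming the gain is estimated from the ratio of patch standard deviations and the bias from the patch means; the actual modified tracker solves for these jointly with the displacement, and all names here are hypothetical.

```python
import numpy as np

def gain_bias_ssd(patch_i, patch_j):
    """SSD between two patches after removing a global gain/bias
    (lighting) difference: fit J ≈ a*I + b, then compare."""
    I = patch_i.astype(float).ravel()
    J = patch_j.astype(float).ravel()
    a = J.std() / max(I.std(), 1e-9)   # gain estimate
    b = J.mean() - a * I.mean()        # bias estimate
    return np.sum((a * I + b - J) ** 2)
```

A patch whose brightness and contrast have shifted uniformly (e.g. a cloud passing, or the camera's gain control kicking in) still produces a near-zero residual, so the feature is not lost.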
Teach-Replay
Teaching Phase (start → destination): detect features, track features

Replay Phase: track features, compare features (current feature vs. goal feature; initial feature vs. goal feature)
Qualitative decision rule

[Figure: landmark, image plane, and funnel lane, with the robot at the goal; uCurrent and uGoal are the feature's horizontal image coordinates]

• Feature is to the right: |uCurrent| > |uGoal| → "Turn right"
• Feature has changed sides: sign(uCurrent) ≠ sign(uGoal) → "Turn left"
• No evidence → "Go straight"
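The per-feature rule above can be written as a tiny function. A sketch with hypothetical names, where u_current and u_goal are a feature's signed horizontal coordinates (measured from the image center) in the current and goal images, following the slide's straight-ahead case:

```python
def feature_vote(u_current, u_goal):
    """Qualitative decision for one tracked feature."""
    if u_current * u_goal < 0:        # feature has changed sides
        return "turn left"
    if abs(u_current) > abs(u_goal):  # feature is to the right
        return "turn right"
    return "go straight"              # inside the funnel lane: no evidence
```

Note that no camera calibration is needed: the rule compares only signs and magnitudes of raw image coordinates.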
The funnel lane at an angle

[Figure: the funnel lane rotated by an angle α about the robot at the goal; landmark, image plane, and feature as before. Inside the lane there is no evidence: "Go straight".]
The funnel lane created by multiple feature points

[Figure: overlapping funnel lanes from Landmark #1, Landmark #2, and Landmark #3, intersecting in an ambiguous area at angle α]

• Feature is to the right → "Turn right"
• Side change → "Turn left"
• No evidence → "Do not turn"
A simplified example

[Figure: robot approaching the goal through a sequence of funnel lanes, issuing "Turn right", "Turn left", and "Go straight" commands as features leave and re-enter each lane]
Qualitative control algorithm

Funnel constraints (feature lies inside the funnel lane):

    sign(uCurrent) = sign(uGoal)   and   |uCurrent| ≤ |uGoal|

Voting scheme: each feature votes either
• "turn right", or
• "turn left"
Majority rules.
End of segment: reached when the mean squared error increases
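Combining the voting scheme with the end-of-segment test gives a minimal sketch of the control loop's decision step. All names are hypothetical; features are assumed to be (u_current, u_goal) coordinate pairs as in the decision rule.

```python
def vote(features):
    """Majority vote over (u_current, u_goal) pairs: a feature outside
    the funnel lane votes 'turn right' or 'turn left'; ties and
    in-lane features give 'go straight'."""
    left = sum(1 for uc, ug in features if uc * ug < 0)
    right = sum(1 for uc, ug in features
                if uc * ug >= 0 and abs(uc) > abs(ug))
    if right > left:
        return "turn right"
    if left > right:
        return "turn left"
    return "go straight"

def segment_done(prev_mse, features):
    """End of segment: the mean squared error between current and goal
    coordinates stops decreasing and begins to increase."""
    mse = sum((uc - ug) ** 2 for uc, ug in features) / len(features)
    return mse > prev_mse, mse
```

Because dozens of features vote on a single turn decision, a handful of mistracked features is simply outvoted, which is the "vastly overdetermined system" intuition from the start of the talk.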
Experimental results
Videos available at http://www.ces.clemson.edu/~stb/research/mobile_robot
Experimental results
• Indoor: Imaging Source Firewire camera
• Outdoor: Logitech Pro 4000 webcam
Conclusion
• Approach
  – teach-replay, comparing image coordinates of feature points
  – qualitative decision rule (no Jacobians, homographies)
• Advantages
  – off-the-shelf camera
  – no calibration (not even lens distortion)
  – simple, easy to implement
  – tested in both indoor and outdoor environments
• Future work
  – variable driving speed (sharp turns)
  – integration with other sensors (odometry, GPS)
  – obstacle avoidance