Lecture5 Vision Sensor Part2

May 30, 2018


    Depth from Focus (1)

    [Figure: lens and image-plane geometry for depth from focus]
    L = diameter of the lens or aperture
    R = radius of the blur circle on the image plane
    Image formation formula: shown in the original figure (a standard reconstruction follows).
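    Assuming the usual thin-lens model (my reconstruction, not copied from the slide), with focal length f, object distance d, in-focus image distance e, and the sensing plane at distance delta behind the lens:

        \frac{1}{f} = \frac{1}{d} + \frac{1}{e}

        R = \frac{L}{2} \cdot \frac{|e - \delta|}{e} = \frac{L \delta}{2} \left| \frac{1}{\delta} + \frac{1}{d} - \frac{1}{f} \right|

    For a fixed lens setting (f, L, delta), the blur-circle radius R therefore depends only on the object depth d.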


    Depth from Focus (2)

    Measure of sub-image intensity (I) gradient (the formula appeared in the original figure; a sketch of one common variant follows):
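    A minimal sketch of a gradient-based sharpness measure, assuming the common sum-of-squared-gradients form; the window size and the helper names in the usage comment are illustrative, not from the slide.

        import numpy as np

        def focus_measure(image, x0, y0, window=32):
            """Sum of squared intensity gradients over a sub-image.

            A sharper (better focused) sub-image gives a larger value.
            """
            sub = image[y0:y0 + window, x0:x0 + window].astype(float)
            gy, gx = np.gradient(sub)            # intensity gradients dI/dy, dI/dx
            return float(np.sum(gx ** 2 + gy ** 2))

        # Depth from focus: sweep the lens setting and keep the one that
        # maximizes the measure (capture() and lens_settings are hypothetical):
        # best = max(lens_settings, key=lambda s: focus_measure(capture(s), x0, y0))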

    Limitations of depth from focus techniques:
    (1) They lose sensitivity as objects move farther away (given a fixed focal length).
    (2) Slow focusing.

    [Figure: sharpness change between the near surface and the far surface]

    The lens optics are actively adjusted (searched) in order to maximize focus. This is not suitable for mobile robots because of its slow focusing.


    Depth from Defocus

    The second equation relates the depth of scene points, via R, to the observed image g. Solving for R would provide the depth.

    Another unknown is f(x, y), the focused image, which can be obtained with a pinhole-aperture lens model.
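    The two equations referenced above appeared only in the original figures. A standard reconstruction of the defocus imaging model (my notation; the pillbox blur kernel is an assumption, and the slide's exact form may differ): the observed image g is the focused image f blurred by a kernel h whose size is set by the blur-circle radius R,

        g(x, y) = h(x, y; R) \otimes f(x, y), \qquad h(x, y; R) = \begin{cases} \frac{1}{\pi R^2} & x^2 + y^2 \le R^2 \\ 0 & \text{otherwise} \end{cases}

    Combined with the thin-lens relation between R and depth, estimating R from the defocused image g yields the depth.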

    In summary, the basic advantage of the depth from defocus method is its extremely fast speed:

    No correlation search problem.

    No need to capture the scene from different perspectives, which may lead to occlusions and the disappearance of objects in the second view.

    Disadvantage: accuracy decreases with distance, as with all visual methods for ranging.


    Stereo Vision

    Idealized camera geometry for stereo vision (figure in the original slide).

    Disparity between the two images -> computation of depth.

    From the figure it can be seen that (the relation is reconstructed below):
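    For the idealized geometry (parallel optical axes separated by baseline b, identical focal length f), the standard relation, consistent with the observations on the next slide, is

        z = \frac{b \, f}{x_l - x_r}

    where x_l and x_r are the image coordinates of the same scene point in the left and right images, and x_l - x_r is the disparity.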


    Stereo Vision

    1. Distance is inversely proportional to disparity: closer objects can be measured more accurately (see the sketch after this list).

    2. Disparity is proportional to b. For a given disparity error, the accuracy of the depth estimate increases with increasing baseline b. However, as b is increased, some objects may appear in one camera but not in the other.

    3. A point visible from both cameras produces a conjugate pair. Conjugate pairs lie on epipolar lines (parallel to the x-axis for the arrangement in the figure above).
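    A minimal sketch of the depth computation for the idealized geometry above; the function name, units, and the handling of zero disparity are illustrative assumptions.

        import numpy as np

        def depth_from_disparity(disparity, f_pixels, baseline_m):
            """Convert a disparity map (pixels) to depth (meters): z = f * b / d."""
            disparity = np.asarray(disparity, dtype=float)
            depth = np.full_like(disparity, np.inf)   # zero disparity = point at infinity
            valid = disparity > 0
            depth[valid] = f_pixels * baseline_m / disparity[valid]
            return depth

    Note that for a fixed disparity error, the resulting depth error grows roughly as z^2 / (f b), which is why closer objects are measured more accurately and why a larger baseline b helps.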


    Stereo Vision: the general case

    The same point P is measured differently in the left and right camera coordinate systems. The equation relating the two measurements appeared in the original slide; in it,

    R is a 3 x 3 rotation matrix
    r0 is an offset (translation) vector
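    A standard reconstruction of that relation, consistent with the definitions above and the calibration count below (my notation):

        r_r = R \, r_l + r_0

    where r_l = (x_l, y_l, z_l)^T and r_r = (x_r, y_r, z_r)^T are the coordinates of P in the left and right camera frames.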

    This equation has two uses:

    We can find rr if we know rl, R, and r0. Note: for perfectly aligned cameras, R = I (the identity matrix).

    We can calibrate the system and find the entries of R (r11, ..., r33) and r0, given corresponding values of xl, yl, zl, xr, yr, and zr.

    We have 12 unknowns and require 12 equations: we require 4 conjugate points for a complete calibration.

    Note: additionally, there is optical distortion of the image.

    [Figure: the point P, with vector rl = (xl, yl, zl) in the left camera coordinate system and vector rr = (xr, yr, zr) in the right camera coordinate system.]


    Stereo Vision: the general case

    Suppose the calibration is complete

    We know the pixel coordinates of P on the image plane of each camera, (x_l', y_l') and (x_r', y_r'). Given the focal length f of the cameras, we have

        \frac{x_l'}{f} = \frac{x_l}{z_l} \quad \text{and} \quad \frac{y_l'}{f} = \frac{y_l}{z_l}

    (and likewise for the right camera). Substituting these projections into r_r = R r_l + r_0 gives

        \frac{x_r' z_r}{f} = r_{11} \frac{x_l' z_l}{f} + r_{12} \frac{y_l' z_l}{f} + r_{13} z_l + r_{01}

        \frac{y_r' z_r}{f} = r_{21} \frac{x_l' z_l}{f} + r_{22} \frac{y_l' z_l}{f} + r_{23} z_l + r_{02}

        z_r = r_{31} \frac{x_l' z_l}{f} + r_{32} \frac{y_l' z_l}{f} + r_{33} z_l + r_{03}

    Now we want to recover z_l and z_r. The same process can be used to identify the values for x and y.
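    A minimal sketch of how z_l and z_r could be recovered from the three equations above once R, r0, f, and the pixel coordinates are known; treating them as a least-squares problem in the two unknowns is my illustration, not part of the slide.

        import numpy as np

        def recover_depths(pl, pr, R, r0, f):
            """Recover z_l and z_r for one conjugate pair.

            pl = (x_l', y_l') and pr = (x_r', y_r') are the pixel coordinates of P
            in the left and right images; R (3x3) and r0 (3,) come from calibration.
            """
            xl, yl = pl
            xr, yr = pr
            # Coefficient of z_l in each equation: row i of R dotted with (x_l'/f, y_l'/f, 1).
            coef_l = R @ np.array([xl / f, yl / f, 1.0])
            # Coefficient of z_r in each equation: (x_r'/f, y_r'/f, 1).
            coef_r = np.array([xr / f, yr / f, 1.0])
            # Each equation reads coef_l * z_l - coef_r * z_r = -r0 (three equations, two unknowns).
            A = np.column_stack([coef_l, -coef_r])
            b = -np.asarray(r0, dtype=float)
            (z_l, z_r), *_ = np.linalg.lstsq(A, b, rcond=None)
            return z_l, z_r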


    Stereo Vision

    Calculation of Depth

    The key problem in stereo is now: how do we solve the correspondence problem?

    Gray-level matching: match gray-level waveforms on corresponding epipolar lines.

    brightness = image irradiance I(x,y)

    Zero crossing of the Laplacian of Gaussian is a widely used approach for identifying features in the left and right images.
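    A minimal sketch of gray-level matching along an epipolar line, assuming rectified images so that conjugate points share the same row; the window size, search range, and the SSD criterion are illustrative choices, not taken from the slide.

        import numpy as np

        def match_along_epipolar_line(left, right, row, col, half_win=5, max_disp=64):
            """Find the disparity at (row, col) by SSD block matching on one image row."""
            ref = left[row - half_win:row + half_win + 1,
                       col - half_win:col + half_win + 1].astype(float)
            best_disp, best_ssd = 0, np.inf
            for d in range(max_disp + 1):
                c = col - d                       # candidate column in the right image
                if c - half_win < 0:
                    break
                cand = right[row - half_win:row + half_win + 1,
                             c - half_win:c + half_win + 1].astype(float)
                ssd = np.sum((ref - cand) ** 2)   # sum of squared gray-level differences
                if ssd < best_ssd:
                    best_ssd, best_disp = ssd, d
            return best_disp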


    Zero Crossing of Laplacian of Gaussian

    Identification of features that are stable and match well.

    Laplacian of the intensity image; convolution with a kernel P (the kernel and formulas were shown in the original figure).

    [Figure: step/edge detection in a noisy image; filtering through Gaussian smoothing.]
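    A minimal sketch of the Laplacian-of-Gaussian zero-crossing idea; the Gaussian width, the use of scipy.ndimage, and the 4-neighbour Laplacian kernel below are assumptions, and the slide's kernel P may differ.

        import numpy as np
        from scipy import ndimage

        # A common discrete Laplacian kernel (assumed; the slide's P may differ).
        P = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

        def log_zero_crossings(image, sigma=2.0):
            """Return a boolean map of zero crossings of the Laplacian of Gaussian."""
            smoothed = ndimage.gaussian_filter(image.astype(float), sigma)  # suppress noise
            lap = ndimage.convolve(smoothed, P)                             # Laplacian of intensity
            sign = lap > 0
            zc = np.zeros_like(sign)
            # A zero crossing occurs where the sign changes between horizontal
            # or vertical neighbours.
            zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
            zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
            return zc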


    Stereo Vision Example

    Extracting depth information from a stereo image

    a1 and a2: left and right image

    b1 and b2: vertical-edge-filtered left and right image; filter = [1 2 4 -2 -10 -2 4 2 1] (see the sketch after this list)

    c: confidence image: bright = high confidence (good texture)

    d: depth image: bright = close; dark = far
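    A minimal sketch of the vertical-edge filtering step using the 1-D kernel listed above; applying it along each image row and using scipy.ndimage.convolve1d are my assumptions.

        import numpy as np
        from scipy import ndimage

        # 1-D kernel from the slide; it responds to vertical intensity edges
        # when applied along image rows.
        edge_kernel = np.array([1, 2, 4, -2, -10, -2, 4, 2, 1], dtype=float)

        def vertical_edge_filter(image):
            """Convolve each row of the image with the edge kernel."""
            return ndimage.convolve1d(image.astype(float), edge_kernel, axis=1)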


    SVM Stereo Head Mounted on an All-terrain Robot

    Stereo camera: Videre Design (www.videredesign.com)

    Robot: Shrimp, EPFL

    Applications of stereo vision:
    Traversability calculation based on stereo images for outdoor navigation
    Motion tracking


    Color Tracking Sensors

    Color represents an environmental characteristic that is orthogonal to range, and it represents both a natural cue and an artificial cue that can provide new information to a mobile robot.

    Advantages:

    Detection of color is straightforward.

    It can be combined (sensor fusion) with existing cues, such as range findings, to achieve significant information gains.


    Color Tracking Sensors

    Motion estimation of ball and robot for soccer playing using color tracking


    Color Tracking Sensors

    CMUcam robotic vision sensor

    http://www.cs.cmu.edu/~cmucam/gallery.html

    CMOS imaging sensor and a high-speed microprocessor in the 50+ MHz range.

    An external processor configures the sensor's streaming data mode, for example specifying a tracking mode for a bounded YUV value set.

    The YUV model defines a color space in terms of one luminance (Y) and two chrominance (U and V) components.

    The vision sensor processes the data in real time and outputs high-level information to the external consumer.
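    A minimal sketch of the kind of bounded-YUV color tracking described above; the frame format, bound representation, and centroid/bounding-box output are illustrative assumptions, not the CMUcam's actual firmware or protocol.

        import numpy as np

        def track_color(yuv_frame, y_bounds, u_bounds, v_bounds):
            """Track pixels whose Y, U, V values fall inside the given bounds.

            yuv_frame: H x W x 3 array of Y, U, V values.
            Returns (centroid, bounding box) of the matching pixels, or None.
            """
            y, u, v = yuv_frame[..., 0], yuv_frame[..., 1], yuv_frame[..., 2]
            mask = ((y_bounds[0] <= y) & (y <= y_bounds[1]) &
                    (u_bounds[0] <= u) & (u <= u_bounds[1]) &
                    (v_bounds[0] <= v) & (v <= v_bounds[1]))
            rows, cols = np.nonzero(mask)
            if rows.size == 0:
                return None
            centroid = (rows.mean(), cols.mean())
            bbox = (rows.min(), cols.min(), rows.max(), cols.max())
            return centroid, bbox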


    Homework #4

    Has been posted on Stevens Pipeline

    Prepare your project proposal and present your proposal on Oct. 15.

    The mid-term exam grade will be based on your presentation of the proposal.


    What should you include in your proposal?

    Introduction: What kind of problem do you want to solve? Why is this problem important for mobile robots? What have other people done for this problem (list several reference papers)?

    Approach: How are you going to solve this problem? You can either pick one approach that is available from papers and improve it at some level, or propose some new ideas. (Extra credit will be given to new ideas.) At this level, you don't have to specify in detail how to tackle the problem; only indicate which method you will focus on improving, or what kind of draft idea you may want to pursue.