3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4

Post on 06-Feb-2016


Transcript

3D Sensing and Reconstruction
Readings: Ch 12: 12.5-6; Ch 13: 13.1-3, 13.9.4

• Perspective Geometry
• Camera Model
• Stereo Triangulation
• 3D Reconstruction by Space Carving

3D Shape from X means getting 3D coordinates from different methods:

• shading, silhouette, texture (mainly research)
• stereo, light striping, motion (used in practice)

Perspective Imaging Model: 1D

[Figure: a camera lens with center of projection O; xf is the axis of the real image plane behind the lens, and xi is the axis of the front image plane (which we use) at focal length f along zc; a 3D object point B projects through O to a real image point and to the image of B in the front image plane.]

xi / f = xc / zc

Perspective in 2D (Simplified)

[Figure: camera at the origin with axes Xc, Yc, Zc (Zc along the optical axis); 3D object point P = (xc, yc, zc) = (xw, yw, zw); its image P′ = (xi, yi, f) lies on the front image plane at focal length f, on the ray from the camera through P.]

xi / f = xc / zc        yi / f = yc / zc

so

xi = (f / zc) xc        yi = (f / zc) yc

Here camera coordinates equal world coordinates (zw = zc).
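The projection equations above can be sketched in a few lines (an illustrative sketch; the function name `project` is chosen here, not from the slides):

```python
# Minimal sketch of the simplified 2D perspective equations, assuming
# camera coordinates equal world coordinates as on this slide.
def project(xc, yc, zc, f):
    """Project camera-frame point (xc, yc, zc) to image point (xi, yi)."""
    xi = (f / zc) * xc
    yi = (f / zc) * yc
    return xi, yi

# Example: f = 2 and a point at (4, 6, 8) give (1.0, 1.5).
print(project(4, 6, 8, 2))
```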

3D from Stereo

[Figure: a 3D point projects into both a left image and a right image.]

Disparity: the difference in image location of the same 3D point when projected under perspective to two different cameras:

d = xleft - xright

Depth Perception from Stereo: Simple Model, Parallel Optic Axes

[Figure: cameras L and R with parallel optic axes, focal length f, separated by baseline b; scene point P = (x, z) projects to xl in the left image plane and xr in the right image plane.]

z / f = x / xl
z / f = (x - b) / xr
z / f = y / yl = y / yr

(The y-axis is perpendicular to the page.)

Resultant Depth Calculation

For stereo cameras with parallel optical axes, focal length f, baseline b, and corresponding image points (xl, yl) and (xr, yr) with disparity d:

z = f b / (xl - xr) = f b / d
x = xl z / f  or  b + xr z / f
y = yl z / f  or  yr z / f

This method of determining depth from disparity is called triangulation.
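The triangulation formulas above translate directly to code (a minimal sketch for the parallel-axes model; `stereo_depth` is an illustrative name):

```python
# Depth and 3D position from one correspondence, parallel optical axes.
def stereo_depth(xl, yl, xr, f, b):
    """3D point from image x-coordinates xl, xr, focal length f, baseline b."""
    d = xl - xr          # disparity
    z = f * b / d        # z = f*b / (xl - xr) = f*b / d
    x = xl * z / f       # equivalently b + xr*z/f
    y = yl * z / f       # equivalently yr*z/f (yl = yr for parallel axes)
    return x, y, z

# Example: f = 1, b = 2, xl = 1, xr = 0.5 gives d = 0.5 and z = 4.
```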

Finding Correspondences

• If the correspondence is correct, triangulation works VERY well.
• But correspondence finding is not perfectly solved. (What methods have we studied?)
• For some very specific applications it can be solved for those specific kinds of images, e.g. the windshield of a car.

3 Main Matching Methods

1. Cross correlation using small windows (dense)
2. Symbolic feature matching, usually using segments/corners (sparse)
3. Use the newer interest operators, e.g. SIFT (sparse)
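Method 1 above can be sketched as a window search along a scanline; the normalized cross-correlation score and the function names here are illustrative assumptions, not code from the course:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_along_row(left, right, r, c, half, max_disp):
    """Disparity in [0, max_disp] maximizing NCC for the window at (r, c).

    Assumes the window and all candidate positions fit inside the images.
    """
    win = left[r - half:r + half + 1, c - half:c + half + 1]
    scores = []
    for d in range(max_disp + 1):
        if c - d - half < 0:
            break
        cand = right[r - half:r + half + 1, c - d - half:c - d + half + 1]
        scores.append(ncc(win, cand))
    return int(np.argmax(scores))
```

Correlation gives a dense result because it can be run at every pixel, whereas feature-based methods (2 and 3) only match at the sparse locations where features fire.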

Epipolar Geometry Constraint: 1. Normal Pair of Images

[Figure: two cameras with centers C1 and C2 separated by baseline b; scene point P projects to P1 and P2; the plane through P, C1, and C2 is the epipolar plane.]

The epipolar plane cuts through the image plane(s), forming 2 epipolar lines.

The match for P1 (or P2) in the other image must lie on the same epipolar line.

Epipolar Geometry: General Case

[Figure: converging cameras with centers C1 and C2; point P projects to P1 = (x1, y1) and P2 = (x2, y2); e1 and e2 mark where the line through C1 and C2 pierces the two image planes.]

Constraints

[Figure: points P and Q viewed from centers C1 and C2, with e1 and e2 on the image planes.]

1. Epipolar Constraint: Matching points lie on corresponding epipolar lines.
2. Ordering Constraint: Usually in the same order across the lines.

Structured Light

3D data can also be derived using:

• a single camera
• a light source that can produce stripe(s) on the 3D object

[Figure: a light source projects a light stripe onto the object, viewed by a camera.]

Structured Light: 3D Computation

3D data can also be derived using:

• a single camera
• a light source that can produce stripe(s) on the 3D object

[Figure: camera at the origin (0, 0, 0) with image point (x′, y′, f) on the image plane along the x axis; the light source sits at baseline distance b and projects its plane at angle θ; the illuminated 3D point is (x, y, z).]

                 b
[x, y, z] = ------------- [x′, y′, f]
            f cot θ - x′
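The stripe equation above can be sketched in code. Assumptions flagged: θ (written here as `theta`) is taken to be the angle of the light plane relative to the baseline, and `stripe_point` is an illustrative name:

```python
import math

def stripe_point(xp, yp, f, b, theta):
    """3D point (x, y, z) from image point (x', y'), focal length f,
    baseline b, and light-plane angle theta (assumed interpretation)."""
    k = b / (f / math.tan(theta) - xp)   # b / (f*cot(theta) - x')
    return k * xp, k * yp, k * f
```

With f = 1, b = 1, theta = 45 degrees, and image point (0.5, 0), the factor k is 1 / (1 - 0.5) = 2, giving the 3D point (1, 0, 2).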

Depth from Multiple Light Stripes

What are these objects?

Our (former) System: 4-camera light-striping stereo

[Figure: a projector and four cameras arranged around a rotation table holding the 3D object.]

Camera Model: Recall there are 5 Different Frames of Reference

• Object
• World
• Camera
• Real Image
• Pixel Image

[Figure: world frame W (xw, yw, zw), camera frame C (xc, yc, zc), an object frame A on a pyramid object, the real image frame (xf, yf), and the pixel image frame (xp, yp) of the image.]

The Camera Model

How do we get an image point IP from a world point P?

[ s·IPr ]   [ c11 c12 c13 c14 ] [ Px ]
[ s·IPc ] = [ c21 c22 c23 c24 ] [ Py ]
[ s     ]   [ c31 c32 c33  1  ] [ Pz ]
                                [ 1  ]

image point = camera matrix C × world point

What's in C?
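The slide's projection can be sketched directly: multiply the 3×4 camera matrix C by the homogeneous world point, then divide out the scale s (function name `project_with_C` is illustrative):

```python
import numpy as np

def project_with_C(C, P):
    """Image point (IPr, IPc) from 3x4 camera matrix C and world point P."""
    h = C @ np.array([P[0], P[1], P[2], 1.0])  # (s*IPr, s*IPc, s)
    s = h[2]
    return h[0] / s, h[1] / s
```

Dividing by s is exactly where the nonlinearity of perspective enters; everything before it is a linear map on homogeneous coordinates.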

The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates:

1. CP = T R WP
2. FP = π(f) CP

[ s·FPx ]   [ 1  0   0   0 ] [ CPx ]
[ s·FPy ] = [ 0  1   0   0 ] [ CPy ]
[ s·FPz ]   [ 0  0   1   0 ] [ CPz ]
[ s     ]   [ 0  0  1/f  0 ] [  1  ]

The perspective transformation π(f) maps the 3D point in camera coordinates to the image point.

Why is there not a scale factor here?

Camera Calibration

• In order to work in 3D, we need to know the parameters of the particular camera setup.
• Solving for the camera parameters is called calibration.

[Figure: world frame W (xw, yw, zw) and camera frame C (xc, yc, zc).]

• intrinsic parameters are of the camera device
• extrinsic parameters are where the camera sits in the world

Intrinsic Parameters

• principal point (u0, v0)
• scale factors (dx, dy)
• aspect ratio distortion factor
• focal length f
• lens distortion factor (models radial lens distortion)

[Figure: camera center C, principal point (u0, v0), focal length f.]

Extrinsic Parameters

• translation parameters t = [tx ty tz]
• rotation matrix

        [ r11 r12 r13 0 ]
    R = [ r21 r22 r23 0 ]
        [ r31 r32 r33 0 ]
        [  0   0   0  1 ]

Are there really nine parameters?

Calibration Object

The idea is to snap images at different depths and get a lot of 2D-3D point correspondences.

The Tsai Procedure

• The Tsai procedure was developed by Roger Tsai at IBM Research and is the most widely used.
• Several images are taken of the calibration object, yielding point correspondences at different distances.
• Tsai's algorithm requires n > 5 correspondences

  {((xi, yi, zi), (ui, vi)) | i = 1, …, n}

  between (real) image points and 3D points.
• Lots of details in Chapter 13.
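This is not Tsai's full procedure (which separates intrinsics from extrinsics and models lens distortion; see Chapter 13). As a hedged sketch of the underlying idea, the 11 unknown entries of C (with c34 fixed to 1, as on the earlier slide) can be estimated linearly from n ≥ 6 non-coplanar correspondences by least squares:

```python
import numpy as np

def fit_camera_matrix(pts3d, pts2d):
    """Linear least-squares estimate of the 3x4 camera matrix C (c34 = 1)
    from correspondences ((x, y, z), (u, v)). A simplified sketch, not Tsai."""
    A, rhs = [], []
    for (x, y, z), (u, v) in zip(pts3d, pts2d):
        # u*(c31 x + c32 y + c33 z + 1) = c11 x + c12 y + c13 z + c14
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z]); rhs.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z]); rhs.append(v)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    return np.append(p, 1.0).reshape(3, 4)
```

Each correspondence contributes two linear equations, which is why a handful of views of the calibration object at different depths suffices.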

We use the camera parameters of each camera for general stereo.

[Figure: point P viewed by two cameras with matrices B and C; P projects to P1 = (r1, c1) in image 1 and P2 = (r2, c2) in image 2.]

For a correspondence (r1, c1) in image 1 to (r2, c2) in image 2:

1. Both cameras were calibrated, so both camera matrices are known. From the two camera equations B and C we get 4 linear equations in 3 unknowns:

r1 = (b11 - b31 r1) x + (b12 - b32 r1) y + (b13 - b33 r1) z
c1 = (b21 - b31 c1) x + (b22 - b32 c1) y + (b23 - b33 c1) z
r2 = (c11 - c31 r2) x + (c12 - c32 r2) y + (c13 - c33 r2) z
c2 = (c21 - c31 c2) x + (c22 - c32 c2) y + (c23 - c33 c2) z

A direct solution using only 3 of the equations won't give reliable results.

Solve by computing the closest approach of the two skew rays.

[Figure: a ray from P1 with direction u1 and a ray from Q1 with direction u2; V is the shortest segment connecting them, with midpoint P.]

If the rays intersected perfectly in 3D, the intersection would be P. Instead we solve for the shortest line segment connecting the two rays and let P be its midpoint:

V = (P1 + a1 u1) - (Q1 + a2 u2)

((P1 + a1 u1) - (Q1 + a2 u2)) · u1 = 0
((P1 + a1 u1) - (Q1 + a2 u2)) · u2 = 0
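Expanding the two orthogonality conditions above gives a 2×2 linear system in a1 and a2, which can be sketched as follows (`ray_midpoint` is an illustrative name):

```python
import numpy as np

def ray_midpoint(P1, u1, Q1, u2):
    """Midpoint of the shortest segment between rays P1 + a1*u1 and Q1 + a2*u2."""
    P1, u1, Q1, u2 = map(np.asarray, (P1, u1, Q1, u2))
    # V.u1 = 0 and V.u2 = 0 expand to:
    #   a1*(u1.u1) - a2*(u2.u1) = (Q1 - P1).u1
    #   a1*(u1.u2) - a2*(u2.u2) = (Q1 - P1).u2
    A = np.array([[u1 @ u1, -(u2 @ u1)],
                  [u1 @ u2, -(u2 @ u2)]])
    rhs = np.array([(Q1 - P1) @ u1, (Q1 - P1) @ u2])
    a1, a2 = np.linalg.solve(A, rhs)
    return ((P1 + a1 * u1) + (Q1 + a2 * u2)) / 2
```

If the rays do intersect, the segment has zero length and the midpoint is the intersection itself; otherwise P splits the residual error evenly between the two rays.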


    3D Shape from Xmeans getting 3D coordinates

    from different methods

    bull shadingbull silhouettebull texture

    bull stereo bull light stripingbull motion

    mainly research

    used in practice

    Perspective Imaging Model 1D

    xi

    xf

    f

    This is the axis of the real image plane

    O O is the center of projection

    This is the axis of the frontimage plane which we usezc

    xc

    xi xc

    f zc

    =

    camera lens

    3D objectpoint

    B

    D

    E

    image of pointB in front image

    real imagepoint

    Perspective in 2D(Simplified)

    P=(xcyczc) =(xwywzw)

    3D object point

    xc

    yc

    zw=zc

    yi

    Yc

    Xc

    Zc

    xi

    F f

    cameraPacute=(xiyif)

    xi xc

    f zc

    yi yc

    f zc

    =

    =

    xi = (fzc)xc

    yi = (fzc)ycHere camera coordinatesequal world coordinates

    opticalaxis

    ray

    3D from Stereo

    left image right image

    3D point

    disparity the difference in image location of the same 3Dpoint when projected under perspective to two different cameras

    d = xleft - xright

    Depth Perception from StereoSimple Model Parallel Optic Axes

    f

    f

    L

    R

    camera

    baselinecamera

    b

    P=(xz)

    Z

    X

    image plane

    xl

    xr

    z

    z xf xl

    =

    x-b

    z x-bf xr

    = z y yf yl yr

    = =y-axis is

    perpendicularto the page

    Resultant Depth Calculation

    For stereo cameras with parallel optical axes focal length fbaseline b corresponding image points (xlyl) and (xryr)with disparity d

    z = fb (xl - xr) = fbd

    x = xlzf or b + xrzf

    y = ylzf or yrzf

    This method ofdetermining depthfrom disparity is called triangulation

    Finding Correspondences

    bull If the correspondence is correct triangulation works VERY well

    bull But correspondence finding is not perfectly solved (What methods have we studied)

    bull For some very specific applications it can be solved for those specific kind of images eg windshield of a car

    deg deg

    3 Main Matching Methods

    1 Cross correlation using small windows

    2 Symbolic feature matching usually using segmentscorners

    3 Use the newer interest operators ie SIFT

    dense

    sparse

    sparse

    Epipolar Geometry Constraint1 Normal Pair of Images

    x

    y1

    y2

    z1 z2

    C1 C2b

    P

    P1 P2

    epipolarplane

    The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

    The match for P1 (or P2) in the other image must lie on the same epipolar line

    Epipolar GeometryGeneral Case

    P

    P1

    P2

    y1

    y2

    x1

    x2

    e1

    e2

    C1

    C2

    Constraints

    P

    e1

    e2

    C1

    C2

    1 Epipolar Constraint Matching points lie on corresponding epipolar lines

    2 Ordering Constraint Usually in the same order across the lines

    Q

    Structured Light

    3D data can also be derived using

    bull a single camera

    bull a light source that can produce stripe(s) on the 3D object

    lightsource

    camera

    light stripe

    Structured Light3D Computation

    3D data can also be derived using

    bull a single camera

    bull a light source that can produce stripe(s) on the 3D object

    lightsource

    x axisf

    (xacuteyacutef)

    3D point(x y z)

    b

    b[x y z] = --------------- [xacute yacute f] f cot - xacute

    (000)

    3D image

    Depth from Multiple Light Stripes

    What are these objects

    Our (former) System4-camera light-striping stereo

    projector

    rotationtable

    cameras

    3Dobject

    Camera Model Recall there are 5 Different Frames of Reference

    bull Object

    bull World

    bull Camera

    bull Real Image

    bull Pixel Image

    yc

    xc

    zc

    zwC

    Wyw

    xw

    A

    a

    xf

    yf

    xp

    yp

    zppyramidobject

    image

    The Camera Model

    How do we get an image point IP from a world point P

    c11 c12 c13 c14

    c21 c22 c23 c24

    c31 c32 c33 1

    s IPr

    s IPc

    s

    Px

    Py

    Pz

    1

    =

    imagepoint

    camera matrix C worldpoint

    Whatrsquos in C

    The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

    1 CP = T R WP2 FP = (f) CP

    s FPx

    s FPy

    s FPz

    s

    1 0 0 00 1 0 00 0 1 00 0 1f 0

    CPx

    CPy

    CPz

    1

    =

    perspectivetransformation

    imagepoint

    3D point incamera

    coordinates

    Why is there not a scale factor here

    Camera Calibration

    bull In order work in 3D we need to know the parameters of the particular camera setup

    bull Solving for the camera parameters is called calibration

    yw

    xwzw

    W

    yc

    xc

    zc

    C

    bull intrinsic parameters are of the camera device

    bull extrinsic parameters are where the camera sits in the world

    Intrinsic Parameters

    bull principal point (u0v0)

    bull scale factors (dxdy)

    bull aspect ratio distortion factor

    bull focal length f

    bull lens distortion factor (models radial lens distortion)

    C

    (u0v0)

    f

    Extrinsic Parameters

    bull translation parameters t = [tx ty tz]

    bull rotation matrix

    r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

    R = Are there reallynine parameters

    Calibration Object

    The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

    The Tsai Procedure

    bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

    bull Several images are taken of the calibration object yielding point correspondences at different distances

    bull Tsairsquos algorithm requires n gt 5 correspondences

    (xi yi zi) (ui vi)) | i = 1hellipn

    between (real) image points and 3D points

    bull Lots of details in Chapter 13

    We use the camera parameters of each camera for general

    stereoP

    P1=(r1c1)P2=(r2c2)

    y1

    y2

    x1

    x2

    e1

    e2

    B

    C

    For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

    1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

    r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

    r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

    Direct solution uses 3 equations wonrsquot give reliable results

    Solve by computing the closestapproach of the two skew rays

    V

    If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

    P1

    Q1

    Psolve forshortest

    V = (P1 + a1u1) ndash (Q1 + a2u2)

    (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

    u1

    u2

    • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
    • 3D Shape from X means getting 3D coordinates from different methods
    • Perspective Imaging Model 1D
    • Perspective in 2D (Simplified)
    • 3D from Stereo
    • Depth Perception from Stereo Simple Model Parallel Optic Axes
    • Resultant Depth Calculation
    • Finding Correspondences
    • 3 Main Matching Methods
    • Epipolar Geometry Constraint 1 Normal Pair of Images
    • Epipolar Geometry General Case
    • Constraints
    • Structured Light
    • Structured Light 3D Computation
    • Depth from Multiple Light Stripes
    • Our (former) System 4-camera light-striping stereo
    • Camera Model Recall there are 5 Different Frames of Reference
    • The Camera Model
    • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
    • Camera Calibration
    • Intrinsic Parameters
    • Extrinsic Parameters
    • Calibration Object
    • The Tsai Procedure
    • We use the camera parameters of each camera for general stereo
    • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
    • Solve by computing the closest approach of the two skew rays
    • Slide 28
    • Slide 29
    • Slide 30
    • Slide 31
    • Slide 32
    • Slide 33
    • Slide 34
    • Slide 35
    • Slide 36
    • Slide 37
    • Slide 38
    • Slide 39
    • Slide 40

      Perspective Imaging Model 1D

      xi

      xf

      f

      This is the axis of the real image plane

      O O is the center of projection

      This is the axis of the frontimage plane which we usezc

      xc

      xi xc

      f zc

      =

      camera lens

      3D objectpoint

      B

      D

      E

      image of pointB in front image

      real imagepoint

      Perspective in 2D(Simplified)

      P=(xcyczc) =(xwywzw)

      3D object point

      xc

      yc

      zw=zc

      yi

      Yc

      Xc

      Zc

      xi

      F f

      cameraPacute=(xiyif)

      xi xc

      f zc

      yi yc

      f zc

      =

      =

      xi = (fzc)xc

      yi = (fzc)ycHere camera coordinatesequal world coordinates

      opticalaxis

      ray

      3D from Stereo

      left image right image

      3D point

      disparity the difference in image location of the same 3Dpoint when projected under perspective to two different cameras

      d = xleft - xright

      Depth Perception from StereoSimple Model Parallel Optic Axes

      f

      f

      L

      R

      camera

      baselinecamera

      b

      P=(xz)

      Z

      X

      image plane

      xl

      xr

      z

      z xf xl

      =

      x-b

      z x-bf xr

      = z y yf yl yr

      = =y-axis is

      perpendicularto the page

      Resultant Depth Calculation

      For stereo cameras with parallel optical axes focal length fbaseline b corresponding image points (xlyl) and (xryr)with disparity d

      z = fb (xl - xr) = fbd

      x = xlzf or b + xrzf

      y = ylzf or yrzf

      This method ofdetermining depthfrom disparity is called triangulation

      Finding Correspondences

      bull If the correspondence is correct triangulation works VERY well

      bull But correspondence finding is not perfectly solved (What methods have we studied)

      bull For some very specific applications it can be solved for those specific kind of images eg windshield of a car

      deg deg

      3 Main Matching Methods

      1 Cross correlation using small windows

      2 Symbolic feature matching usually using segmentscorners

      3 Use the newer interest operators ie SIFT

      dense

      sparse

      sparse

      Epipolar Geometry Constraint1 Normal Pair of Images

      x

      y1

      y2

      z1 z2

      C1 C2b

      P

      P1 P2

      epipolarplane

      The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

      The match for P1 (or P2) in the other image must lie on the same epipolar line

      Epipolar GeometryGeneral Case

      P

      P1

      P2

      y1

      y2

      x1

      x2

      e1

      e2

      C1

      C2

      Constraints

      P

      e1

      e2

      C1

      C2

      1 Epipolar Constraint Matching points lie on corresponding epipolar lines

      2 Ordering Constraint Usually in the same order across the lines

      Q

      Structured Light

      3D data can also be derived using

      bull a single camera

      bull a light source that can produce stripe(s) on the 3D object

      lightsource

      camera

      light stripe

      Structured Light3D Computation

      3D data can also be derived using

      bull a single camera

      bull a light source that can produce stripe(s) on the 3D object

      lightsource

      x axisf

      (xacuteyacutef)

      3D point(x y z)

      b

      b[x y z] = --------------- [xacute yacute f] f cot - xacute

      (000)

      3D image

      Depth from Multiple Light Stripes

      What are these objects

      Our (former) System4-camera light-striping stereo

      projector

      rotationtable

      cameras

      3Dobject

      Camera Model Recall there are 5 Different Frames of Reference

      bull Object

      bull World

      bull Camera

      bull Real Image

      bull Pixel Image

      yc

      xc

      zc

      zwC

      Wyw

      xw

      A

      a

      xf

      yf

      xp

      yp

      zppyramidobject

      image

      The Camera Model

      How do we get an image point IP from a world point P

      c11 c12 c13 c14

      c21 c22 c23 c24

      c31 c32 c33 1

      s IPr

      s IPc

      s

      Px

      Py

      Pz

      1

      =

      imagepoint

      camera matrix C worldpoint

      Whatrsquos in C

      The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

      1 CP = T R WP2 FP = (f) CP

      s FPx

      s FPy

      s FPz

      s

      1 0 0 00 1 0 00 0 1 00 0 1f 0

      CPx

      CPy

      CPz

      1

      =

      perspectivetransformation

      imagepoint

      3D point incamera

      coordinates

      Why is there not a scale factor here

      Camera Calibration

      bull In order work in 3D we need to know the parameters of the particular camera setup

      bull Solving for the camera parameters is called calibration

      yw

      xwzw

      W

      yc

      xc

      zc

      C

      bull intrinsic parameters are of the camera device

      bull extrinsic parameters are where the camera sits in the world

      Intrinsic Parameters

      bull principal point (u0v0)

      bull scale factors (dxdy)

      bull aspect ratio distortion factor

      bull focal length f

      bull lens distortion factor (models radial lens distortion)

      C

      (u0v0)

      f

      Extrinsic Parameters

      bull translation parameters t = [tx ty tz]

      bull rotation matrix

      r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

      R = Are there reallynine parameters

      Calibration Object

      The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

      The Tsai Procedure

      bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

      bull Several images are taken of the calibration object yielding point correspondences at different distances

      bull Tsairsquos algorithm requires n gt 5 correspondences

      (xi yi zi) (ui vi)) | i = 1hellipn

      between (real) image points and 3D points

      bull Lots of details in Chapter 13

      We use the camera parameters of each camera for general

      stereoP

      P1=(r1c1)P2=(r2c2)

      y1

      y2

      x1

      x2

      e1

      e2

      B

      C

      For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

      1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

      r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

      r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

      Direct solution uses 3 equations wonrsquot give reliable results

      Solve by computing the closestapproach of the two skew rays

      V

      If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

      P1

      Q1

      Psolve forshortest

      V = (P1 + a1u1) ndash (Q1 + a2u2)

      (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

      u1

      u2

      • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
      • 3D Shape from X means getting 3D coordinates from different methods
      • Perspective Imaging Model 1D
      • Perspective in 2D (Simplified)
      • 3D from Stereo
      • Depth Perception from Stereo Simple Model Parallel Optic Axes
      • Resultant Depth Calculation
      • Finding Correspondences
      • 3 Main Matching Methods
      • Epipolar Geometry Constraint 1 Normal Pair of Images
      • Epipolar Geometry General Case
      • Constraints
      • Structured Light
      • Structured Light 3D Computation
      • Depth from Multiple Light Stripes
      • Our (former) System 4-camera light-striping stereo
      • Camera Model Recall there are 5 Different Frames of Reference
      • The Camera Model
      • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
      • Camera Calibration
      • Intrinsic Parameters
      • Extrinsic Parameters
      • Calibration Object
      • The Tsai Procedure
      • We use the camera parameters of each camera for general stereo
      • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
      • Solve by computing the closest approach of the two skew rays
      • Slide 28
      • Slide 29
      • Slide 30
      • Slide 31
      • Slide 32
      • Slide 33
      • Slide 34
      • Slide 35
      • Slide 36
      • Slide 37
      • Slide 38
      • Slide 39
      • Slide 40

        Perspective in 2D(Simplified)

        P=(xcyczc) =(xwywzw)

        3D object point

        xc

        yc

        zw=zc

        yi

        Yc

        Xc

        Zc

        xi

        F f

        cameraPacute=(xiyif)

        xi xc

        f zc

        yi yc

        f zc

        =

        =

        xi = (fzc)xc

        yi = (fzc)ycHere camera coordinatesequal world coordinates

        opticalaxis

        ray

        3D from Stereo

        left image right image

        3D point

        disparity the difference in image location of the same 3Dpoint when projected under perspective to two different cameras

        d = xleft - xright

        Depth Perception from StereoSimple Model Parallel Optic Axes

        f

        f

        L

        R

        camera

        baselinecamera

        b

        P=(xz)

        Z

        X

        image plane

        xl

        xr

        z

        z xf xl

        =

        x-b

        z x-bf xr

        = z y yf yl yr

        = =y-axis is

        perpendicularto the page

        Resultant Depth Calculation

        For stereo cameras with parallel optical axes focal length fbaseline b corresponding image points (xlyl) and (xryr)with disparity d

        z = fb (xl - xr) = fbd

        x = xlzf or b + xrzf

        y = ylzf or yrzf

        This method ofdetermining depthfrom disparity is called triangulation

        Finding Correspondences

        bull If the correspondence is correct triangulation works VERY well

        bull But correspondence finding is not perfectly solved (What methods have we studied)

        bull For some very specific applications it can be solved for those specific kind of images eg windshield of a car

        deg deg

        3 Main Matching Methods

        1 Cross correlation using small windows

        2 Symbolic feature matching usually using segmentscorners

        3 Use the newer interest operators ie SIFT

        dense

        sparse

        sparse

        Epipolar Geometry Constraint1 Normal Pair of Images

        x

        y1

        y2

        z1 z2

        C1 C2b

        P

        P1 P2

        epipolarplane

        The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

        The match for P1 (or P2) in the other image must lie on the same epipolar line

        Epipolar GeometryGeneral Case

        P

        P1

        P2

        y1

        y2

        x1

        x2

        e1

        e2

        C1

        C2

        Constraints

        P

        e1

        e2

        C1

        C2

        1 Epipolar Constraint Matching points lie on corresponding epipolar lines

        2 Ordering Constraint Usually in the same order across the lines

        Q

        Structured Light

        3D data can also be derived using

        bull a single camera

        bull a light source that can produce stripe(s) on the 3D object

        lightsource

        camera

        light stripe

        Structured Light3D Computation

        3D data can also be derived using

        bull a single camera

        bull a light source that can produce stripe(s) on the 3D object

        lightsource

        x axisf

        (xacuteyacutef)

        3D point(x y z)

        b

        b[x y z] = --------------- [xacute yacute f] f cot - xacute

        (000)

        3D image

        Depth from Multiple Light Stripes

        What are these objects

        Our (former) System4-camera light-striping stereo

        projector

        rotationtable

        cameras

        3Dobject

        Camera Model Recall there are 5 Different Frames of Reference

        bull Object

        bull World

        bull Camera

        bull Real Image

        bull Pixel Image

        yc

        xc

        zc

        zwC

        Wyw

        xw

        A

        a

        xf

        yf

        xp

        yp

        zppyramidobject

        image

        The Camera Model

        How do we get an image point IP from a world point P

        c11 c12 c13 c14

        c21 c22 c23 c24

        c31 c32 c33 1

        s IPr

        s IPc

        s

        Px

        Py

        Pz

        1

        =

        imagepoint

        camera matrix C worldpoint

        Whatrsquos in C

        The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

        1 CP = T R WP2 FP = (f) CP

        s FPx

        s FPy

        s FPz

        s

        1 0 0 00 1 0 00 0 1 00 0 1f 0

        CPx

        CPy

        CPz

        1

        =

        perspectivetransformation

        imagepoint

        3D point incamera

        coordinates

        Why is there not a scale factor here

        Camera Calibration

        bull In order work in 3D we need to know the parameters of the particular camera setup

        bull Solving for the camera parameters is called calibration

        yw

        xwzw

        W

        yc

        xc

        zc

        C

        bull intrinsic parameters are of the camera device

        bull extrinsic parameters are where the camera sits in the world

        Intrinsic Parameters

        bull principal point (u0v0)

        bull scale factors (dxdy)

        bull aspect ratio distortion factor

        bull focal length f

        bull lens distortion factor (models radial lens distortion)

        C

        (u0v0)

        f

        Extrinsic Parameters

        bull translation parameters t = [tx ty tz]

        bull rotation matrix

        r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

        R = Are there reallynine parameters

        Calibration Object

        The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

        The Tsai Procedure

        bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

        bull Several images are taken of the calibration object yielding point correspondences at different distances

        bull Tsairsquos algorithm requires n gt 5 correspondences

        (xi yi zi) (ui vi)) | i = 1hellipn

        between (real) image points and 3D points

        bull Lots of details in Chapter 13

We use the camera parameters of each camera for general stereo.

(figure: 3D point P projects to P1 = (r1, c1) in image 1 and P2 = (r2, c2) in image 2; the two camera matrices are B and C, with epipoles e1 and e2)

For a correspondence (r1, c1) in image 1 to (r2, c2) in image 2:

1. Both cameras were calibrated, so both camera matrices B and C are known. From the two camera equations we get 4 linear equations in 3 unknowns:

       r1 - b14 = (b11 - b31 r1) x + (b12 - b32 r1) y + (b13 - b33 r1) z
       c1 - b24 = (b21 - b31 c1) x + (b22 - b32 c1) y + (b23 - b33 c1) z

       r2 - c14 = (c11 - c31 r2) x + (c12 - c32 r2) y + (c13 - c33 r2) z
       c2 - c24 = (c21 - c31 c2) x + (c22 - c32 c2) y + (c23 - c33 c2) z

A direct solution using only 3 of the equations won't give reliable results.
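A sketch of solving all 4 equations in the 3 unknowns by least squares, via the normal equations and Cramer's rule (the rearrangement follows the equations above; the toy camera matrices in the example are invented for illustration, not from the text):

```python
def _det3(M):
    """Determinant of a 3x3 matrix."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def triangulate(B, C, p1, p2):
    """Least-squares 3D point from one correspondence p1 = (r1, c1),
    p2 = (r2, c2) and two 3x4 camera matrices B and C."""
    A, b = [], []
    for M, pt in ((B, p1), (C, p2)):
        for k, coord in ((0, pt[0]), (1, pt[1])):
            # (m_k1 - m_31*coord) x + ... = coord*m_34 - m_k4
            A.append([M[k][j] - coord * M[2][j] for j in range(3)])
            b.append(coord * M[2][3] - M[k][3])
    # normal equations (A^T A) X = A^T b, then Cramer's rule
    N = [[sum(A[r][i] * A[r][j] for r in range(4)) for j in range(3)]
         for i in range(3)]
    y = [sum(A[r][i] * b[r] for r in range(4)) for i in range(3)]
    det = _det3(N)
    X = []
    for i in range(3):
        Ni = [row[:] for row in N]
        for r in range(3):
            Ni[r][i] = y[r]
        X.append(_det3(Ni) / det)
    return X

# toy cameras: B projects r = x/z, c = y/z; C is the same camera shifted in x
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
C = [[1, 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0]]
X = triangulate(B, C, (0.2, 0.3), (0.1, 0.3))  # -> close to [2, 3, 10]
```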

Solve by computing the closest approach of the two skew rays.

If the rays intersected perfectly in 3D, the intersection would be P. Instead, we solve for the shortest line segment V connecting the two rays and let P be its midpoint.

(figure: ray from P1 with direction u1 and ray from Q1 with direction u2, joined by the shortest segment V; P is its midpoint)

    V = (P1 + a1 u1) - (Q1 + a2 u2)

V must be perpendicular to both rays:

    ((P1 + a1 u1) - (Q1 + a2 u2)) · u1 = 0
    ((P1 + a1 u1) - (Q1 + a2 u2)) · u2 = 0
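The two perpendicularity conditions form a 2x2 linear system in a1 and a2, which can be solved in closed form. A sketch (names follow the slide; the example rays at the end are invented):

```python
def closest_midpoint(P1, u1, Q1, u2):
    """Midpoint of the shortest segment between ray P1 + a1*u1
    and ray Q1 + a2*u2 (the slide's estimate of the 3D point P)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = [p - q for p, q in zip(P1, Q1)]
    # perpendicularity to u1 and u2 gives:
    #   a*a1 - b*a2 = -dot(u1, w)
    #   b*a1 - c*a2 = -dot(u2, w)
    a, b, c = dot(u1, u1), dot(u1, u2), dot(u2, u2)
    d, e = dot(u1, w), dot(u2, w)
    den = a * c - b * b          # zero only if the rays are parallel
    a1 = (b * e - c * d) / den
    a2 = (a * e - b * d) / den
    p = [pi + a1 * ui for pi, ui in zip(P1, u1)]
    q = [qi + a2 * ui for qi, ui in zip(Q1, u2)]
    return [(pi + qi) / 2 for pi, qi in zip(p, q)]

# two skew rays along x and y, offset by 1 in z -> midpoint halfway between
P = closest_midpoint((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0))
```

For rays that actually intersect, the midpoint coincides with the intersection, so the same routine covers both cases.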

        • 3D Sensing and Reconstruction: Readings Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4
        • 3D Shape from X means getting 3D coordinates from different methods
        • Perspective Imaging Model 1D
        • Perspective in 2D (Simplified)
        • 3D from Stereo
        • Depth Perception from Stereo Simple Model Parallel Optic Axes
        • Resultant Depth Calculation
        • Finding Correspondences
        • 3 Main Matching Methods
        • Epipolar Geometry Constraint 1 Normal Pair of Images
        • Epipolar Geometry General Case
        • Constraints
        • Structured Light
        • Structured Light 3D Computation
        • Depth from Multiple Light Stripes
        • Our (former) System 4-camera light-striping stereo
        • Camera Model Recall there are 5 Different Frames of Reference
        • The Camera Model
        • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
        • Camera Calibration
        • Intrinsic Parameters
        • Extrinsic Parameters
        • Calibration Object
        • The Tsai Procedure
        • We use the camera parameters of each camera for general stereo
        • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
        • Solve by computing the closest approach of the two skew rays

          3D from Stereo

          left image right image

          3D point

          disparity the difference in image location of the same 3Dpoint when projected under perspective to two different cameras

          d = xleft - xright

          Depth Perception from StereoSimple Model Parallel Optic Axes

          f

          f

          L

          R

          camera

          baselinecamera

          b

          P=(xz)

          Z

          X

          image plane

          xl

          xr

          z

          z xf xl

          =

          x-b

          z x-bf xr

          = z y yf yl yr

          = =y-axis is

          perpendicularto the page

          Resultant Depth Calculation

          For stereo cameras with parallel optical axes focal length fbaseline b corresponding image points (xlyl) and (xryr)with disparity d

          z = fb (xl - xr) = fbd

          x = xlzf or b + xrzf

          y = ylzf or yrzf

          This method ofdetermining depthfrom disparity is called triangulation

          Finding Correspondences

          bull If the correspondence is correct triangulation works VERY well

          bull But correspondence finding is not perfectly solved (What methods have we studied)

          bull For some very specific applications it can be solved for those specific kind of images eg windshield of a car

          deg deg

          3 Main Matching Methods

          1 Cross correlation using small windows

          2 Symbolic feature matching usually using segmentscorners

          3 Use the newer interest operators ie SIFT

          dense

          sparse

          sparse

          Epipolar Geometry Constraint1 Normal Pair of Images

          x

          y1

          y2

          z1 z2

          C1 C2b

          P

          P1 P2

          epipolarplane

          The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

          The match for P1 (or P2) in the other image must lie on the same epipolar line

          Epipolar GeometryGeneral Case

          P

          P1

          P2

          y1

          y2

          x1

          x2

          e1

          e2

          C1

          C2

          Constraints

          P

          e1

          e2

          C1

          C2

          1 Epipolar Constraint Matching points lie on corresponding epipolar lines

          2 Ordering Constraint Usually in the same order across the lines

          Q

          Structured Light

          3D data can also be derived using

          bull a single camera

          bull a light source that can produce stripe(s) on the 3D object

          lightsource

          camera

          light stripe

          Structured Light3D Computation

          3D data can also be derived using

          bull a single camera

          bull a light source that can produce stripe(s) on the 3D object

          lightsource

          x axisf

          (xacuteyacutef)

          3D point(x y z)

          b

          b[x y z] = --------------- [xacute yacute f] f cot - xacute

          (000)

          3D image

          Depth from Multiple Light Stripes

          What are these objects

          Our (former) System4-camera light-striping stereo

          projector

          rotationtable

          cameras

          3Dobject

          Camera Model Recall there are 5 Different Frames of Reference

          bull Object

          bull World

          bull Camera

          bull Real Image

          bull Pixel Image

          yc

          xc

          zc

          zwC

          Wyw

          xw

          A

          a

          xf

          yf

          xp

          yp

          zppyramidobject

          image

          The Camera Model

          How do we get an image point IP from a world point P

          c11 c12 c13 c14

          c21 c22 c23 c24

          c31 c32 c33 1

          s IPr

          s IPc

          s

          Px

          Py

          Pz

          1

          =

          imagepoint

          camera matrix C worldpoint

          Whatrsquos in C

          The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

          1 CP = T R WP2 FP = (f) CP

          s FPx

          s FPy

          s FPz

          s

          1 0 0 00 1 0 00 0 1 00 0 1f 0

          CPx

          CPy

          CPz

          1

          =

          perspectivetransformation

          imagepoint

          3D point incamera

          coordinates

          Why is there not a scale factor here

          Camera Calibration

          bull In order work in 3D we need to know the parameters of the particular camera setup

          bull Solving for the camera parameters is called calibration

          yw

          xwzw

          W

          yc

          xc

          zc

          C

          bull intrinsic parameters are of the camera device

          bull extrinsic parameters are where the camera sits in the world

          Intrinsic Parameters

          bull principal point (u0v0)

          bull scale factors (dxdy)

          bull aspect ratio distortion factor

          bull focal length f

          bull lens distortion factor (models radial lens distortion)

          C

          (u0v0)

          f

          Extrinsic Parameters

          bull translation parameters t = [tx ty tz]

          bull rotation matrix

          r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

          R = Are there reallynine parameters

          Calibration Object

          The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

          The Tsai Procedure

          bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

          bull Several images are taken of the calibration object yielding point correspondences at different distances

          bull Tsairsquos algorithm requires n gt 5 correspondences

          (xi yi zi) (ui vi)) | i = 1hellipn

          between (real) image points and 3D points

          bull Lots of details in Chapter 13

          We use the camera parameters of each camera for general

          stereoP

          P1=(r1c1)P2=(r2c2)

          y1

          y2

          x1

          x2

          e1

          e2

          B

          C

          For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

          1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

          r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

          r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

          Direct solution uses 3 equations wonrsquot give reliable results

          Solve by computing the closestapproach of the two skew rays

          V

          If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

          P1

          Q1

          Psolve forshortest

          V = (P1 + a1u1) ndash (Q1 + a2u2)

          (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

          u1

          u2

          • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
          • 3D Shape from X means getting 3D coordinates from different methods
          • Perspective Imaging Model 1D
          • Perspective in 2D (Simplified)
          • 3D from Stereo
          • Depth Perception from Stereo Simple Model Parallel Optic Axes
          • Resultant Depth Calculation
          • Finding Correspondences
          • 3 Main Matching Methods
          • Epipolar Geometry Constraint 1 Normal Pair of Images
          • Epipolar Geometry General Case
          • Constraints
          • Structured Light
          • Structured Light 3D Computation
          • Depth from Multiple Light Stripes
          • Our (former) System 4-camera light-striping stereo
          • Camera Model Recall there are 5 Different Frames of Reference
          • The Camera Model
          • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
          • Camera Calibration
          • Intrinsic Parameters
          • Extrinsic Parameters
          • Calibration Object
          • The Tsai Procedure
          • We use the camera parameters of each camera for general stereo
          • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
          • Solve by computing the closest approach of the two skew rays
          • Slide 28
          • Slide 29
          • Slide 30
          • Slide 31
          • Slide 32
          • Slide 33
          • Slide 34
          • Slide 35
          • Slide 36
          • Slide 37
          • Slide 38
          • Slide 39
          • Slide 40

            Depth Perception from StereoSimple Model Parallel Optic Axes

            f

            f

            L

            R

            camera

            baselinecamera

            b

            P=(xz)

            Z

            X

            image plane

            xl

            xr

            z

            z xf xl

            =

            x-b

            z x-bf xr

            = z y yf yl yr

            = =y-axis is

            perpendicularto the page

            Resultant Depth Calculation

            For stereo cameras with parallel optical axes focal length fbaseline b corresponding image points (xlyl) and (xryr)with disparity d

            z = fb (xl - xr) = fbd

            x = xlzf or b + xrzf

            y = ylzf or yrzf

            This method ofdetermining depthfrom disparity is called triangulation

            Finding Correspondences

            bull If the correspondence is correct triangulation works VERY well

            bull But correspondence finding is not perfectly solved (What methods have we studied)

            bull For some very specific applications it can be solved for those specific kind of images eg windshield of a car

            deg deg

            3 Main Matching Methods

            1 Cross correlation using small windows

            2 Symbolic feature matching usually using segmentscorners

            3 Use the newer interest operators ie SIFT

            dense

            sparse

            sparse

            Epipolar Geometry Constraint1 Normal Pair of Images

            x

            y1

            y2

            z1 z2

            C1 C2b

            P

            P1 P2

            epipolarplane

            The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

            The match for P1 (or P2) in the other image must lie on the same epipolar line

            Epipolar GeometryGeneral Case

            P

            P1

            P2

            y1

            y2

            x1

            x2

            e1

            e2

            C1

            C2

            Constraints

            P

            e1

            e2

            C1

            C2

            1 Epipolar Constraint Matching points lie on corresponding epipolar lines

            2 Ordering Constraint Usually in the same order across the lines

            Q

            Structured Light

            3D data can also be derived using

            bull a single camera

            bull a light source that can produce stripe(s) on the 3D object

            lightsource

            camera

            light stripe

            Structured Light3D Computation

            3D data can also be derived using

            bull a single camera

            bull a light source that can produce stripe(s) on the 3D object

            lightsource

            x axisf

            (xacuteyacutef)

            3D point(x y z)

            b

            b[x y z] = --------------- [xacute yacute f] f cot - xacute

            (000)

            3D image

            Depth from Multiple Light Stripes

            What are these objects

            Our (former) System4-camera light-striping stereo

            projector

            rotationtable

            cameras

            3Dobject

            Camera Model Recall there are 5 Different Frames of Reference

            bull Object

            bull World

            bull Camera

            bull Real Image

            bull Pixel Image

            yc

            xc

            zc

            zwC

            Wyw

            xw

            A

            a

            xf

            yf

            xp

            yp

            zppyramidobject

            image

            The Camera Model

            How do we get an image point IP from a world point P

            c11 c12 c13 c14

            c21 c22 c23 c24

            c31 c32 c33 1

            s IPr

            s IPc

            s

            Px

            Py

            Pz

            1

            =

            imagepoint

            camera matrix C worldpoint

            Whatrsquos in C

            The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

            1 CP = T R WP2 FP = (f) CP

            s FPx

            s FPy

            s FPz

            s

            1 0 0 00 1 0 00 0 1 00 0 1f 0

            CPx

            CPy

            CPz

            1

            =

            perspectivetransformation

            imagepoint

            3D point incamera

            coordinates

            Why is there not a scale factor here

            Camera Calibration

            bull In order work in 3D we need to know the parameters of the particular camera setup

            bull Solving for the camera parameters is called calibration

            yw

            xwzw

            W

            yc

            xc

            zc

            C

            bull intrinsic parameters are of the camera device

            bull extrinsic parameters are where the camera sits in the world

            Intrinsic Parameters

            bull principal point (u0v0)

            bull scale factors (dxdy)

            bull aspect ratio distortion factor

            bull focal length f

            bull lens distortion factor (models radial lens distortion)

            C

            (u0v0)

            f

            Extrinsic Parameters

            bull translation parameters t = [tx ty tz]

            bull rotation matrix

            r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

            R = Are there reallynine parameters

            Calibration Object

            The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

            The Tsai Procedure

            bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

            bull Several images are taken of the calibration object yielding point correspondences at different distances

            bull Tsairsquos algorithm requires n gt 5 correspondences

            (xi yi zi) (ui vi)) | i = 1hellipn

            between (real) image points and 3D points

            bull Lots of details in Chapter 13

            We use the camera parameters of each camera for general

            stereoP

            P1=(r1c1)P2=(r2c2)

            y1

            y2

            x1

            x2

            e1

            e2

            B

            C

            For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

            1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

            r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

            r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

            Direct solution uses 3 equations wonrsquot give reliable results

            Solve by computing the closestapproach of the two skew rays

            V

            If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

            P1

            Q1

            Psolve forshortest

            V = (P1 + a1u1) ndash (Q1 + a2u2)

            (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

            u1

            u2

            • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
            • 3D Shape from X means getting 3D coordinates from different methods
            • Perspective Imaging Model 1D
            • Perspective in 2D (Simplified)
            • 3D from Stereo
            • Depth Perception from Stereo Simple Model Parallel Optic Axes
            • Resultant Depth Calculation
            • Finding Correspondences
            • 3 Main Matching Methods
            • Epipolar Geometry Constraint 1 Normal Pair of Images
            • Epipolar Geometry General Case
            • Constraints
            • Structured Light
            • Structured Light 3D Computation
            • Depth from Multiple Light Stripes
            • Our (former) System 4-camera light-striping stereo
            • Camera Model Recall there are 5 Different Frames of Reference
            • The Camera Model
            • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
            • Camera Calibration
            • Intrinsic Parameters
            • Extrinsic Parameters
            • Calibration Object
            • The Tsai Procedure
            • We use the camera parameters of each camera for general stereo
            • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
            • Solve by computing the closest approach of the two skew rays
            • Slide 28
            • Slide 29
            • Slide 30
            • Slide 31
            • Slide 32
            • Slide 33
            • Slide 34
            • Slide 35
            • Slide 36
            • Slide 37
            • Slide 38
            • Slide 39
            • Slide 40

              Resultant Depth Calculation

              For stereo cameras with parallel optical axes focal length fbaseline b corresponding image points (xlyl) and (xryr)with disparity d

              z = fb (xl - xr) = fbd

              x = xlzf or b + xrzf

              y = ylzf or yrzf

              This method ofdetermining depthfrom disparity is called triangulation

              Finding Correspondences

              bull If the correspondence is correct triangulation works VERY well

              bull But correspondence finding is not perfectly solved (What methods have we studied)

              bull For some very specific applications it can be solved for those specific kind of images eg windshield of a car

              deg deg

              3 Main Matching Methods

              1 Cross correlation using small windows

              2 Symbolic feature matching usually using segmentscorners

              3 Use the newer interest operators ie SIFT

              dense

              sparse

              sparse

              Epipolar Geometry Constraint1 Normal Pair of Images

              x

              y1

              y2

              z1 z2

              C1 C2b

              P

              P1 P2

              epipolarplane

              The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

              The match for P1 (or P2) in the other image must lie on the same epipolar line

              Epipolar GeometryGeneral Case

              P

              P1

              P2

              y1

              y2

              x1

              x2

              e1

              e2

              C1

              C2

              Constraints

              P

              e1

              e2

              C1

              C2

              1 Epipolar Constraint Matching points lie on corresponding epipolar lines

              2 Ordering Constraint Usually in the same order across the lines

              Q

              Structured Light

              3D data can also be derived using

              bull a single camera

              bull a light source that can produce stripe(s) on the 3D object

              lightsource

              camera

              light stripe

              Structured Light3D Computation

              3D data can also be derived using

              bull a single camera

              bull a light source that can produce stripe(s) on the 3D object

              lightsource

              x axisf

              (xacuteyacutef)

              3D point(x y z)

              b

              b[x y z] = --------------- [xacute yacute f] f cot - xacute

              (000)

              3D image

              Depth from Multiple Light Stripes

              What are these objects

              Our (former) System4-camera light-striping stereo

              projector

              rotationtable

              cameras

              3Dobject

              Camera Model Recall there are 5 Different Frames of Reference

              bull Object

              bull World

              bull Camera

              bull Real Image

              bull Pixel Image

              yc

              xc

              zc

              zwC

              Wyw

              xw

              A

              a

              xf

              yf

              xp

              yp

              zppyramidobject

              image

              The Camera Model

              How do we get an image point IP from a world point P

              c11 c12 c13 c14

              c21 c22 c23 c24

              c31 c32 c33 1

              s IPr

              s IPc

              s

              Px

              Py

              Pz

              1

              =

              imagepoint

              camera matrix C worldpoint

              Whatrsquos in C

              The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

              1 CP = T R WP2 FP = (f) CP

              s FPx

              s FPy

              s FPz

              s

              1 0 0 00 1 0 00 0 1 00 0 1f 0

              CPx

              CPy

              CPz

              1

              =

              perspectivetransformation

              imagepoint

              3D point incamera

              coordinates

              Why is there not a scale factor here

              Camera Calibration

              bull In order work in 3D we need to know the parameters of the particular camera setup

              bull Solving for the camera parameters is called calibration

              yw

              xwzw

              W

              yc

              xc

              zc

              C

              bull intrinsic parameters are of the camera device

              bull extrinsic parameters are where the camera sits in the world

              Intrinsic Parameters

              bull principal point (u0v0)

              bull scale factors (dxdy)

              bull aspect ratio distortion factor

              bull focal length f

              bull lens distortion factor (models radial lens distortion)

              C

              (u0v0)

              f

              Extrinsic Parameters

              bull translation parameters t = [tx ty tz]

              bull rotation matrix

              r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

              R = Are there reallynine parameters

              Calibration Object

              The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

              The Tsai Procedure

              bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

              bull Several images are taken of the calibration object yielding point correspondences at different distances

              bull Tsairsquos algorithm requires n gt 5 correspondences

              (xi yi zi) (ui vi)) | i = 1hellipn

              between (real) image points and 3D points

              bull Lots of details in Chapter 13

              We use the camera parameters of each camera for general

              stereoP

              P1=(r1c1)P2=(r2c2)

              y1

              y2

              x1

              x2

              e1

              e2

              B

              C

              For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

              1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

              r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

              r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

              Direct solution uses 3 equations wonrsquot give reliable results

              Solve by computing the closestapproach of the two skew rays

              V

              If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

              P1

              Q1

              Psolve forshortest

              V = (P1 + a1u1) ndash (Q1 + a2u2)

              (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

              u1

              u2

              • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
              • 3D Shape from X means getting 3D coordinates from different methods
              • Perspective Imaging Model 1D
              • Perspective in 2D (Simplified)
              • 3D from Stereo
              • Depth Perception from Stereo Simple Model Parallel Optic Axes
              • Resultant Depth Calculation
              • Finding Correspondences
              • 3 Main Matching Methods
              • Epipolar Geometry Constraint 1 Normal Pair of Images
              • Epipolar Geometry General Case
              • Constraints
              • Structured Light
              • Structured Light 3D Computation
              • Depth from Multiple Light Stripes
              • Our (former) System 4-camera light-striping stereo
              • Camera Model Recall there are 5 Different Frames of Reference
              • The Camera Model
              • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
              • Camera Calibration
              • Intrinsic Parameters
              • Extrinsic Parameters
              • Calibration Object
              • The Tsai Procedure
              • We use the camera parameters of each camera for general stereo
              • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
              • Solve by computing the closest approach of the two skew rays
              • Slide 28
              • Slide 29
              • Slide 30
              • Slide 31
              • Slide 32
              • Slide 33
              • Slide 34
              • Slide 35
              • Slide 36
              • Slide 37
              • Slide 38
              • Slide 39
              • Slide 40

                Finding Correspondences

                bull If the correspondence is correct triangulation works VERY well

                bull But correspondence finding is not perfectly solved (What methods have we studied)

                bull For some very specific applications it can be solved for those specific kind of images eg windshield of a car

                deg deg

                3 Main Matching Methods

                1 Cross correlation using small windows

                2 Symbolic feature matching usually using segmentscorners

                3 Use the newer interest operators ie SIFT

                dense

                sparse

                sparse

                Epipolar Geometry Constraint1 Normal Pair of Images

                x

                y1

                y2

                z1 z2

                C1 C2b

                P

                P1 P2

                epipolarplane

                The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

                The match for P1 (or P2) in the other image must lie on the same epipolar line

Epipolar Geometry: General Case

[Figure: cameras C1 and C2 view a 3D point P, which images at P1 and P2; the epipoles e1 and e2 are where the baseline pierces the two image planes.]

Constraints

[Figure: points P and Q project onto corresponding epipolar lines through the epipoles e1 and e2 of cameras C1 and C2.]

1. Epipolar Constraint: Matching points lie on corresponding epipolar lines.

2. Ordering Constraint: Usually in the same order across the lines.

Structured Light

3D data can also be derived using:

• a single camera

• a light source that can produce stripe(s) on the 3D object

[Figure: the light source projects a light stripe onto the 3D object, which the camera views.]

Structured Light: 3D Computation

3D data can also be derived using:

• a single camera

• a light source that can produce stripe(s) on the 3D object

[Figure: light source at angle θ to the x axis, a baseline b from the camera at (0,0,0); focal length f; the stripe on the 3D point (x, y, z) images at (x′, y′, f).]

                      b
  [x, y, z]  =  ---------------  [x′, y′, f]
                 f cot θ − x′
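The stripe formula above can be turned into a small helper. A minimal sketch with illustrative names, assuming the projector angle θ is given in radians:

```python
import math

def stripe_point_3d(x_img, y_img, f, b, theta):
    """Triangulate a 3D point from one camera plus a light stripe.
    (x_img, y_img) is the image point, f the focal length, b the
    baseline to the light source, theta the stripe plane's angle with
    the x axis.  Implements [x,y,z] = b/(f*cot(theta) - x') [x',y',f]."""
    k = b / (f / math.tan(theta) - x_img)
    return (k * x_img, k * y_img, k * f)
```

Note that depth comes from a single image point because the known stripe plane replaces the second camera of ordinary stereo.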

                Depth from Multiple Light Stripes

What are these objects?

Our (former) System: 4-camera light-striping stereo

[Figure: a projector and four cameras arranged around a rotation table holding the 3D object.]

Camera Model: Recall there are 5 Different Frames of Reference

• Object

• World

• Camera

• Real Image

• Pixel Image

[Figure: a pyramid object with its own object frame (xp, yp, zp), a world frame W (xw, yw, zw), a camera frame C (xc, yc, zc), and a real image frame (xf, yf); object point A images at a.]

The Camera Model

How do we get an image point IP from a world point P?

  ⎡ s·IPr ⎤     ⎡ c11 c12 c13 c14 ⎤   ⎡ Px ⎤
  ⎢ s·IPc ⎥  =  ⎢ c21 c22 c23 c24 ⎥   ⎢ Py ⎥
  ⎣   s   ⎦     ⎣ c31 c32 c33  1  ⎦   ⎢ Pz ⎥
                                      ⎣  1 ⎦

  image point  =  camera matrix C  ×  world point

What's in C?

The camera model handles the rigid body transformation from world coordinates to camera coordinates, plus the perspective transformation to image coordinates:

1. CP = T R WP

2. FP = Π(f) CP

  ⎡ s·FPx ⎤     ⎡ 1  0   0   0 ⎤   ⎡ CPx ⎤
  ⎢ s·FPy ⎥  =  ⎢ 0  1   0   0 ⎥   ⎢ CPy ⎥
  ⎢ s·FPz ⎥     ⎢ 0  0   1   0 ⎥   ⎢ CPz ⎥
  ⎣   s   ⎦     ⎣ 0  0  1/f  0 ⎦   ⎣  1  ⎦

The perspective transformation Π(f) maps a 3D point in camera coordinates to an image point.

Why is there not a scale factor here?
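In code, the homogeneous projection above is one matrix multiply followed by dividing out the scale factor s. A minimal sketch (the function name is my own):

```python
import numpy as np

def project(C, P_world):
    """Apply a 3x4 camera matrix C to a 3D world point, returning the
    image point (r, c) after dividing out the homogeneous scale s."""
    P = np.append(np.asarray(P_world, float), 1.0)   # homogeneous [x, y, z, 1]
    s_r, s_c, s = C @ P
    return s_r / s, s_c / s
```

The division by s is exactly the perspective divide: for the Π(f) matrix above, s = z/f, which reproduces x·f/z and y·f/z.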

Camera Calibration

• In order to work in 3D, we need to know the parameters of the particular camera setup.

• Solving for the camera parameters is called calibration.

[Figure: world frame W (xw, yw, zw) and camera frame C (xc, yc, zc).]

• intrinsic parameters are of the camera device

• extrinsic parameters are where the camera sits in the world

Intrinsic Parameters

• principal point (u0, v0)

• scale factors (dx, dy)

• aspect ratio distortion factor

• focal length f

• lens distortion factor (models radial lens distortion)

[Figure: the optical axis from the camera center C meets the image plane at the principal point (u0, v0), at focal distance f.]

Extrinsic Parameters

• translation parameters t = [tx, ty, tz]

• rotation matrix

        ⎡ r11 r12 r13 0 ⎤
    R = ⎢ r21 r22 r23 0 ⎥
        ⎢ r31 r32 r33 0 ⎥
        ⎣  0   0   0  1 ⎦

Are there really nine parameters?

                Calibration Object

The idea is to snap images at different depths and get a lot of 2D-3D point correspondences.

The Tsai Procedure

• The Tsai procedure was developed by Roger Tsai at IBM Research and is the most widely used.

• Several images are taken of the calibration object, yielding point correspondences at different distances.

• Tsai's algorithm requires n > 5 correspondences

  { ((xi, yi, zi), (ui, vi)) | i = 1, …, n }

  between (real) image points and 3D points.

• Lots of details in Chapter 13.
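To show the shape of what calibration computes: with the camera matrix normalized so c34 = 1, each correspondence contributes two linear equations in the remaining 11 unknowns of C, which can be solved by least squares. This simple linear (DLT-style) estimate is a hedged sketch, not Tsai's full procedure — in particular it ignores radial lens distortion:

```python
import numpy as np

def calibrate_linear(points_3d, points_2d):
    """Least-squares estimate of the 11 unknowns of the 3x4 camera
    matrix C (c34 fixed to 1) from n >= 6 point correspondences.
    Each pair ((x,y,z),(u,v)) yields two linear equations, derived from
    s*u = row1.P and s = row3.P."""
    A, rhs = [], []
    for (x, y, z), (u, v) in zip(points_3d, points_2d):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        rhs.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        rhs.append(v)
    m, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(rhs, float),
                            rcond=None)
    return np.append(m, 1.0).reshape(3, 4)       # rebuild C with c34 = 1
```

The 3D points must not all lie in one plane, or the system becomes rank-deficient; this is why the calibration object is imaged at several depths.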

We use the camera parameters of each camera for general stereo.

[Figure: 3D point P images at P1 = (r1, c1) and P2 = (r2, c2) in two calibrated cameras with camera matrices B and C; e1 and e2 are the epipoles.]

For a correspondence (r1, c1) in image 1 to (r2, c2) in image 2:

1. Both cameras were calibrated, so both camera matrices B and C are known. From the two camera equations we get 4 linear equations in the 3 unknowns (x, y, z):

  r1 − b14 = (b11 − b31·r1)x + (b12 − b32·r1)y + (b13 − b33·r1)z
  c1 − b24 = (b21 − b31·c1)x + (b22 − b32·c1)y + (b23 − b33·c1)z

  r2 − c14 = (c11 − c31·r2)x + (c12 − c32·r2)y + (c13 − c33·r2)z
  c2 − c24 = (c21 − c31·c2)x + (c22 − c32·c2)y + (c23 − c33·c2)z

A direct solution that uses only 3 of the equations won't give reliable results.
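The 4-equations-in-3-unknowns system above is better solved by least squares over all four equations than by the unreliable 3-equation direct solution. A minimal sketch (names are illustrative; the camera matrices are assumed normalized so b34 = c34 = 1, though the code keeps the general term):

```python
import numpy as np

def triangulate_linear(B, C, p1, p2):
    """Solve the 4 linear equations in (x, y, z) by least squares.
    B and C are the two 3x4 camera matrices; p1 = (r1, c1) and
    p2 = (r2, c2) are corresponding image points."""
    rows, rhs = [], []
    for M, (r, c) in ((np.asarray(B, float), p1), (np.asarray(C, float), p2)):
        # From s*r = row1.P and s = row3.P:
        rows.append(M[0, :3] - M[2, :3] * r)
        rhs.append(r * M[2, 3] - M[0, 3])
        rows.append(M[1, :3] - M[2, :3] * c)
        rhs.append(c * M[2, 3] - M[1, 3])
    xyz, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return xyz
```

With noisy correspondences the least-squares answer is an algebraic compromise; the closest-approach-of-rays construction on the next slide gives the same idea a direct geometric form.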

Solve by computing the closest approach of the two skew rays.

[Figure: rays P1 + a1·u1 and Q1 + a2·u2 with direction vectors u1 and u2; V is the shortest connecting segment, and P is its midpoint.]

If the rays intersected perfectly in 3D, the intersection would be P. Instead, we solve for the shortest line segment connecting the two rays and let P be its midpoint.

  V = (P1 + a1·u1) − (Q1 + a2·u2)

  [(P1 + a1·u1) − (Q1 + a2·u2)] · u1 = 0
  [(P1 + a1·u1) − (Q1 + a2·u2)] · u2 = 0
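The two perpendicularity conditions above form a 2×2 linear system in a1 and a2. A minimal sketch with illustrative names:

```python
import numpy as np

def closest_approach_midpoint(P1, u1, Q1, u2):
    """Return P, the midpoint of the shortest segment between the rays
    P1 + a1*u1 and Q1 + a2*u2, found by solving V.u1 = 0 and V.u2 = 0
    for a1 and a2."""
    P1, u1, Q1, u2 = (np.asarray(v, float) for v in (P1, u1, Q1, u2))
    # Expanding V = (P1 - Q1) + a1*u1 - a2*u2 in the two dot products:
    A = np.array([[u1 @ u1, -(u2 @ u1)],
                  [u1 @ u2, -(u2 @ u2)]])
    b = np.array([(Q1 - P1) @ u1, (Q1 - P1) @ u2])
    a1, a2 = np.linalg.solve(A, b)
    return 0.5 * ((P1 + a1 * u1) + (Q1 + a2 * u2))
```

The 2×2 system is singular only when u1 and u2 are parallel, i.e. when the rays never converge.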

                • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                • 3D Shape from X means getting 3D coordinates from different methods
                • Perspective Imaging Model 1D
                • Perspective in 2D (Simplified)
                • 3D from Stereo
                • Depth Perception from Stereo Simple Model Parallel Optic Axes
                • Resultant Depth Calculation
                • Finding Correspondences
                • 3 Main Matching Methods
                • Epipolar Geometry Constraint 1 Normal Pair of Images
                • Epipolar Geometry General Case
                • Constraints
                • Structured Light
                • Structured Light 3D Computation
                • Depth from Multiple Light Stripes
                • Our (former) System 4-camera light-striping stereo
                • Camera Model Recall there are 5 Different Frames of Reference
                • The Camera Model
                • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                • Camera Calibration
                • Intrinsic Parameters
                • Extrinsic Parameters
                • Calibration Object
                • The Tsai Procedure
                • We use the camera parameters of each camera for general stereo
                • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                • Solve by computing the closest approach of the two skew rays
                • Slide 28
                • Slide 29
                • Slide 30
                • Slide 31
                • Slide 32
                • Slide 33
                • Slide 34
                • Slide 35
                • Slide 36
                • Slide 37
                • Slide 38
                • Slide 39
                • Slide 40

                  3 Main Matching Methods

                  1 Cross correlation using small windows

                  2 Symbolic feature matching usually using segmentscorners

                  3 Use the newer interest operators ie SIFT

                  dense

                  sparse

                  sparse

                  Epipolar Geometry Constraint1 Normal Pair of Images

                  x

                  y1

                  y2

                  z1 z2

                  C1 C2b

                  P

                  P1 P2

                  epipolarplane

                  The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

                  The match for P1 (or P2) in the other image must lie on the same epipolar line

                  Epipolar GeometryGeneral Case

                  P

                  P1

                  P2

                  y1

                  y2

                  x1

                  x2

                  e1

                  e2

                  C1

                  C2

                  Constraints

                  P

                  e1

                  e2

                  C1

                  C2

                  1 Epipolar Constraint Matching points lie on corresponding epipolar lines

                  2 Ordering Constraint Usually in the same order across the lines

                  Q

                  Structured Light

                  3D data can also be derived using

                  bull a single camera

                  bull a light source that can produce stripe(s) on the 3D object

                  lightsource

                  camera

                  light stripe

                  Structured Light3D Computation

                  3D data can also be derived using

                  bull a single camera

                  bull a light source that can produce stripe(s) on the 3D object

                  lightsource

                  x axisf

                  (xacuteyacutef)

                  3D point(x y z)

                  b

                  b[x y z] = --------------- [xacute yacute f] f cot - xacute

                  (000)

                  3D image

                  Depth from Multiple Light Stripes

                  What are these objects

                  Our (former) System4-camera light-striping stereo

                  projector

                  rotationtable

                  cameras

                  3Dobject

                  Camera Model Recall there are 5 Different Frames of Reference

                  bull Object

                  bull World

                  bull Camera

                  bull Real Image

                  bull Pixel Image

                  yc

                  xc

                  zc

                  zwC

                  Wyw

                  xw

                  A

                  a

                  xf

                  yf

                  xp

                  yp

                  zppyramidobject

                  image

                  The Camera Model

                  How do we get an image point IP from a world point P

                  c11 c12 c13 c14

                  c21 c22 c23 c24

                  c31 c32 c33 1

                  s IPr

                  s IPc

                  s

                  Px

                  Py

                  Pz

                  1

                  =

                  imagepoint

                  camera matrix C worldpoint

                  Whatrsquos in C

                  The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

                  1 CP = T R WP2 FP = (f) CP

                  s FPx

                  s FPy

                  s FPz

                  s

                  1 0 0 00 1 0 00 0 1 00 0 1f 0

                  CPx

                  CPy

                  CPz

                  1

                  =

                  perspectivetransformation

                  imagepoint

                  3D point incamera

                  coordinates

                  Why is there not a scale factor here

                  Camera Calibration

                  bull In order work in 3D we need to know the parameters of the particular camera setup

                  bull Solving for the camera parameters is called calibration

                  yw

                  xwzw

                  W

                  yc

                  xc

                  zc

                  C

                  bull intrinsic parameters are of the camera device

                  bull extrinsic parameters are where the camera sits in the world

                  Intrinsic Parameters

                  bull principal point (u0v0)

                  bull scale factors (dxdy)

                  bull aspect ratio distortion factor

                  bull focal length f

                  bull lens distortion factor (models radial lens distortion)

                  C

                  (u0v0)

                  f

                  Extrinsic Parameters

                  bull translation parameters t = [tx ty tz]

                  bull rotation matrix

                  r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                  R = Are there reallynine parameters

                  Calibration Object

                  The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                  The Tsai Procedure

                  bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                  bull Several images are taken of the calibration object yielding point correspondences at different distances

                  bull Tsairsquos algorithm requires n gt 5 correspondences

                  (xi yi zi) (ui vi)) | i = 1hellipn

                  between (real) image points and 3D points

                  bull Lots of details in Chapter 13

                  We use the camera parameters of each camera for general

                  stereoP

                  P1=(r1c1)P2=(r2c2)

                  y1

                  y2

                  x1

                  x2

                  e1

                  e2

                  B

                  C

                  For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                  1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                  r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                  r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                  Direct solution uses 3 equations wonrsquot give reliable results

                  Solve by computing the closestapproach of the two skew rays

                  V

                  If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                  P1

                  Q1

                  Psolve forshortest

                  V = (P1 + a1u1) ndash (Q1 + a2u2)

                  (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                  u1

                  u2

                  • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                  • 3D Shape from X means getting 3D coordinates from different methods
                  • Perspective Imaging Model 1D
                  • Perspective in 2D (Simplified)
                  • 3D from Stereo
                  • Depth Perception from Stereo Simple Model Parallel Optic Axes
                  • Resultant Depth Calculation
                  • Finding Correspondences
                  • 3 Main Matching Methods
                  • Epipolar Geometry Constraint 1 Normal Pair of Images
                  • Epipolar Geometry General Case
                  • Constraints
                  • Structured Light
                  • Structured Light 3D Computation
                  • Depth from Multiple Light Stripes
                  • Our (former) System 4-camera light-striping stereo
                  • Camera Model Recall there are 5 Different Frames of Reference
                  • The Camera Model
                  • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                  • Camera Calibration
                  • Intrinsic Parameters
                  • Extrinsic Parameters
                  • Calibration Object
                  • The Tsai Procedure
                  • We use the camera parameters of each camera for general stereo
                  • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                  • Solve by computing the closest approach of the two skew rays
                  • Slide 28
                  • Slide 29
                  • Slide 30
                  • Slide 31
                  • Slide 32
                  • Slide 33
                  • Slide 34
                  • Slide 35
                  • Slide 36
                  • Slide 37
                  • Slide 38
                  • Slide 39
                  • Slide 40

                    Epipolar Geometry Constraint1 Normal Pair of Images

                    x

                    y1

                    y2

                    z1 z2

                    C1 C2b

                    P

                    P1 P2

                    epipolarplane

                    The epipolar plane cuts through the image plane(s)forming 2 epipolar lines

                    The match for P1 (or P2) in the other image must lie on the same epipolar line

                    Epipolar GeometryGeneral Case

                    P

                    P1

                    P2

                    y1

                    y2

                    x1

                    x2

                    e1

                    e2

                    C1

                    C2

                    Constraints

                    P

                    e1

                    e2

                    C1

                    C2

                    1 Epipolar Constraint Matching points lie on corresponding epipolar lines

                    2 Ordering Constraint Usually in the same order across the lines

                    Q

                    Structured Light

                    3D data can also be derived using

                    bull a single camera

                    bull a light source that can produce stripe(s) on the 3D object

                    lightsource

                    camera

                    light stripe

                    Structured Light3D Computation

                    3D data can also be derived using

                    bull a single camera

                    bull a light source that can produce stripe(s) on the 3D object

                    lightsource

                    x axisf

                    (xacuteyacutef)

                    3D point(x y z)

                    b

                    b[x y z] = --------------- [xacute yacute f] f cot - xacute

                    (000)

                    3D image

                    Depth from Multiple Light Stripes

                    What are these objects

                    Our (former) System4-camera light-striping stereo

                    projector

                    rotationtable

                    cameras

                    3Dobject

                    Camera Model Recall there are 5 Different Frames of Reference

                    bull Object

                    bull World

                    bull Camera

                    bull Real Image

                    bull Pixel Image

                    yc

                    xc

                    zc

                    zwC

                    Wyw

                    xw

                    A

                    a

                    xf

                    yf

                    xp

                    yp

                    zppyramidobject

                    image

                    The Camera Model

                    How do we get an image point IP from a world point P

                    c11 c12 c13 c14

                    c21 c22 c23 c24

                    c31 c32 c33 1

                    s IPr

                    s IPc

                    s

                    Px

                    Py

                    Pz

                    1

                    =

                    imagepoint

                    camera matrix C worldpoint

                    Whatrsquos in C

                    The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

                    1 CP = T R WP2 FP = (f) CP

                    s FPx

                    s FPy

                    s FPz

                    s

                    1 0 0 00 1 0 00 0 1 00 0 1f 0

                    CPx

                    CPy

                    CPz

                    1

                    =

                    perspectivetransformation

                    imagepoint

                    3D point incamera

                    coordinates

                    Why is there not a scale factor here

                    Camera Calibration

                    bull In order work in 3D we need to know the parameters of the particular camera setup

                    bull Solving for the camera parameters is called calibration

                    yw

                    xwzw

                    W

                    yc

                    xc

                    zc

                    C

                    bull intrinsic parameters are of the camera device

                    bull extrinsic parameters are where the camera sits in the world

                    Intrinsic Parameters

                    bull principal point (u0v0)

                    bull scale factors (dxdy)

                    bull aspect ratio distortion factor

                    bull focal length f

                    bull lens distortion factor (models radial lens distortion)

                    C

                    (u0v0)

                    f

                    Extrinsic Parameters

                    bull translation parameters t = [tx ty tz]

                    bull rotation matrix

                    r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                    R = Are there reallynine parameters

                    Calibration Object

                    The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                    The Tsai Procedure

                    bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                    bull Several images are taken of the calibration object yielding point correspondences at different distances

                    bull Tsairsquos algorithm requires n gt 5 correspondences

                    (xi yi zi) (ui vi)) | i = 1hellipn

                    between (real) image points and 3D points

                    bull Lots of details in Chapter 13

                    We use the camera parameters of each camera for general

                    stereoP

                    P1=(r1c1)P2=(r2c2)

                    y1

                    y2

                    x1

                    x2

                    e1

                    e2

                    B

                    C

                    For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                    1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                    r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                    r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                    Direct solution uses 3 equations wonrsquot give reliable results

                    Solve by computing the closestapproach of the two skew rays

                    V

                    If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                    P1

                    Q1

                    Psolve forshortest

                    V = (P1 + a1u1) ndash (Q1 + a2u2)

                    (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                    u1

                    u2

                    • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                    • 3D Shape from X means getting 3D coordinates from different methods
                    • Perspective Imaging Model 1D
                    • Perspective in 2D (Simplified)
                    • 3D from Stereo
                    • Depth Perception from Stereo Simple Model Parallel Optic Axes
                    • Resultant Depth Calculation
                    • Finding Correspondences
                    • 3 Main Matching Methods
                    • Epipolar Geometry Constraint 1 Normal Pair of Images
                    • Epipolar Geometry General Case
                    • Constraints
                    • Structured Light
                    • Structured Light 3D Computation
                    • Depth from Multiple Light Stripes
                    • Our (former) System 4-camera light-striping stereo
                    • Camera Model Recall there are 5 Different Frames of Reference
                    • The Camera Model
                    • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                    • Camera Calibration
                    • Intrinsic Parameters
                    • Extrinsic Parameters
                    • Calibration Object
                    • The Tsai Procedure
                    • We use the camera parameters of each camera for general stereo
                    • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                    • Solve by computing the closest approach of the two skew rays
                    • Slide 28
                    • Slide 29
                    • Slide 30
                    • Slide 31
                    • Slide 32
                    • Slide 33
                    • Slide 34
                    • Slide 35
                    • Slide 36
                    • Slide 37
                    • Slide 38
                    • Slide 39
                    • Slide 40

                      Epipolar GeometryGeneral Case

                      P

                      P1

                      P2

                      y1

                      y2

                      x1

                      x2

                      e1

                      e2

                      C1

                      C2

                      Constraints

                      P

                      e1

                      e2

                      C1

                      C2

                      1 Epipolar Constraint Matching points lie on corresponding epipolar lines

                      2 Ordering Constraint Usually in the same order across the lines

                      Q

                      Structured Light

                      3D data can also be derived using

                      bull a single camera

                      bull a light source that can produce stripe(s) on the 3D object

                      lightsource

                      camera

                      light stripe

                      Structured Light3D Computation

                      3D data can also be derived using

                      bull a single camera

                      bull a light source that can produce stripe(s) on the 3D object

                      lightsource

                      x axisf

                      (xacuteyacutef)

                      3D point(x y z)

                      b

                      b[x y z] = --------------- [xacute yacute f] f cot - xacute

                      (000)

                      3D image

                      Depth from Multiple Light Stripes

                      What are these objects

                      Our (former) System4-camera light-striping stereo

                      projector

                      rotationtable

                      cameras

                      3Dobject

                      Camera Model Recall there are 5 Different Frames of Reference

                      bull Object

                      bull World

                      bull Camera

                      bull Real Image

                      bull Pixel Image

                      yc

                      xc

                      zc

                      zwC

                      Wyw

                      xw

                      A

                      a

                      xf

                      yf

                      xp

                      yp

                      zppyramidobject

                      image

                      The Camera Model

                      How do we get an image point IP from a world point P

                      c11 c12 c13 c14

                      c21 c22 c23 c24

                      c31 c32 c33 1

                      s IPr

                      s IPc

                      s

                      Px

                      Py

                      Pz

                      1

                      =

                      imagepoint

                      camera matrix C worldpoint

                      Whatrsquos in C

                      The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

                      1 CP = T R WP2 FP = (f) CP

                      s FPx

                      s FPy

                      s FPz

                      s

                      1 0 0 00 1 0 00 0 1 00 0 1f 0

                      CPx

                      CPy

                      CPz

                      1

                      =

                      perspectivetransformation

                      imagepoint

                      3D point incamera

                      coordinates

                      Why is there not a scale factor here

                      Camera Calibration

                      bull In order work in 3D we need to know the parameters of the particular camera setup

                      bull Solving for the camera parameters is called calibration

                      yw

                      xwzw

                      W

                      yc

                      xc

                      zc

                      C

                      bull intrinsic parameters are of the camera device

                      bull extrinsic parameters are where the camera sits in the world

Intrinsic Parameters

• principal point (u0, v0)

• scale factors (dx, dy)

• aspect ratio distortion factor

• focal length f

• lens distortion factor (models radial lens distortion)

[Figure: camera frame C with principal point (u0, v0) on the image plane at focal length f]

Extrinsic Parameters

• translation parameters t = [tx ty tz]

• rotation matrix

          r11  r12  r13  0
    R =   r21  r22  r23  0
          r31  r32  r33  0
           0    0    0   1

Are there really nine parameters?
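No: the nine entries r11..r33 carry only three degrees of freedom, because a rotation matrix must be orthonormal. A quick numerical check (the Euler-angle convention below is chosen arbitrarily for the sketch):

```python
import numpy as np

def rotation_xyz(ax, ay, az):
    """Build a 3x3 rotation matrix from three angles (radians).
    All nine entries are determined by just these three parameters."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R = rotation_xyz(0.1, 0.2, 0.3)
# Orthonormality (R^T R = I) imposes 6 constraints on the 9 entries,
# leaving 3 free parameters, matching the 3 input angles.
```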

Calibration Object

The idea is to snap images at different depths and get a lot of 2D-3D point correspondences.

The Tsai Procedure

• The Tsai procedure was developed by Roger Tsai at IBM Research and is the most widely used.

• Several images are taken of the calibration object, yielding point correspondences at different distances.

• Tsai's algorithm requires n > 5 correspondences

      {((xi, yi, zi), (ui, vi)) | i = 1, …, n}

  between (real) image points and 3D points.

• Lots of details in Chapter 13.
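For intuition, a simplified linear least-squares fit of the 11 unknown camera-matrix entries (with c34 = 1) from such correspondences might look like this. It is not Tsai's full procedure (no radial distortion, no staged solve); the helper name is made up:

```python
import numpy as np

def calibrate(points3d, points2d):
    """Fit the 3x4 camera matrix C (c34 fixed to 1) from n > 5
    correspondences ((x, y, z), (u, v)) by linear least squares.
    Each correspondence contributes two equations, e.g.
    c11*x + c12*y + c13*z + c14 - u*(c31*x + c32*y + c33*z) = u."""
    A, b = [], []
    for (x, y, z), (u, v) in zip(points3d, points2d):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z]); b.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z]); b.append(v)
    c, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(c, 1.0).reshape(3, 4)   # full 3x4 matrix C
```

With noise-free, non-coplanar points the fit recovers the generating matrix; with real measurements it gives the least-squares estimate.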

We use the camera parameters of each camera for general stereo.

[Figure: 3D point P projects to P1 = (r1, c1) in image 1 (camera matrix B, epipole e1) and to P2 = (r2, c2) in image 2 (camera matrix C, epipole e2)]

For a correspondence (r1, c1) in image 1 to (r2, c2) in image 2:

1. Both cameras were calibrated, so both camera matrices B and C are known. From the two camera equations we get 4 linear equations in 3 unknowns:

   r1 = (b11 - b31 r1) x + (b12 - b32 r1) y + (b13 - b33 r1) z
   c1 = (b21 - b31 c1) x + (b22 - b32 c1) y + (b23 - b33 c1) z

   r2 = (c11 - c31 r2) x + (c12 - c32 r2) y + (c13 - c33 r2) z
   c2 = (c21 - c31 c2) x + (c22 - c32 c2) y + (c23 - c33 c2) z

A direct solution using only 3 of the equations won't give reliable results.
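A least-squares solve over all 4 equations can be sketched as follows. The sketch uses the general right-hand side r·m34 - m14, which reduces to the equations above when the m14 and m24 entries are zero; the matrices in the check are invented for illustration:

```python
import numpy as np

def triangulate(B, C, p1, p2):
    """Least-squares 3D point (x, y, z) from the 4 linear equations.
    B, C: 3x4 camera matrices (last entry 1); p1 = (r1, c1) and
    p2 = (r2, c2) are the corresponding image points."""
    A, d = [], []
    for M, (r, c) in ((B, p1), (C, p2)):
        # row-coordinate equation: (m1 - r*m3) . (x,y,z) = r*m34 - m14
        A.append(M[0, :3] - r * M[2, :3]); d.append(r * M[2, 3] - M[0, 3])
        # column-coordinate equation, analogously with the second row
        A.append(M[1, :3] - c * M[2, :3]); d.append(c * M[2, 3] - M[1, 3])
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(d), rcond=None)
    return xyz
```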

Solve by computing the closest approach of the two skew rays.

If the rays intersected perfectly in 3D, the intersection would be P. Instead, we solve for the shortest line segment connecting the two rays and let P be its midpoint.

[Figure: one ray from P1 with direction u1, another from Q1 with direction u2; V is the shortest connecting segment and P its midpoint]

   V = (P1 + a1 u1) - (Q1 + a2 u2)

   ((P1 + a1 u1) - (Q1 + a2 u2)) · u1 = 0
   ((P1 + a1 u1) - (Q1 + a2 u2)) · u2 = 0
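Solving those two dot-product equations for a1 and a2 and taking the midpoint of the connecting segment can be sketched as:

```python
import numpy as np

def ray_midpoint(P1, u1, Q1, u2):
    """Closest approach of rays P1 + a1*u1 and Q1 + a2*u2:
    solve the two perpendicularity equations for a1, a2 and return
    the midpoint of the shortest connecting segment as P."""
    P1, Q1 = np.asarray(P1, float), np.asarray(Q1, float)
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    w = Q1 - P1
    # 2x2 linear system in (a1, a2) from V.u1 = 0 and V.u2 = 0
    A = np.array([[u1 @ u1, -(u2 @ u1)],
                  [u1 @ u2, -(u2 @ u2)]])
    a1, a2 = np.linalg.solve(A, [w @ u1, w @ u2])
    return 0.5 * ((P1 + a1 * u1) + (Q1 + a2 * u2))
```

For rays that truly intersect, the midpoint is the intersection itself; for skew rays it is the point halfway along the shortest segment.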

• 3D Sensing and Reconstruction; Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4
• 3D Shape from X means getting 3D coordinates from different methods
• Perspective Imaging Model: 1D
• Perspective in 2D (Simplified)
• 3D from Stereo
• Depth Perception from Stereo: Simple Model, Parallel Optic Axes
• Resultant Depth Calculation
• Finding Correspondences
• 3 Main Matching Methods
• Epipolar Geometry Constraint: 1. Normal Pair of Images
• Epipolar Geometry: General Case
• Constraints
• Structured Light
• Structured Light 3D Computation
• Depth from Multiple Light Stripes
• Our (former) System: 4-camera light-striping stereo
• Camera Model: Recall there are 5 Different Frames of Reference
• The Camera Model
• The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
• Camera Calibration
• Intrinsic Parameters
• Extrinsic Parameters
• Calibration Object
• The Tsai Procedure
• We use the camera parameters of each camera for general stereo
• For a correspondence (r1,c1) in image 1 to (r2,c2) in image 2
• Solve by computing the closest approach of the two skew rays

Constraints

[Figure: 3D points P and Q viewed by two cameras with centers C1 and C2 and epipoles e1 and e2]

1. Epipolar Constraint: Matching points lie on corresponding epipolar lines.

2. Ordering Constraint: Points usually appear in the same order along corresponding epipolar lines.

Structured Light

3D data can also be derived using:

• a single camera

• a light source that can produce stripe(s) on the 3D object

[Figure: light source casting a light stripe across the 3D object, viewed by a camera]

Structured Light: 3D Computation

3D data can also be derived using:

• a single camera

• a light source that can produce stripe(s) on the 3D object

[Figure: camera at origin (0, 0, 0) with focal length f and image point (x′, y′, f); light source at baseline distance b along the x axis, projecting its plane at angle θ onto the 3D point (x, y, z)]

                   b
    [x y z] = ------------- [x′ y′ f]
               f cot θ - x′
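The stripe equation can be coded directly; symbols as above (image point (x′, y′), focal length f, baseline b, light-plane angle θ), with the function name and check values made up for the sketch:

```python
import math

def stripe_point(xp, yp, f, b, theta):
    """Structured-light triangulation: recover the 3D point (x, y, z)
    imaged at (x', y', f), given baseline b and light-plane angle theta:
        [x y z] = b / (f*cot(theta) - x') * [x' y' f]"""
    scale = b / (f / math.tan(theta) - xp)
    return (scale * xp, scale * yp, scale * f)

# e.g. with f = 1, b = 1, theta = 45 degrees, image point (0.5, 0.25):
p = stripe_point(0.5, 0.25, 1.0, 1.0, math.pi / 4)
```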

Depth from Multiple Light Stripes

What are these objects?

Our (former) System: 4-camera light-striping stereo

[Figure: projector and four cameras arranged around a rotation table holding the 3D object]

Camera Model: Recall there are 5 Different Frames of Reference

• Object

• World

• Camera

• Real Image

• Pixel Image

[Figure: pyramid object with object frame (xp, yp, zp), world frame W (xw, yw, zw), camera frame C (xc, yc, zc), real image frame (xf, yf), and image points A and a]



                            Structured Light3D Computation

                            3D data can also be derived using

                            bull a single camera

                            bull a light source that can produce stripe(s) on the 3D object

                            lightsource

                            x axisf

                            (xacuteyacutef)

                            3D point(x y z)

                            b

                            b[x y z] = --------------- [xacute yacute f] f cot - xacute

                            (000)

                            3D image

                            Depth from Multiple Light Stripes

                            What are these objects

                            Our (former) System4-camera light-striping stereo

                            projector

                            rotationtable

                            cameras

                            3Dobject

                            Camera Model Recall there are 5 Different Frames of Reference

                            bull Object

                            bull World

                            bull Camera

                            bull Real Image

                            bull Pixel Image

                            yc

                            xc

                            zc

                            zwC

                            Wyw

                            xw

                            A

                            a

                            xf

                            yf

                            xp

                            yp

                            zppyramidobject

                            image

                            The Camera Model

                            How do we get an image point IP from a world point P

                            c11 c12 c13 c14

                            c21 c22 c23 c24

                            c31 c32 c33 1

                            s IPr

                            s IPc

                            s

                            Px

                            Py

                            Pz

                            1

                            =

                            imagepoint

                            camera matrix C worldpoint

                            Whatrsquos in C

                            The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

                            1 CP = T R WP2 FP = (f) CP

                            s FPx

                            s FPy

                            s FPz

                            s

                            1 0 0 00 1 0 00 0 1 00 0 1f 0

                            CPx

                            CPy

                            CPz

                            1

                            =

                            perspectivetransformation

                            imagepoint

                            3D point incamera

                            coordinates

                            Why is there not a scale factor here

                            Camera Calibration

                            bull In order work in 3D we need to know the parameters of the particular camera setup

                            bull Solving for the camera parameters is called calibration

                            yw

                            xwzw

                            W

                            yc

                            xc

                            zc

                            C

                            bull intrinsic parameters are of the camera device

                            bull extrinsic parameters are where the camera sits in the world

                            Intrinsic Parameters

                            bull principal point (u0v0)

                            bull scale factors (dxdy)

                            bull aspect ratio distortion factor

                            bull focal length f

                            bull lens distortion factor (models radial lens distortion)

                            C

                            (u0v0)

                            f

                            Extrinsic Parameters

                            bull translation parameters t = [tx ty tz]

                            bull rotation matrix

                            r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                            R = Are there reallynine parameters

                            Calibration Object

                            The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                            The Tsai Procedure

                            bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                            bull Several images are taken of the calibration object yielding point correspondences at different distances

                            bull Tsairsquos algorithm requires n gt 5 correspondences

                            (xi yi zi) (ui vi)) | i = 1hellipn

                            between (real) image points and 3D points

                            bull Lots of details in Chapter 13

                            We use the camera parameters of each camera for general

                            stereoP

                            P1=(r1c1)P2=(r2c2)

                            y1

                            y2

                            x1

                            x2

                            e1

                            e2

                            B

                            C

                            For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                            1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                            r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                            r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                            Direct solution uses 3 equations wonrsquot give reliable results

                            Solve by computing the closestapproach of the two skew rays

                            V

                            If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                            P1

                            Q1

                            Psolve forshortest

                            V = (P1 + a1u1) ndash (Q1 + a2u2)

                            (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                            u1

                            u2

                            • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                            • 3D Shape from X means getting 3D coordinates from different methods
                            • Perspective Imaging Model 1D
                            • Perspective in 2D (Simplified)
                            • 3D from Stereo
                            • Depth Perception from Stereo Simple Model Parallel Optic Axes
                            • Resultant Depth Calculation
                            • Finding Correspondences
                            • 3 Main Matching Methods
                            • Epipolar Geometry Constraint 1 Normal Pair of Images
                            • Epipolar Geometry General Case
                            • Constraints
                            • Structured Light
                            • Structured Light 3D Computation
                            • Depth from Multiple Light Stripes
                            • Our (former) System 4-camera light-striping stereo
                            • Camera Model Recall there are 5 Different Frames of Reference
                            • The Camera Model
                            • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                            • Camera Calibration
                            • Intrinsic Parameters
                            • Extrinsic Parameters
                            • Calibration Object
                            • The Tsai Procedure
                            • We use the camera parameters of each camera for general stereo
                            • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                            • Solve by computing the closest approach of the two skew rays

Depth from Multiple Light Stripes

What are these objects?

Our (former) System: 4-camera light-striping stereo

[Figure: projector and cameras arranged around a rotation table holding the 3D object.]

Camera Model: Recall there are 5 Different Frames of Reference

• Object

• World

• Camera

• Real Image

• Pixel Image

[Figure: pyramid object with object axes (xp, yp, zp), world frame W (xw, yw, zw), camera frame C (xc, yc, zc), and real-image axes (xf, yf); object point A projects to image point a.]

The Camera Model

How do we get an image point IP from a world point P?

    s IPr     c11 c12 c13 c14     Px
    s IPc  =  c21 c22 c23 c24  *  Py
    s         c31 c32 c33  1      Pz
                                   1

    image     camera matrix C     world
    point                         point

What's in C?
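The matrix equation above can be carried out directly: multiply the 3x4 camera matrix by the homogeneous world point, then divide out the scale s. A minimal sketch; the matrix entries below are made-up illustration values, not a real calibration.

```python
def project(C, P):
    """Return image point (IPr, IPc) for world point P = (Px, Py, Pz)."""
    ph = [P[0], P[1], P[2], 1.0]                       # homogeneous world point
    s_r, s_c, s = (sum(C[i][j] * ph[j] for j in range(4)) for i in range(3))
    return s_r / s, s_c / s                            # divide out the scale s

# Example: a camera looking down +z, with element (3,4) = 1 as in the slide.
C = [[1.0, 0.0, 0.0,  0.0],
     [0.0, 1.0, 0.0,  0.0],
     [0.0, 0.0, 0.01, 1.0]]
r, c = project(C, (2.0, 3.0, 10.0))                    # s = 0.01*10 + 1 = 1.1
```

Note that s is not known in advance; it falls out of the third row of the product, which is why the matrix maps homogeneous coordinates to homogeneous coordinates.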

The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates:

1. CP = T R WP
2. FP = pi(f) CP

    s FPx     1  0  0    0      CPx
    s FPy  =  0  1  0    0   *  CPy
    s FPz     0  0  1    0      CPz
    s         0  0  1/f  0       1

    image     perspective        3D point in
    point     transformation     camera coordinates

Why is there not a scale factor here?
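To see why no separate scale factor is needed, multiply the perspective matrix out for one camera point (f and CP below are made-up values): the scale s = CPz / f is produced in the last homogeneous coordinate and is divided away.

```python
# Worked instance of the perspective transformation above.
f = 0.05                                           # focal length (made up)
CP = (2.0, 3.0, 10.0, 1.0)                         # 3D point, camera coordinates
M = [[1.0, 0.0, 0.0,     0.0],
     [0.0, 1.0, 0.0,     0.0],
     [0.0, 0.0, 1.0,     0.0],
     [0.0, 0.0, 1.0 / f, 0.0]]
sFP = [sum(M[i][j] * CP[j] for j in range(4)) for i in range(4)]
s = sFP[3]                                         # s = CPz / f
FP = [v / s for v in sFP[:3]]                      # FPx = f*CPx/CPz, FPz = f
```

Dividing by s gives FPx = f CPx / CPz and FPz = f (the image plane sits at distance f), so the scale is not a free parameter: it is fixed by the point's depth.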

Camera Calibration

• In order to work in 3D, we need to know the parameters of the particular camera setup.

• Solving for the camera parameters is called calibration.

[Figure: world frame W (xw, yw, zw) and camera frame C (xc, yc, zc).]

• intrinsic parameters are of the camera device

• extrinsic parameters are where the camera sits in the world

Intrinsic Parameters

• principal point (u0, v0)

• scale factors (dx, dy)

• aspect ratio distortion factor

• focal length f

• lens distortion factor (models radial lens distortion)

[Figure: camera C with principal point (u0, v0) at focal length f.]

Extrinsic Parameters

• translation parameters t = [tx ty tz]

• rotation matrix

        r11 r12 r13  0
    R = r21 r22 r23  0        Are there really nine parameters?
        r31 r32 r33  0
         0   0   0   1
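The answer is no: R is orthonormal, so three rotation angles determine all nine entries. A small sketch (the angle convention and values here are my own choices for illustration, not the text's):

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(ax, ay, az):
    """Rotation matrix from three angles: rotate about x, then y, then z."""
    cx, sx = math.cos(ax), math.sin(ax)
    cy, sy = math.cos(ay), math.sin(ay)
    cz, sz = math.cos(az), math.sin(az)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    return matmul(Rz, matmul(Ry, Rx))

R = rotation(0.3, -0.2, 1.1)
# Orthonormality: every R built this way satisfies R R^T = I,
# which is exactly the constraint that cuts nine entries down to 3 DOF.
Rt = [[R[j][i] for j in range(3)] for i in range(3)]
RRt = matmul(R, Rt)
```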

Calibration Object

The idea is to snap images at different depths and get a lot of 2D-3D point correspondences.

The Tsai Procedure

• The Tsai procedure was developed by Roger Tsai at IBM Research and is the most widely used.

• Several images are taken of the calibration object, yielding point correspondences at different distances.

• Tsai's algorithm requires n > 5 correspondences

    {((xi, yi, zi), (ui, vi)) | i = 1, …, n}

  between (real) image points and 3D points.

• Lots of details in Chapter 13.

We use the camera parameters of each camera for general stereo.

[Figure: 3D point P projects to P1 = (r1, c1) in image 1 (axes x1, y1; camera matrix B) and to P2 = (r2, c2) in image 2 (axes x2, y2; camera matrix C), with epipoles e1 and e2.]

For a correspondence (r1, c1) in image 1 to (r2, c2) in image 2:

1. Both cameras were calibrated, so both camera matrices B and C are known. From the two camera equations we get 4 linear equations in 3 unknowns:

    r1 = (b11 - b31 r1)x + (b12 - b32 r1)y + (b13 - b33 r1)z
    c1 = (b21 - b31 c1)x + (b22 - b32 c1)y + (b23 - b33 c1)z

    r2 = (c11 - c31 r2)x + (c12 - c32 r2)y + (c13 - c33 r2)z
    c2 = (c21 - c31 c2)x + (c22 - c32 c2)y + (c23 - c33 c2)z

A direct solution using only 3 of the equations won't give reliable results.
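The four equations above form an overdetermined linear system A p = d in the unknown point p = (x, y, z); using all four in a least-squares solve (via the normal equations A^T A p = A^T d) is more reliable. A minimal pure-Python sketch, keeping the constant terms b14, b24 (and the unit (3,4) entry) from the full camera equations; the demo matrices in the test are invented for illustration.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    b = b[:]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))   # pivot row
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for j in range(i, 3):
                A[r][j] -= f * A[i][j]
            b[r] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                    # back substitution
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def triangulate(B, C, r1, c1, r2, c2):
    """Least-squares 3D point from the 4 linear equations of two cameras."""
    rows, rhs = [], []
    for M, (u, v) in ((B, (r1, c1)), (C, (r2, c2))):
        for k, w in ((0, u), (1, v)):
            # (m_k1 - m_31 w)x + (m_k2 - m_32 w)y + (m_k3 - m_33 w)z
            #     = w m_34 - m_k4
            rows.append([M[k][j] - M[2][j] * w for j in range(3)])
            rhs.append(w * M[2][3] - M[k][3])
    # Normal equations: (A^T A) p = A^T d
    ATA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    ATd = [sum(r[i] * h for r, h in zip(rows, rhs)) for i in range(3)]
    return solve3(ATA, ATd)
```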

Solve by computing the closest approach of the two skew rays.

If the rays intersected perfectly in 3D, the intersection would be P. Instead we solve for the shortest line segment V connecting the two rays and let P be its midpoint.

[Figure: rays P1 + a1 u1 and Q1 + a2 u2 with direction vectors u1 and u2; P is the midpoint of the shortest connecting segment V.]

    V = (P1 + a1 u1) - (Q1 + a2 u2)

    ((P1 + a1 u1) - (Q1 + a2 u2)) · u1 = 0
    ((P1 + a1 u1) - (Q1 + a2 u2)) · u2 = 0
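Solving the two perpendicularity conditions above for a1 and a2 gives a closed form. A minimal sketch (the vector helpers and the rays in the test are my own):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def closest_point(P1, u1, Q1, u2):
    """Midpoint of the shortest segment between rays P1 + a1*u1 and Q1 + a2*u2."""
    d = [q - p for p, q in zip(P1, Q1)]                # d = Q1 - P1
    a, b, c = dot(u1, u1), dot(u1, u2), dot(u2, u2)
    e, f = dot(d, u1), dot(d, u2)
    # From V.u1 = 0 and V.u2 = 0:  a*a1 - b*a2 = e,  b*a1 - c*a2 = f
    denom = a * c - b * b                              # zero only for parallel rays
    a1 = (c * e - b * f) / denom
    a2 = (b * e - a * f) / denom
    p = [p1 + a1 * u for p1, u in zip(P1, u1)]         # closest point on ray 1
    q = [q1 + a2 * u for q1, u in zip(Q1, u2)]         # closest point on ray 2
    return [(x + y) / 2.0 for x, y in zip(p, q)]       # midpoint P
```

If the rays do intersect, both closest points coincide and the midpoint is the intersection itself, so the same routine covers the exact and the noisy case.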

                              • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                              • 3D Shape from X means getting 3D coordinates from different methods
                              • Perspective Imaging Model 1D
                              • Perspective in 2D (Simplified)
                              • 3D from Stereo
                              • Depth Perception from Stereo Simple Model Parallel Optic Axes
                              • Resultant Depth Calculation
                              • Finding Correspondences
                              • 3 Main Matching Methods
                              • Epipolar Geometry Constraint 1 Normal Pair of Images
                              • Epipolar Geometry General Case
                              • Constraints
                              • Structured Light
                              • Structured Light 3D Computation
                              • Depth from Multiple Light Stripes
                              • Our (former) System 4-camera light-striping stereo
                              • Camera Model Recall there are 5 Different Frames of Reference
                              • The Camera Model
                              • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                              • Camera Calibration
                              • Intrinsic Parameters
                              • Extrinsic Parameters
                              • Calibration Object
                              • The Tsai Procedure
                              • We use the camera parameters of each camera for general stereo
                              • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                              • Solve by computing the closest approach of the two skew rays
                              • Slide 28
                              • Slide 29
                              • Slide 30
                              • Slide 31
                              • Slide 32
                              • Slide 33
                              • Slide 34
                              • Slide 35
                              • Slide 36
                              • Slide 37
                              • Slide 38
                              • Slide 39
                              • Slide 40

                                Our (former) System4-camera light-striping stereo

                                projector

                                rotationtable

                                cameras

                                3Dobject

                                Camera Model Recall there are 5 Different Frames of Reference

                                bull Object

                                bull World

                                bull Camera

                                bull Real Image

                                bull Pixel Image

                                yc

                                xc

                                zc

                                zwC

                                Wyw

                                xw

                                A

                                a

                                xf

                                yf

                                xp

                                yp

                                zppyramidobject

                                image

                                The Camera Model

                                How do we get an image point IP from a world point P

                                c11 c12 c13 c14

                                c21 c22 c23 c24

                                c31 c32 c33 1

                                s IPr

                                s IPc

                                s

                                Px

                                Py

                                Pz

                                1

                                =

                                imagepoint

                                camera matrix C worldpoint

                                Whatrsquos in C

                                The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

                                1 CP = T R WP2 FP = (f) CP

                                s FPx

                                s FPy

                                s FPz

                                s

                                1 0 0 00 1 0 00 0 1 00 0 1f 0

                                CPx

                                CPy

                                CPz

                                1

                                =

                                perspectivetransformation

                                imagepoint

                                3D point incamera

                                coordinates

                                Why is there not a scale factor here

                                Camera Calibration

                                bull In order work in 3D we need to know the parameters of the particular camera setup

                                bull Solving for the camera parameters is called calibration

                                yw

                                xwzw

                                W

                                yc

                                xc

                                zc

                                C

                                bull intrinsic parameters are of the camera device

                                bull extrinsic parameters are where the camera sits in the world

                                Intrinsic Parameters

                                bull principal point (u0v0)

                                bull scale factors (dxdy)

                                bull aspect ratio distortion factor

                                bull focal length f

                                bull lens distortion factor (models radial lens distortion)

                                C

                                (u0v0)

                                f

                                Extrinsic Parameters

                                bull translation parameters t = [tx ty tz]

                                bull rotation matrix

                                r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                                R = Are there reallynine parameters

                                Calibration Object

                                The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                                The Tsai Procedure

                                bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                                bull Several images are taken of the calibration object yielding point correspondences at different distances

                                bull Tsairsquos algorithm requires n gt 5 correspondences

                                (xi yi zi) (ui vi)) | i = 1hellipn

                                between (real) image points and 3D points

                                bull Lots of details in Chapter 13

                                We use the camera parameters of each camera for general

                                stereoP

                                P1=(r1c1)P2=(r2c2)

                                y1

                                y2

                                x1

                                x2

                                e1

                                e2

                                B

                                C

                                For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                Direct solution uses 3 equations wonrsquot give reliable results

                                Solve by computing the closestapproach of the two skew rays

                                V

                                If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                P1

                                Q1

                                Psolve forshortest

                                V = (P1 + a1u1) ndash (Q1 + a2u2)

                                (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                u1

                                u2

                                • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                • 3D Shape from X means getting 3D coordinates from different methods
                                • Perspective Imaging Model 1D
                                • Perspective in 2D (Simplified)
                                • 3D from Stereo
                                • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                • Resultant Depth Calculation
                                • Finding Correspondences
                                • 3 Main Matching Methods
                                • Epipolar Geometry Constraint 1 Normal Pair of Images
                                • Epipolar Geometry General Case
                                • Constraints
                                • Structured Light
                                • Structured Light 3D Computation
                                • Depth from Multiple Light Stripes
                                • Our (former) System 4-camera light-striping stereo
                                • Camera Model Recall there are 5 Different Frames of Reference
                                • The Camera Model
                                • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                • Camera Calibration
                                • Intrinsic Parameters
                                • Extrinsic Parameters
                                • Calibration Object
                                • The Tsai Procedure
                                • We use the camera parameters of each camera for general stereo
                                • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                • Solve by computing the closest approach of the two skew rays
                                • Slide 28
                                • Slide 29
                                • Slide 30
                                • Slide 31
                                • Slide 32
                                • Slide 33
                                • Slide 34
                                • Slide 35
                                • Slide 36
                                • Slide 37
                                • Slide 38
                                • Slide 39
                                • Slide 40

                                  Camera Model Recall there are 5 Different Frames of Reference

                                  bull Object

                                  bull World

                                  bull Camera

                                  bull Real Image

                                  bull Pixel Image

                                  yc

                                  xc

                                  zc

                                  zwC

                                  Wyw

                                  xw

                                  A

                                  a

                                  xf

                                  yf

                                  xp

                                  yp

                                  zppyramidobject

                                  image

                                  The Camera Model

                                  How do we get an image point IP from a world point P

                                  c11 c12 c13 c14

                                  c21 c22 c23 c24

                                  c31 c32 c33 1

                                  s IPr

                                  s IPc

                                  s

                                  Px

                                  Py

                                  Pz

                                  1

                                  =

                                  imagepoint

                                  camera matrix C worldpoint

                                  Whatrsquos in C

                                  The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

                                  1 CP = T R WP2 FP = (f) CP

                                  s FPx

                                  s FPy

                                  s FPz

                                  s

                                  1 0 0 00 1 0 00 0 1 00 0 1f 0

                                  CPx

                                  CPy

                                  CPz

                                  1

                                  =

                                  perspectivetransformation

                                  imagepoint

                                  3D point incamera

                                  coordinates

                                  Why is there not a scale factor here

                                  Camera Calibration

                                  bull In order work in 3D we need to know the parameters of the particular camera setup

                                  bull Solving for the camera parameters is called calibration

                                  yw

                                  xwzw

                                  W

                                  yc

                                  xc

                                  zc

                                  C

                                  bull intrinsic parameters are of the camera device

                                  bull extrinsic parameters are where the camera sits in the world

                                  Intrinsic Parameters

                                  bull principal point (u0v0)

                                  bull scale factors (dxdy)

                                  bull aspect ratio distortion factor

                                  bull focal length f

                                  bull lens distortion factor (models radial lens distortion)

                                  C

                                  (u0v0)

                                  f

                                  Extrinsic Parameters

                                  bull translation parameters t = [tx ty tz]

                                  bull rotation matrix

                                  r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                                  R = Are there reallynine parameters

                                  Calibration Object

                                  The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                                  The Tsai Procedure

                                  bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                                  bull Several images are taken of the calibration object yielding point correspondences at different distances

                                  bull Tsairsquos algorithm requires n gt 5 correspondences

                                  (xi yi zi) (ui vi)) | i = 1hellipn

                                  between (real) image points and 3D points

                                  bull Lots of details in Chapter 13

                                  We use the camera parameters of each camera for general

                                  stereoP

                                  P1=(r1c1)P2=(r2c2)

                                  y1

                                  y2

                                  x1

                                  x2

                                  e1

                                  e2

                                  B

                                  C

                                  For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                  1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                  r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                  r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                  Direct solution uses 3 equations wonrsquot give reliable results

                                  Solve by computing the closestapproach of the two skew rays

                                  V

                                  If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                  P1

                                  Q1

                                  Psolve forshortest

                                  V = (P1 + a1u1) ndash (Q1 + a2u2)

                                  (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                  u1

                                  u2

                                  • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                  • 3D Shape from X means getting 3D coordinates from different methods
                                  • Perspective Imaging Model 1D
                                  • Perspective in 2D (Simplified)
                                  • 3D from Stereo
                                  • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                  • Resultant Depth Calculation
                                  • Finding Correspondences
                                  • 3 Main Matching Methods
                                  • Epipolar Geometry Constraint 1 Normal Pair of Images
                                  • Epipolar Geometry General Case
                                  • Constraints
                                  • Structured Light
                                  • Structured Light 3D Computation
                                  • Depth from Multiple Light Stripes
                                  • Our (former) System 4-camera light-striping stereo
                                  • Camera Model Recall there are 5 Different Frames of Reference
                                  • The Camera Model
                                  • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                  • Camera Calibration
                                  • Intrinsic Parameters
                                  • Extrinsic Parameters
                                  • Calibration Object
                                  • The Tsai Procedure
                                  • We use the camera parameters of each camera for general stereo
                                  • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                  • Solve by computing the closest approach of the two skew rays
                                  • Slide 28
                                  • Slide 29
                                  • Slide 30
                                  • Slide 31
                                  • Slide 32
                                  • Slide 33
                                  • Slide 34
                                  • Slide 35
                                  • Slide 36
                                  • Slide 37
                                  • Slide 38
                                  • Slide 39
                                  • Slide 40

                                    The Camera Model

                                    How do we get an image point IP from a world point P

                                    c11 c12 c13 c14

                                    c21 c22 c23 c24

                                    c31 c32 c33 1

                                    s IPr

                                    s IPc

                                    s

                                    Px

                                    Py

                                    Pz

                                    1

                                    =

                                    imagepoint

                                    camera matrix C worldpoint

                                    Whatrsquos in C

                                    The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

                                    1 CP = T R WP2 FP = (f) CP

                                    s FPx

                                    s FPy

                                    s FPz

                                    s

                                    1 0 0 00 1 0 00 0 1 00 0 1f 0

                                    CPx

                                    CPy

                                    CPz

                                    1

                                    =

                                    perspectivetransformation

                                    imagepoint

                                    3D point incamera

                                    coordinates

                                    Why is there not a scale factor here

                                    Camera Calibration

                                    • In order to work in 3D, we need to know the parameters of the particular camera setup.

                                    • Solving for the camera parameters is called calibration.

                                    [Figure: world frame W with axes (xw, yw, zw) and camera frame C with axes (xc, yc, zc)]

                                    • intrinsic parameters are of the camera device

                                    • extrinsic parameters are where the camera sits in the world

                                    Intrinsic Parameters

                                    • principal point (u0, v0)

                                    • scale factors (dx, dy)

                                    • aspect ratio distortion factor

                                    • focal length f

                                    • lens distortion factor (models radial lens distortion)

                                    [Figure: camera C with principal point (u0, v0) and focal length f]

                                    Extrinsic Parameters

                                    • translation parameters t = [tx ty tz]

                                    • rotation matrix

                                          [ r11 r12 r13 0 ]
                                    R  =  [ r21 r22 r23 0 ]
                                          [ r31 r32 r33 0 ]
                                          [ 0   0   0   1 ]

                                    Are there really nine parameters?
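The answer to the question above is no: the nine entries r11..r33 are generated by just three rotation angles. A minimal sketch (the composition order Rz·Ry·Rx is one common convention, not something fixed by the slides):

```python
import numpy as np

def rotation_matrix(ax, ay, az):
    """3x3 rotation matrix from three angles (radians) about the
    x, y, and z axes, composed as Rz @ Ry @ Rx. Nine entries,
    but only three independent parameters."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R = rotation_matrix(0.1, -0.3, 0.7)
# Orthonormality constraints remove the other six degrees of freedom:
# R.T @ R is the identity and det(R) = 1.
```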

                                    Calibration Object

                                    The idea is to snap images at different depths and get a lot of 2D-3D point correspondences.

                                    The Tsai Procedure

                                    • The Tsai procedure was developed by Roger Tsai at IBM Research and is the most widely used.

                                    • Several images are taken of the calibration object, yielding point correspondences at different distances.

                                    • Tsai's algorithm requires n > 5 correspondences

                                      {((xi, yi, zi), (ui, vi)) | i = 1, …, n}

                                      between (real) image points and 3D points.

                                    • Lots of details in Chapter 13.
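Tsai's full procedure is staged and also recovers radial lens distortion; as a hedged illustration of just the linear core, here is a least-squares solve for the 11 unknown camera-matrix entries (with c34 fixed to 1) from n ≥ 6 correspondences. The function name and structure are my own, not from the text.

```python
import numpy as np

def calibrate_linear(world_pts, image_pts):
    """Linear least-squares estimate of the 11 unknown camera-matrix
    entries (c34 fixed to 1) from n >= 6 correspondences. Each pair
    ((x, y, z), (u, v)) contributes two equations obtained by
    clearing the denominator s = c31*x + c32*y + c33*z + 1.
    Note: this is only the linear core, not Tsai's full staged
    procedure, which also models radial lens distortion."""
    rows, rhs = [], []
    for (x, y, z), (u, v) in zip(world_pts, image_pts):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        rhs.append(v)
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    # params = [c11..c14, c21..c24, c31, c32, c33]; append c34 = 1
    return np.append(params, 1.0).reshape(3, 4)
```

With noise-free, non-coplanar calibration points this recovers the camera matrix exactly; with real measurements it returns the least-squares fit.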

                                    We use the camera parameters of each camera for general stereo.

                                    [Figure: 3D point P projects to P1 = (r1, c1) in image 1 and P2 = (r2, c2) in image 2; the two cameras have matrices B and C and epipoles e1, e2]

                                    For a correspondence (r1, c1) in image 1 to (r2, c2) in image 2:

                                    1. Both cameras were calibrated, so both camera matrices B and C are known. From the two camera equations we get 4 linear equations in 3 unknowns:

                                    r1 - b14 = (b11 - b31·r1)x + (b12 - b32·r1)y + (b13 - b33·r1)z
                                    c1 - b24 = (b21 - b31·c1)x + (b22 - b32·c1)y + (b23 - b33·c1)z

                                    r2 - c14 = (c11 - c31·r2)x + (c12 - c32·r2)y + (c13 - c33·r2)z
                                    c2 - c24 = (c21 - c31·c2)x + (c22 - c32·c2)y + (c23 - c33·c2)z

                                    A direct solution using only 3 of the equations won't give reliable results.
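A minimal least-squares sketch of solving those 4 equations in the 3 unknowns (x, y, z), assuming 3x4 camera matrices whose bottom-right entry is 1; the function name and layout are mine:

```python
import numpy as np

def triangulate(B, C, p1, p2):
    """Least-squares 3D point (x, y, z) from a correspondence
    (r1, c1) in image 1 and (r2, c2) in image 2, given 3x4 camera
    matrices B and C (bottom-right entry of each is 1). Builds the
    4 linear equations in the 3 unknowns."""
    r1, c1 = p1
    r2, c2 = p2
    A = np.array([
        [B[0, 0] - B[2, 0] * r1, B[0, 1] - B[2, 1] * r1, B[0, 2] - B[2, 2] * r1],
        [B[1, 0] - B[2, 0] * c1, B[1, 1] - B[2, 1] * c1, B[1, 2] - B[2, 2] * c1],
        [C[0, 0] - C[2, 0] * r2, C[0, 1] - C[2, 1] * r2, C[0, 2] - C[2, 2] * r2],
        [C[1, 0] - C[2, 0] * c2, C[1, 1] - C[2, 1] * c2, C[1, 2] - C[2, 2] * c2],
    ])
    b = np.array([r1 - B[0, 3], c1 - B[1, 3], r2 - C[0, 3], c2 - C[1, 3]])
    xyz, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xyz
```

Using all four equations in a least-squares sense averages out measurement error instead of trusting any three of them exactly.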

                                    Solve by computing the closest approach of the two skew rays.

                                    [Figure: rays P1 + a1·u1 and Q1 + a2·u2 with direction vectors u1, u2; V is the shortest connecting segment and P its midpoint]

                                    If the rays intersected perfectly in 3D, the intersection would be P. Instead we solve for the shortest line segment connecting the two rays and let P be its midpoint.

                                    V = (P1 + a1·u1) - (Q1 + a2·u2)

                                    ((P1 + a1·u1) - (Q1 + a2·u2)) · u1 = 0
                                    ((P1 + a1·u1) - (Q1 + a2·u2)) · u2 = 0
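The two perpendicularity conditions give a 2x2 linear system in a1 and a2; solving it and averaging the two closest points yields P. A sketch (variable names are mine; the system is singular if the rays are parallel):

```python
import numpy as np

def ray_midpoint(P1, u1, Q1, u2):
    """Closest approach of two skew rays P1 + a1*u1 and Q1 + a2*u2.
    The connecting segment V must be perpendicular to both direction
    vectors, giving a 2x2 linear system in (a1, a2). Returns the
    midpoint P of the shortest connecting segment."""
    d = Q1 - P1
    # From V . u1 = 0 and V . u2 = 0, with V = -d + a1*u1 - a2*u2:
    A = np.array([[u1 @ u1, -(u2 @ u1)],
                  [u1 @ u2, -(u2 @ u2)]])
    b = np.array([d @ u1, d @ u2])
    a1, a2 = np.linalg.solve(A, b)
    return ((P1 + a1 * u1) + (Q1 + a2 * u2)) / 2.0

# Two skew rays whose closest points are (0,0,0) and (0,0,1):
P = ray_midpoint(np.zeros(3), np.array([1., 0., 0.]),
                 np.array([0., 0., 1.]), np.array([0., 1., 0.]))
# P is the midpoint [0, 0, 0.5]
```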

                                    • 3D Sensing and Reconstruction. Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4
                                    • 3D Shape from X means getting 3D coordinates from different methods
                                    • Perspective Imaging Model 1D
                                    • Perspective in 2D (Simplified)
                                    • 3D from Stereo
                                    • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                    • Resultant Depth Calculation
                                    • Finding Correspondences
                                    • 3 Main Matching Methods
                                    • Epipolar Geometry Constraint 1 Normal Pair of Images
                                    • Epipolar Geometry General Case
                                    • Constraints
                                    • Structured Light
                                    • Structured Light 3D Computation
                                    • Depth from Multiple Light Stripes
                                    • Our (former) System 4-camera light-striping stereo
                                    • Camera Model Recall there are 5 Different Frames of Reference
                                    • The Camera Model
                                    • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                    • Camera Calibration
                                    • Intrinsic Parameters
                                    • Extrinsic Parameters
                                    • Calibration Object
                                    • The Tsai Procedure
                                    • We use the camera parameters of each camera for general stereo
                                    • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                    • Solve by computing the closest approach of the two skew rays

                                      The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates

                                      1 CP = T R WP2 FP = (f) CP

                                      s FPx

                                      s FPy

                                      s FPz

                                      s

                                      1 0 0 00 1 0 00 0 1 00 0 1f 0

                                      CPx

                                      CPy

                                      CPz

                                      1

                                      =

                                      perspectivetransformation

                                      imagepoint

                                      3D point incamera

                                      coordinates

                                      Why is there not a scale factor here

                                      Camera Calibration

                                      bull In order work in 3D we need to know the parameters of the particular camera setup

                                      bull Solving for the camera parameters is called calibration

                                      yw

                                      xwzw

                                      W

                                      yc

                                      xc

                                      zc

                                      C

                                      bull intrinsic parameters are of the camera device

                                      bull extrinsic parameters are where the camera sits in the world

                                      Intrinsic Parameters

                                      bull principal point (u0v0)

                                      bull scale factors (dxdy)

                                      bull aspect ratio distortion factor

                                      bull focal length f

                                      bull lens distortion factor (models radial lens distortion)

                                      C

                                      (u0v0)

                                      f

                                      Extrinsic Parameters

                                      bull translation parameters t = [tx ty tz]

                                      bull rotation matrix

                                      r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                                      R = Are there reallynine parameters

                                      Calibration Object

                                      The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                                      The Tsai Procedure

                                      bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                                      bull Several images are taken of the calibration object yielding point correspondences at different distances

                                      bull Tsairsquos algorithm requires n gt 5 correspondences

                                      (xi yi zi) (ui vi)) | i = 1hellipn

                                      between (real) image points and 3D points

                                      bull Lots of details in Chapter 13

                                      We use the camera parameters of each camera for general

                                      stereoP

                                      P1=(r1c1)P2=(r2c2)

                                      y1

                                      y2

                                      x1

                                      x2

                                      e1

                                      e2

                                      B

                                      C

                                      For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                      1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                      r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                      r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                      Direct solution uses 3 equations wonrsquot give reliable results

                                      Solve by computing the closestapproach of the two skew rays

                                      V

                                      If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                      P1

                                      Q1

                                      Psolve forshortest

                                      V = (P1 + a1u1) ndash (Q1 + a2u2)

                                      (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                      u1

                                      u2

                                      • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                      • 3D Shape from X means getting 3D coordinates from different methods
                                      • Perspective Imaging Model 1D
                                      • Perspective in 2D (Simplified)
                                      • 3D from Stereo
                                      • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                      • Resultant Depth Calculation
                                      • Finding Correspondences
                                      • 3 Main Matching Methods
                                      • Epipolar Geometry Constraint 1 Normal Pair of Images
                                      • Epipolar Geometry General Case
                                      • Constraints
                                      • Structured Light
                                      • Structured Light 3D Computation
                                      • Depth from Multiple Light Stripes
                                      • Our (former) System 4-camera light-striping stereo
                                      • Camera Model Recall there are 5 Different Frames of Reference
                                      • The Camera Model
                                      • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                      • Camera Calibration
                                      • Intrinsic Parameters
                                      • Extrinsic Parameters
                                      • Calibration Object
                                      • The Tsai Procedure
                                      • We use the camera parameters of each camera for general stereo
                                      • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                      • Solve by computing the closest approach of the two skew rays
                                      • Slide 28
                                      • Slide 29
                                      • Slide 30
                                      • Slide 31
                                      • Slide 32
                                      • Slide 33
                                      • Slide 34
                                      • Slide 35
                                      • Slide 36
                                      • Slide 37
                                      • Slide 38
                                      • Slide 39
                                      • Slide 40

                                        Camera Calibration

                                        bull In order work in 3D we need to know the parameters of the particular camera setup

                                        bull Solving for the camera parameters is called calibration

                                        yw

                                        xwzw

                                        W

                                        yc

                                        xc

                                        zc

                                        C

                                        bull intrinsic parameters are of the camera device

                                        bull extrinsic parameters are where the camera sits in the world

                                        Intrinsic Parameters

                                        bull principal point (u0v0)

                                        bull scale factors (dxdy)

                                        bull aspect ratio distortion factor

                                        bull focal length f

                                        bull lens distortion factor (models radial lens distortion)

                                        C

                                        (u0v0)

                                        f

                                        Extrinsic Parameters

                                        bull translation parameters t = [tx ty tz]

                                        bull rotation matrix

                                        r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                                        R = Are there reallynine parameters

                                        Calibration Object

                                        The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                                        The Tsai Procedure

                                        bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                                        bull Several images are taken of the calibration object yielding point correspondences at different distances

                                        bull Tsairsquos algorithm requires n gt 5 correspondences

                                        (xi yi zi) (ui vi)) | i = 1hellipn

                                        between (real) image points and 3D points

                                        bull Lots of details in Chapter 13

                                        We use the camera parameters of each camera for general

                                        stereoP

                                        P1=(r1c1)P2=(r2c2)

                                        y1

                                        y2

                                        x1

                                        x2

                                        e1

                                        e2

                                        B

                                        C

                                        For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                        1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                        r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                        r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                        Direct solution uses 3 equations wonrsquot give reliable results

                                        Solve by computing the closestapproach of the two skew rays

                                        V

                                        If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                        P1

                                        Q1

                                        Psolve forshortest

                                        V = (P1 + a1u1) ndash (Q1 + a2u2)

                                        (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                        u1

                                        u2

                                        • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                        • 3D Shape from X means getting 3D coordinates from different methods
                                        • Perspective Imaging Model 1D
                                        • Perspective in 2D (Simplified)
                                        • 3D from Stereo
                                        • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                        • Resultant Depth Calculation
                                        • Finding Correspondences
                                        • 3 Main Matching Methods
                                        • Epipolar Geometry Constraint 1 Normal Pair of Images
                                        • Epipolar Geometry General Case
                                        • Constraints
                                        • Structured Light
                                        • Structured Light 3D Computation
                                        • Depth from Multiple Light Stripes
                                        • Our (former) System 4-camera light-striping stereo
                                        • Camera Model Recall there are 5 Different Frames of Reference
                                        • The Camera Model
                                        • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                        • Camera Calibration
                                        • Intrinsic Parameters
                                        • Extrinsic Parameters
                                        • Calibration Object
                                        • The Tsai Procedure
                                        • We use the camera parameters of each camera for general stereo
                                        • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                        • Solve by computing the closest approach of the two skew rays
                                        • Slide 28
                                        • Slide 29
                                        • Slide 30
                                        • Slide 31
                                        • Slide 32
                                        • Slide 33
                                        • Slide 34
                                        • Slide 35
                                        • Slide 36
                                        • Slide 37
                                        • Slide 38
                                        • Slide 39
                                        • Slide 40

                                          Intrinsic Parameters

                                          bull principal point (u0v0)

                                          bull scale factors (dxdy)

                                          bull aspect ratio distortion factor

                                          bull focal length f

                                          bull lens distortion factor (models radial lens distortion)

                                          C

                                          (u0v0)

                                          f

                                          Extrinsic Parameters

                                          bull translation parameters t = [tx ty tz]

                                          bull rotation matrix

                                          r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                                          R = Are there reallynine parameters

                                          Calibration Object

                                          The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                                          The Tsai Procedure

                                          bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                                          bull Several images are taken of the calibration object yielding point correspondences at different distances

                                          bull Tsairsquos algorithm requires n gt 5 correspondences

                                          (xi yi zi) (ui vi)) | i = 1hellipn

                                          between (real) image points and 3D points

                                          bull Lots of details in Chapter 13

                                          We use the camera parameters of each camera for general

                                          stereoP

                                          P1=(r1c1)P2=(r2c2)

                                          y1

                                          y2

                                          x1

                                          x2

                                          e1

                                          e2

                                          B

                                          C

                                          For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                          1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                          r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                          r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                          Direct solution uses 3 equations wonrsquot give reliable results

                                          Solve by computing the closestapproach of the two skew rays

                                          V

                                          If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                          P1

                                          Q1

                                          Psolve forshortest

                                          V = (P1 + a1u1) ndash (Q1 + a2u2)

                                          (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                          u1

                                          u2

                                          • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                          • 3D Shape from X means getting 3D coordinates from different methods
                                          • Perspective Imaging Model 1D
                                          • Perspective in 2D (Simplified)
                                          • 3D from Stereo
                                          • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                          • Resultant Depth Calculation
                                          • Finding Correspondences
                                          • 3 Main Matching Methods
                                          • Epipolar Geometry Constraint 1 Normal Pair of Images
                                          • Epipolar Geometry General Case
                                          • Constraints
                                          • Structured Light
                                          • Structured Light 3D Computation
                                          • Depth from Multiple Light Stripes
                                          • Our (former) System 4-camera light-striping stereo
                                          • Camera Model Recall there are 5 Different Frames of Reference
                                          • The Camera Model
                                          • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                          • Camera Calibration
                                          • Intrinsic Parameters
                                          • Extrinsic Parameters
                                          • Calibration Object
                                          • The Tsai Procedure
                                          • We use the camera parameters of each camera for general stereo
                                          • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                          • Solve by computing the closest approach of the two skew rays
                                          • Slide 28
                                          • Slide 29
                                          • Slide 30
                                          • Slide 31
                                          • Slide 32
                                          • Slide 33
                                          • Slide 34
                                          • Slide 35
                                          • Slide 36
                                          • Slide 37
                                          • Slide 38
                                          • Slide 39
                                          • Slide 40

                                            Extrinsic Parameters

                                            bull translation parameters t = [tx ty tz]

                                            bull rotation matrix

                                            r11 r12 r13 0r21 r22 r23 0r31 r32 r33 00 0 0 1

                                            R = Are there reallynine parameters

                                            Calibration Object

                                            The idea is to snapimages at differentdepths and get alot of 2D-3D pointcorrespondences

                                            The Tsai Procedure

                                            bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                                            bull Several images are taken of the calibration object yielding point correspondences at different distances

                                            bull Tsairsquos algorithm requires n gt 5 correspondences

                                            (xi yi zi) (ui vi)) | i = 1hellipn

                                            between (real) image points and 3D points

                                            bull Lots of details in Chapter 13

                                            We use the camera parameters of each camera for general

                                            stereoP

                                            P1=(r1c1)P2=(r2c2)

                                            y1

                                            y2

                                            x1

                                            x2

                                            e1

                                            e2

                                            B

                                            C

                                            For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                            1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                            r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                            r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                            Direct solution uses 3 equations wonrsquot give reliable results

                                            Solve by computing the closest approach of the two skew rays.

                                            If the rays intersected perfectly in 3D, the intersection would be P. Instead we solve for the shortest line segment V connecting the two rays and let P be its midpoint.

                                            [Figure: ray 1 starts at P1 with direction u1; ray 2 starts at Q1 with direction u2; the segment V joins the two closest points, and P is its midpoint.]

                                            V = (P1 + a1*u1) - (Q1 + a2*u2)

                                            ((P1 + a1*u1) - (Q1 + a2*u2)) . u1 = 0
                                            ((P1 + a1*u1) - (Q1 + a2*u2)) . u2 = 0
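The two perpendicularity conditions above are a 2x2 linear system in a1 and a2; solving it and taking the midpoint of the resulting segment can be sketched as follows (the function name is hypothetical):

```python
import numpy as np

def closest_point(P1, u1, Q1, u2):
    """Midpoint of the shortest segment connecting the two rays
    P1 + a1*u1 and Q1 + a2*u2.  Solves the perpendicularity
    conditions V . u1 = 0 and V . u2 = 0 as a 2x2 system."""
    d = P1 - Q1
    # Expanding V . u1 = 0 and V . u2 = 0 with V = d + a1*u1 - a2*u2:
    A = np.array([[np.dot(u1, u1), -np.dot(u2, u1)],
                  [np.dot(u1, u2), -np.dot(u2, u2)]])
    b = -np.array([np.dot(d, u1), np.dot(d, u2)])
    a1, a2 = np.linalg.solve(A, b)   # singular if the rays are parallel
    closest_on_ray1 = P1 + a1 * u1
    closest_on_ray2 = Q1 + a2 * u2
    return 0.5 * (closest_on_ray1 + closest_on_ray2)
```

When the rays do intersect, both closest points coincide and the midpoint is the exact intersection; with skew rays it is the P the slide describes.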

                                            • 3D Sensing and Reconstruction. Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4
                                            • 3D Shape from X means getting 3D coordinates from different methods
                                            • Perspective Imaging Model: 1D
                                            • Perspective in 2D (Simplified)
                                            • 3D from Stereo
                                            • Depth Perception from Stereo: Simple Model, Parallel Optic Axes
                                            • Resultant Depth Calculation
                                            • Finding Correspondences
                                            • 3 Main Matching Methods
                                            • Epipolar Geometry Constraint 1: Normal Pair of Images
                                            • Epipolar Geometry: General Case
                                            • Constraints
                                            • Structured Light
                                            • Structured Light 3D Computation
                                            • Depth from Multiple Light Stripes
                                            • Our (former) System: 4-camera light-striping stereo
                                            • Camera Model: Recall there are 5 Different Frames of Reference
                                            • The Camera Model
                                            • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                            • Camera Calibration
                                            • Intrinsic Parameters
                                            • Extrinsic Parameters
                                            • Calibration Object
                                            • The Tsai Procedure
                                            • We use the camera parameters of each camera for general stereo
                                            • For a correspondence (r1, c1) in image 1 to (r2, c2) in image 2
                                            • Solve by computing the closest approach of the two skew rays

                                              Calibration Object

                                              The idea is to snap images at different depths and get a lot of 2D-3D point correspondences.

                                              The Tsai Procedure

                                              bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                                              bull Several images are taken of the calibration object yielding point correspondences at different distances

                                              bull Tsairsquos algorithm requires n gt 5 correspondences

                                              (xi yi zi) (ui vi)) | i = 1hellipn

                                              between (real) image points and 3D points

                                              bull Lots of details in Chapter 13

                                              We use the camera parameters of each camera for general

                                              stereoP

                                              P1=(r1c1)P2=(r2c2)

                                              y1

                                              y2

                                              x1

                                              x2

                                              e1

                                              e2

                                              B

                                              C

                                              For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                              1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                              r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                              r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                              Direct solution uses 3 equations wonrsquot give reliable results

                                              Solve by computing the closestapproach of the two skew rays

                                              V

                                              If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                              P1

                                              Q1

                                              Psolve forshortest

                                              V = (P1 + a1u1) ndash (Q1 + a2u2)

                                              (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                              u1

                                              u2

                                              • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                              • 3D Shape from X means getting 3D coordinates from different methods
                                              • Perspective Imaging Model 1D
                                              • Perspective in 2D (Simplified)
                                              • 3D from Stereo
                                              • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                              • Resultant Depth Calculation
                                              • Finding Correspondences
                                              • 3 Main Matching Methods
                                              • Epipolar Geometry Constraint 1 Normal Pair of Images
                                              • Epipolar Geometry General Case
                                              • Constraints
                                              • Structured Light
                                              • Structured Light 3D Computation
                                              • Depth from Multiple Light Stripes
                                              • Our (former) System 4-camera light-striping stereo
                                              • Camera Model Recall there are 5 Different Frames of Reference
                                              • The Camera Model
                                              • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                              • Camera Calibration
                                              • Intrinsic Parameters
                                              • Extrinsic Parameters
                                              • Calibration Object
                                              • The Tsai Procedure
                                              • We use the camera parameters of each camera for general stereo
                                              • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                              • Solve by computing the closest approach of the two skew rays
                                              • Slide 28
                                              • Slide 29
                                              • Slide 30
                                              • Slide 31
                                              • Slide 32
                                              • Slide 33
                                              • Slide 34
                                              • Slide 35
                                              • Slide 36
                                              • Slide 37
                                              • Slide 38
                                              • Slide 39
                                              • Slide 40

                                                The Tsai Procedure

                                                bull The Tsai procedure was developed by Roger Tsai at IBM Research and is most widely used

                                                bull Several images are taken of the calibration object yielding point correspondences at different distances

                                                bull Tsairsquos algorithm requires n gt 5 correspondences

                                                (xi yi zi) (ui vi)) | i = 1hellipn

                                                between (real) image points and 3D points

                                                bull Lots of details in Chapter 13

                                                We use the camera parameters of each camera for general

                                                stereoP

                                                P1=(r1c1)P2=(r2c2)

                                                y1

                                                y2

                                                x1

                                                x2

                                                e1

                                                e2

                                                B

                                                C

                                                For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                                1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                                r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                                r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                                Direct solution uses 3 equations wonrsquot give reliable results

                                                Solve by computing the closestapproach of the two skew rays

                                                V

                                                If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                                P1

                                                Q1

                                                Psolve forshortest

                                                V = (P1 + a1u1) ndash (Q1 + a2u2)

                                                (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                                u1

                                                u2

                                                • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                                • 3D Shape from X means getting 3D coordinates from different methods
                                                • Perspective Imaging Model 1D
                                                • Perspective in 2D (Simplified)
                                                • 3D from Stereo
                                                • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                                • Resultant Depth Calculation
                                                • Finding Correspondences
                                                • 3 Main Matching Methods
                                                • Epipolar Geometry Constraint 1 Normal Pair of Images
                                                • Epipolar Geometry General Case
                                                • Constraints
                                                • Structured Light
                                                • Structured Light 3D Computation
                                                • Depth from Multiple Light Stripes
                                                • Our (former) System 4-camera light-striping stereo
                                                • Camera Model Recall there are 5 Different Frames of Reference
                                                • The Camera Model
                                                • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                                • Camera Calibration
                                                • Intrinsic Parameters
                                                • Extrinsic Parameters
                                                • Calibration Object
                                                • The Tsai Procedure
                                                • We use the camera parameters of each camera for general stereo
                                                • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                                • Solve by computing the closest approach of the two skew rays
                                                • Slide 28
                                                • Slide 29
                                                • Slide 30
                                                • Slide 31
                                                • Slide 32
                                                • Slide 33
                                                • Slide 34
                                                • Slide 35
                                                • Slide 36
                                                • Slide 37
                                                • Slide 38
                                                • Slide 39
                                                • Slide 40

                                                  We use the camera parameters of each camera for general

                                                  stereoP

                                                  P1=(r1c1)P2=(r2c2)

                                                  y1

                                                  y2

                                                  x1

                                                  x2

                                                  e1

                                                  e2

                                                  B

                                                  C

                                                  For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                                  1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                                  r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                                  r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                                  Direct solution uses 3 equations wonrsquot give reliable results

                                                  Solve by computing the closestapproach of the two skew rays

                                                  V

                                                  If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                                  P1

                                                  Q1

                                                  Psolve forshortest

                                                  V = (P1 + a1u1) ndash (Q1 + a2u2)

                                                  (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                                  u1

                                                  u2

                                                  • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                                  • 3D Shape from X means getting 3D coordinates from different methods
                                                  • Perspective Imaging Model 1D
                                                  • Perspective in 2D (Simplified)
                                                  • 3D from Stereo
                                                  • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                                  • Resultant Depth Calculation
                                                  • Finding Correspondences
                                                  • 3 Main Matching Methods
                                                  • Epipolar Geometry Constraint 1 Normal Pair of Images
                                                  • Epipolar Geometry General Case
                                                  • Constraints
                                                  • Structured Light
                                                  • Structured Light 3D Computation
                                                  • Depth from Multiple Light Stripes
                                                  • Our (former) System 4-camera light-striping stereo
                                                  • Camera Model Recall there are 5 Different Frames of Reference
                                                  • The Camera Model
                                                  • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                                  • Camera Calibration
                                                  • Intrinsic Parameters
                                                  • Extrinsic Parameters
                                                  • Calibration Object
                                                  • The Tsai Procedure
                                                  • We use the camera parameters of each camera for general stereo
                                                  • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                                  • Solve by computing the closest approach of the two skew rays
                                                  • Slide 28
                                                  • Slide 29
                                                  • Slide 30
                                                  • Slide 31
                                                  • Slide 32
                                                  • Slide 33
                                                  • Slide 34
                                                  • Slide 35
                                                  • Slide 36
                                                  • Slide 37
                                                  • Slide 38
                                                  • Slide 39
                                                  • Slide 40

                                                    For a correspondence (r1c1) inimage 1 to (r2c2) in image 2

                                                    1 Both cameras were calibrated Both camera matrices are then known From the two camera equations B and C we get 4 linear equations in 3 unknowns

                                                    r1 = (b11 - b31r1)x + (b12 - b32r1)y + (b13-b33r1)zc1 = (b21 - b31c1)x + (b22 - b32c1)y + (b23-b33c1)z

                                                    r2 = (c11 - c31r2)x + (c12 - c32r2)y + (c13 - c33r2)zc2 = (c21 - c31c2)x + (c22 - c32c2)y + (c23 - c33c2)z

                                                    Direct solution uses 3 equations wonrsquot give reliable results

                                                    Solve by computing the closestapproach of the two skew rays

                                                    V

                                                    If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                                    P1

                                                    Q1

                                                    Psolve forshortest

                                                    V = (P1 + a1u1) ndash (Q1 + a2u2)

                                                    (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                                    u1

                                                    u2

                                                    • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                                    • 3D Shape from X means getting 3D coordinates from different methods
                                                    • Perspective Imaging Model 1D
                                                    • Perspective in 2D (Simplified)
                                                    • 3D from Stereo
                                                    • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                                    • Resultant Depth Calculation
                                                    • Finding Correspondences
                                                    • 3 Main Matching Methods
                                                    • Epipolar Geometry Constraint 1 Normal Pair of Images
                                                    • Epipolar Geometry General Case
                                                    • Constraints
                                                    • Structured Light
                                                    • Structured Light 3D Computation
                                                    • Depth from Multiple Light Stripes
                                                    • Our (former) System 4-camera light-striping stereo
                                                    • Camera Model Recall there are 5 Different Frames of Reference
                                                    • The Camera Model
                                                    • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                                    • Camera Calibration
                                                    • Intrinsic Parameters
                                                    • Extrinsic Parameters
                                                    • Calibration Object
                                                    • The Tsai Procedure
                                                    • We use the camera parameters of each camera for general stereo
                                                    • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                                    • Solve by computing the closest approach of the two skew rays
                                                    • Slide 28
                                                    • Slide 29
                                                    • Slide 30
                                                    • Slide 31
                                                    • Slide 32
                                                    • Slide 33
                                                    • Slide 34
                                                    • Slide 35
                                                    • Slide 36
                                                    • Slide 37
                                                    • Slide 38
                                                    • Slide 39
                                                    • Slide 40

                                                      Solve by computing the closestapproach of the two skew rays

                                                      V

                                                      If the rays intersected perfectly in 3D the intersection would be PInstead we solve for the shortest line segment connecting the two rays and let P be its midpoint

                                                      P1

                                                      Q1

                                                      Psolve forshortest

                                                      V = (P1 + a1u1) ndash (Q1 + a2u2)

                                                      (P1 + a1u1) ndash (Q1 + a2u2) u1 = 0(P1 + a1u1) ndash (Q1 + a2u2) u2 = 0

                                                      u1

                                                      u2

                                                      • 3D Sensing and Reconstruction Readings Ch 12 125-6 Ch 13 131-3 1394
                                                      • 3D Shape from X means getting 3D coordinates from different methods
                                                      • Perspective Imaging Model 1D
                                                      • Perspective in 2D (Simplified)
                                                      • 3D from Stereo
                                                      • Depth Perception from Stereo Simple Model Parallel Optic Axes
                                                      • Resultant Depth Calculation
                                                      • Finding Correspondences
                                                      • 3 Main Matching Methods
                                                      • Epipolar Geometry Constraint 1 Normal Pair of Images
                                                      • Epipolar Geometry General Case
                                                      • Constraints
                                                      • Structured Light
                                                      • Structured Light 3D Computation
                                                      • Depth from Multiple Light Stripes
                                                      • Our (former) System 4-camera light-striping stereo
                                                      • Camera Model Recall there are 5 Different Frames of Reference
                                                      • The Camera Model
                                                      • The camera model handles the rigid body transformation from world coordinates to camera coordinates plus the perspective transformation to image coordinates
                                                      • Camera Calibration
                                                      • Intrinsic Parameters
                                                      • Extrinsic Parameters
                                                      • Calibration Object
                                                      • The Tsai Procedure
                                                      • We use the camera parameters of each camera for general stereo
                                                      • For a correspondence (r1c1) in image 1 to (r2c2) in image 2
                                                      • Solve by computing the closest approach of the two skew rays
                                                      • Slide 28
                                                      • Slide 29
                                                      • Slide 30
                                                      • Slide 31
                                                      • Slide 32
                                                      • Slide 33
                                                      • Slide 34
                                                      • Slide 35
                                                      • Slide 36
                                                      • Slide 37
                                                      • Slide 38
                                                      • Slide 39
                                                      • Slide 40
