Supplemental Material for "Depth from Gradients in Dense Light Fields for Object Reconstruction"

Kaan Yücer1,2   Changil Kim1   Alexander Sorkine-Hornung2   Olga Sorkine-Hornung1

1ETH Zurich   2Disney Research

1. Depth from gradient

In Section 3.1 of the paper, we discussed how depth can be computed using light field gradients. Here, we describe the gradient computation on the light field patch si,j(p,q) in more detail. Given a 2×5 patch, our aim is to compute the trajectory direction around p, which is perpendicular to the gradient direction around the same pixel. Since the patch size is 2×5, the gradient computation is not well-defined, and using forward differences would lead to a 0.5 pixel shift of the computed values. However, this problem can easily be solved by employing a 5×5 patch interpolated from si,j(p,q): the first and the last rows of the new patch s∗i,j(p,q) are taken from the original patch, whereas the central three rows are interpolated linearly.

s∗i,j(p,q) =
⎡ a ⎤
⎢ 3/4 a + 1/4 b ⎥
⎢ 1/2 a + 1/2 b ⎥
⎢ 1/4 a + 3/4 b ⎥
⎣ b ⎦
(1)

where a and b are the two rows of si,j(p,q). Given this new patch, we apply a Sobel filter on s∗i,j(p,q) and compute the gradients ∇x s∗i,j(p,q) and ∇y s∗i,j(p,q). The linear interpolation between a and b keeps the relationship between the pixels in the x dimension the same, which is reflected in the computed gradients:
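As a concrete sketch (not the authors' code; the shifted-ramp patch below is an illustrative example), the interpolation of Eq. (1) and the Sobel gradients on s∗i,j(p,q) can be written as:

```python
import numpy as np

def interpolate_patch(a, b):
    """Build the 5x5 patch s* of Eq. (1) from rows a (through p) and b (through q)."""
    return np.stack([a,
                     0.75 * a + 0.25 * b,
                     0.50 * a + 0.50 * b,
                     0.25 * a + 0.75 * b,
                     b])

def sobel_at_center(s):
    """Unnormalized 3x3 Sobel x- and y-gradients at the central pixel of s."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])
    win = s[1:4, 1:4]                     # 3x3 window around the center
    return float((win * kx).sum()), float((win * kx.T).sum())

# Illustrative patch: a linear ramp that shifts by 0.5 px between the two rows.
x = np.arange(-2.0, 3.0)                  # columns centered on p and q
a, b = x, x - 0.5
gx, gy = sobel_at_center(interpolate_patch(a, b))   # gx = 8.0, gy = -1.0
```

Because the interpolation is linear, the y-gradient of s∗ is exactly the row difference b − a spread over four pixels, which is what Eq. (3) below exploits.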

∇x si,j(p,q) = ∇x s∗i,j(p,q). (2)

In the y dimension, we stretch the original patch by a factor of 4, such that the distance between p and q increases from a single pixel to 4 pixels. This means that the gradients are also related by the same factor:

∇y si,j(p,q) = 4 · ∇y s∗i,j(p,q). (3)

Note that we do not explicitly compute s∗i,j(p,q) or its gradients, but pre-compute the factors for computing the gradients of si,j(p,q) directly.
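To illustrate what such pre-computed factors can look like, one can fold the interpolation weights of Eq. (1), an unnormalized 3×3 Sobel filter on s∗, and the factor 4 of Eq. (3) into two fixed 2×3 kernels acting on the central columns of the original 2×5 patch. The specific coefficients below follow from that derivation and are a sketch, not values stated in the paper:

```python
import numpy as np

# Pre-computed factors: Sobel on the interpolated patch s* of Eq. (1),
# plus the factor 4 of Eq. (3), collapse to these 2x3 kernels applied
# directly to the two rows of the original patch.
KX = np.array([[-2.0, 0.0, 2.0],
               [-2.0, 0.0, 2.0]])
KY = np.array([[-2.0, -4.0, -2.0],
               [ 2.0,  4.0,  2.0]])

def gradients_direct(patch):
    """∇x s and ∇y s at the central pixel of a 2x5 patch, per Eqs. (2)-(3)."""
    win = patch[:, 1:4]                   # central three columns
    return float((win * KX).sum()), float((win * KY).sum())

# Illustrative patch: a linear ramp that shifts by 0.5 px between the rows.
x = np.arange(-2.0, 3.0)
gx, gy = gradients_direct(np.stack([x, x - 0.5]))   # gx = 8.0, gy = -4.0
```

This reproduces the same gradients as interpolating to 5×5 and filtering, without ever materializing s∗i,j(p,q).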

Figure 1: Given the light field patch si,j(p,q), and the gradients ∇x si,j(p,q) in green and ∇y si,j(p,q) in blue, the gradient direction θi,j(p,q) and the trajectory direction γi,j(p,q) can be computed using simple trigonometry. The gradient and trajectory directions are shown in cyan and yellow, respectively.

Given the two gradients, the gradient direction in the light field patch si,j(p,q) is computed as:

θi,j(p,q) = tan−1(∇y si,j(p,q) / ∇x si,j(p,q)). (4)

Since the direction of the trajectory γ is perpendicular to the gradient direction θ (the change in color is minimal along the trajectory direction), we can compute it as:

γi,j(p,q) = tan−1(−∇x si,j(p,q) / ∇y si,j(p,q)). (5)

See Figure 1 for a visualization of the gradient and trajectory directions.

Now that we know the gradient and trajectory directions, we can compute where p maps to in the second row of si,j(p,q). Between the two rows, the motion along the y direction equals 1. Given that the light field patch is centered around p and q, meaning that they have x coordinate 0, the motion along the x direction should equal psj, i.e., the x coordinate of the point where p maps to in the second row of the patch. The trajectory direction γi,j(p,q) should remain the same, meaning:

γi,j(p,q) = tan−1(1/psj). (6)

We can compute psj simply as:

psj = 1/tan(γi,j(p,q)). (7)
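Putting Eqs. (4)-(7) together, the mapping from patch gradients to psj can be sketched as follows; the gradient values are illustrative, and note that Eqs. (5) and (7) collapse algebraically to psj = −∇y s / ∇x s:

```python
import numpy as np

def gradient_direction(gx, gy):
    """Eq. (4): gradient direction theta in the light field patch."""
    return np.arctan2(gy, gx)

def trajectory_slope(gx, gy):
    """Eqs. (5) and (7): trajectory direction gamma, then psj = 1/tan(gamma)."""
    gamma = np.arctan2(-gx, gy)           # Eq. (5)
    return 1.0 / np.tan(gamma)            # Eq. (7)

# Illustrative gradients of a ramp patch whose pattern shifts by 0.5 px
# between the two rows; the recovered x-offset psj is 0.5.
gx, gy = 8.0, -4.0
psj = trajectory_slope(gx, gy)

# The trajectory direction is perpendicular to the gradient direction:
theta = gradient_direction(gx, gy)
gamma = np.arctan2(-gx, gy)               # gamma - theta = ±pi/2
```

Working with −∇y s / ∇x s directly also avoids evaluating tan near its singularity when the trajectory is nearly vertical in the patch.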


Figure 2: Comparison of our technique ("Ours") to the ACTS software [2] on the TRUNKS and THIN PLANT datasets, with close-ups on the reconstructed meshes. See Section 2 for a detailed discussion.

2. Comparison to ACTS

Here, we elaborate more on the comparison to the ACTS software [2], this time meshing the point clouds of ACTS. Since this software produces per-view depth maps without normal information, we first compute per-view normals using PCA over small patches centered at every pixel of the depth maps. We then merge the depth maps into a global point cloud, use a bounding box to keep only the foreground points, and mesh them using Poisson surface reconstruction [1]; see Figure 2.
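The per-view normal estimation step can be sketched as below. The pinhole backprojection, the focal length, and the window size are illustrative assumptions for the sketch, not values from the paper:

```python
import numpy as np

def normals_from_depth(depth, f=500.0, k=5):
    """Per-pixel normals from a depth map via PCA over k x k patches
    of backprojected 3D points.

    Assumes a simple pinhole camera with focal length f and the
    principal point at the image center (illustrative defaults).
    """
    h, w = depth.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    # Backproject every pixel to a 3D point.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.dstack(((u - cx) * depth / f, (v - cy) * depth / f, depth))
    r = k // 2
    normals = np.zeros((h, w, 3))
    for y in range(r, h - r):
        for x in range(r, w - r):
            nbhd = pts[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, 3)
            # PCA: the eigenvector of the smallest eigenvalue of the
            # neighborhood covariance is the local surface normal.
            _, vecs = np.linalg.eigh(np.cov(nbhd.T))  # eigenvalues ascending
            n = vecs[:, 0]
            # Orient the normal towards the camera (viewing along +z).
            normals[y, x] = n if n[2] < 0 else -n
    return normals

# Example: a fronto-parallel plane (constant depth) yields n = (0, 0, -1).
depth = np.full((11, 11), 5.0)
n_center = normals_from_depth(depth)[5, 5]
```

Oriented normals such as these are what Poisson surface reconstruction consumes alongside the merged point positions.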

Our reconstruction results are more faithful to the object and recover more details, especially in the THIN PLANT dataset, where most of the leaves are either removed or merged in the reconstructions of ACTS. Since the point clouds of ACTS can be noisy (see paper, Figure 6), the surface reconstruction step cannot generate the fine details. As for the TRUNKS dataset, ACTS combined with Poisson reconstruction generates results similar to ours. However, it also merges two tree trunks and generates extra floating regions, due to the consistent noise in the point clouds (see paper, Figure 6).

References

[1] M. Kazhdan and H. Hoppe. Screened Poisson surface reconstruction. ACM Trans. Graph., 32(3), 2013.
[2] G. Zhang, J. Jia, T. Wong, and H. Bao. Consistent depth maps recovery from a video sequence. PAMI, 31(6), 2009.

