
Lytro Light Field Camera: from scientific research to a $50-million business

May 17, 2015


Weili Shi

I prepared these slides at a time when I had somehow lost my way. Lytro and its story make one willing to believe again in those brave, crazy ones who would like to change the world.
Transcript
Page 1: Lytro Light Field Camera: from scientific research to a $50-million business

SHI Weili, Tsinghua University. Photo from Lytro.com

Light Field Camera: from scientific research to a $50-million business

Page 2: Lytro Light Field Camera: from scientific research to a $50-million business

With a large aperture, DSLRs can achieve shallow focus.

Page 3: Lytro Light Field Camera: from scientific research to a $50-million business

Consumer digital cameras produce photos with deep focus.

Page 4: Lytro Light Field Camera: from scientific research to a $50-million business

What we’d prefer... (post-processed with Adobe Photoshop)

Page 5: Lytro Light Field Camera: from scientific research to a $50-million business

But what about loss of focus?... (previously irreparable)

Page 6: Lytro Light Field Camera: from scientific research to a $50-million business

Photos from Lytro.com

Introducing Lytro: refocus afterwards!

Page 8: Lytro Light Field Camera: from scientific research to a $50-million business

Technological features

+ Refocusing

+ Speed

+ Low-light sensitivity

+ 3D images

+ Easy sharing

+ Extending depth of field

- Low resolution

Page 9: Lytro Light Field Camera: from scientific research to a $50-million business

"Our vision is a product that allows people to shoot and share very simply."

Ren Ng, founder and CEO of Lytro

Photo and quote from dpreview.com

Page 10: Lytro Light Field Camera: from scientific research to a $50-million business

Industrial design

Photo from Lytro.com

Page 11: Lytro Light Field Camera: from scientific research to a $50-million business

Photo from Lytro.com

Industrial design

Page 12: Lytro Light Field Camera: from scientific research to a $50-million business

Interaction design

Photo from Lytro.com

Page 14: Lytro Light Field Camera: from scientific research to a $50-million business

"We worked really hard to create an iconic design that really conveys the idea that this is ‘camera 3.0’."

Ren Ng, founder and CEO of Lytro

Photo and quote from dpreview.com

Page 15: Lytro Light Field Camera: from scientific research to a $50-million business
Page 16: Lytro Light Field Camera: from scientific research to a $50-million business

Depth of field: source of blurriness

Figure from Wikipedia

Page 17: Lytro Light Field Camera: from scientific research to a $50-million business

Some information is missing: if we knew the direction of each ray of light, we could trace the rays back to their sources instead of mixing them up.

Depth of field: source of blurriness

Page 18: Lytro Light Field Camera: from scientific research to a $50-million business

4D light field

Figures from Ng, R. 2005. Fourier slice photography.

Page 19: Lytro Light Field Camera: from scientific research to a $50-million business

2D light field study with a linear camera

Figure: Parameterization for the light field flowing into the camera.

the photosensor. Let us refer to u as the directional axis, because the u intercept on the lens determines the direction at which the ray strikes the sensor. In addition, let us refer to x as the spatial axis. Of course in general the ray exists in 4D and we would consider intersections (u, v) at the lens and (x, y) on the film plane. Let us refer to the value of the light field along the depicted ray as L(x, y, u, v), or L(x, u) if we are considering the 2D simplification.

The Cartesian ray-space diagram on the right in the figure is a more abstract representation of the two-dimensional light field. The ray depicted on the left is shown as a point (x, u) on the Cartesian ray-space. In general each possible ray in the diagram on the left corresponds to a different point on the ray-space diagram on the right. The function defined over the ray-space plane is the 2D light field. Adelson and Bergen [1991] used these kinds of diagrams to illustrate simple features in the plenoptic function. Levoy and Hanrahan [1996] used it to visualize the density

Figure from Ng, R. 2006. Digital light field photography.

parameterization for the light field flowing into the camera

Page 20: Lytro Light Field Camera: from scientific research to a $50-million business

Figure from Ng, R. 2006. Digital light field photography.

Figure: The set of all rays flowing into the camera.

of rays in a sampled light field, and it has become very common in the light field literature.

2.3 Photograph Formation

In a conventional camera, a photograph forms on a piece of photosensitive material placed inside the camera at the imaging plane. The material may be silver-halide film in traditional cameras, where photons cause the development of silver crystals, or a CCD or CMOS photosensor in a digital camera, where photons generate free electrons that accumulate in each sensor pixel. Each position on the photosensitive imaging plane sums all the rays of light that terminate there.

In general, the weight of each ray in the sum depends on its incident direction with the sensor plane. For example, radiometry predicts that rays from the periphery of the lens,

the set of all rays flowing into the camera

2D light field study with a linear camera
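To make the projection concrete, here is a minimal Python/NumPy sketch (my illustration, not code from the dissertation). It assumes the light field is stored as an array lf[u, v, s, t], with directional axes (u, v) and spatial axes (s, t), and neglects the directional weighting effects discussed above:

```python
import numpy as np

def conventional_photograph(lf):
    """Form a conventional photograph from a 4D light field by integral
    projection: each spatial pixel (s, t) sums the rays arriving from
    every lens position (u, v). Directional weighting (oblique rays,
    sensor shadowing) is neglected, matching the idealized model.
    """
    return lf.mean(axis=(0, 1))  # average over the aperture axes (u, v)

# Toy light field sized like the Stanford prototype described later:
# 14 x 14 directional samples, 292 x 292 spatial samples.
lf = np.random.rand(14, 14, 292, 292)
photo = conventional_photograph(lf)  # shape: (292, 292)
```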

Page 21: Lytro Light Field Camera: from scientific research to a $50-million business

Figure from Ng, R. 2006. Digital light field photography.

the cone of rays summed to produce one pixel in a photograph

Figure: The cone of rays summed to produce one pixel in a photograph.

which arrive at the sensor from more oblique angles, contribute less energy to the value of the pixel. Another example is that the photosensitive portion of a pixel in a CMOS sensor is typically obscured by an overlay of metal wires [Catrysse and Wandell 2002], so rays from unshadowed directions will contribute more light. Nevertheless, these directional effects are in some sense undesirable artifacts due to physical or implementation limitations, and the figure neglects them in illustrating the formation of somewhat idealized photographs.

The figure draws in blue the cone of rays contributing to one photograph pixel value. This cone corresponds (in 2D) to the blue vertical strip on the ray-space diagram because the rays in the cone share the same x film intercept, but vary over all u positions on the lens. Of course different pixels in the photograph have different x intercepts, so they correspond to different vertical lines on the ray-space.

In fact, the ray-space drawn in the figure is overlaid with vertical strips, where each strip

2D light field study with a linear camera

Page 22: Lytro Light Field Camera: from scientific research to a $50-million business

A photograph is an integral projection of the canonical light field.

Quote from Ng, R. 2006. Digital light field photography.

Page 23: Lytro Light Field Camera: from scientific research to a $50-million business

Figure: The projection of the light field corresponding to focusing further and closer than the chosen x parameterization plane for the light field.

Figure from Ng, R. 2006. Digital light field photography, and Wikipedia.

Refocusing

Page 24: Lytro Light Field Camera: from scientific research to a $50-million business

A photograph is an integral projection of the canonical light field, where the trajectory of the projection depends on the depth at which the photograph is focused.

Quote from Ng, R. 2006. Digital light field photography.

Page 25: Lytro Light Field Camera: from scientific research to a $50-million business

Figure from Ng, R. 2006. Digital light field photography.

Recording the light field

Figure: Sampling of a photograph’s light field provided by a plenoptic camera.

camera the width of a grid column is the width of a photosensor pixel. In the plenoptic camera, on the other hand, the grid cells are shorter and wider. The column width is the width of a microlens, and the column is vertically divided into the number of pixels across the width of the microlens. In other words, the plenoptic camera sampling grid provides more specificity in the u directional axis but less specificity in the x spatial axis, assuming a constant number of photosensor pixels.

This is the fundamental trade-off taken by the light field approach to imaging. For a fixed sensor resolution, collecting directional resolution results in lower resolution final images, with essentially as many pixels as microlenses. On the other hand, using a higher-resolution sensor allows us to add directional resolution by collecting more data, without necessarily sacrificing final image resolution. As discussed in the introductory chapter, much higher resolution sensors may be possible in today’s semiconductor technology.

sampling of a photograph’s light field provided by a plenoptic camera
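The trade-off is easy to quantify. A small sketch using the Stanford prototype's numbers, which are quoted later in these slides:

```python
# Spatial/directional resolution split in a plenoptic camera,
# using the Stanford prototype's numbers quoted later in these slides.
sensor_px = 4000       # sensor is roughly 4000 x 4000 pixels
microlenses = 296      # microlens array is 296 x 296 lenslets

directional = sensor_px // microlenses  # ~13-14 pixels under each microlens
final_image = microlenses               # one output pixel per microlens

print(f"directional samples: ~{directional} x {directional}")  # ~13 x 13
print(f"final photograph:    ~{final_image} x {final_image}")  # 296 x 296
```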

Page 26: Lytro Light Field Camera: from scientific research to a $50-million business

Figure from Ng, R. 2006. Digital light field photography.

Figure: Raw light field photograph read off the photosensor underneath the microlens array. The figure shows a crop of approximately one quarter the full image so that the microlenses are clearly visible in print.

Recording the light field

Page 27: Lytro Light Field Camera: from scientific research to a $50-million business

Figure from Ng, R. 2006. Digital light field photography.

Figure: Raw light field photograph read off the photosensor underneath the microlens array. The figure shows a crop of approximately one quarter the full image so that the microlenses are clearly visible in print.

Recording the light field

Integrating over each microlens image gives the conventional photograph.

lighting incident upon each microlens. A second, equivalent interpretation is that the light field provides pictures of the scene from an array of viewpoints spread over the extent of the lens aperture. Since the lens aperture has a finite area, these different views provide some parallax information about the world. This leads to the third property of the light field: it provides information about the depth of objects in the scene.

The sections below highlight these different interpretations with three visualizations of the light field. Each of these visualizations is a flattening of the 4D light field into a 2D array of images. Moving across the array traverses two dimensions of the light field, and each image represents the remaining two dimensions of variation.

Raw Light Field Photograph

The simplest view of the recorded light field is the raw image of pixel values read off the photosensor underneath the microlens array, as shown in the figure. Macroscopically, the raw image appears like a conventional photograph, focused on the girl wearing the white

Figure: Conventional photograph computed from the light field photograph in the previous figure.

cap, with a man and a girl blurred in the background. Looking more closely, the raw image is actually composed of an array of disks, where each disk is the image that forms underneath one microlens. Each of these microlens images is circular because it is a picture of the round aperture of the lens viewed from that position on the film. In other words, the raw light field photograph is an (x, y) grid of images, where each image shows us the light arriving at that film point from different (u, v) positions across the lens aperture.

The zoomed images at the bottom of the figure show detail in the microlens images in two parts of the scene: the nose of the man in the background who is out of focus, and the nose of the girl in the foreground who is in focus. Looking at the first of these, we see that the light coming from different parts of the lens is not the same. The light coming from the
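Equivalently, fixing one (u, v) position across all microlenses yields a sub-aperture image: the scene as seen through one small region of the lens. A sketch, continuing the lf[u, v, s, t] array convention assumed earlier:

```python
import numpy as np

lf = np.random.rand(14, 14, 292, 292)  # toy light field, lf[u, v, s, t]

def sub_aperture_image(lf, u, v):
    """Extract the sub-aperture image for lens position (u, v): the
    picture formed by rays passing through one small region of the
    aperture, assembled across all microlenses.
    """
    return lf[u, v]  # a 2D image over the spatial axes (s, t)

# Views from opposite edges of the aperture differ by parallax,
# which is exactly the information that refocusing exploits.
left_view = sub_aperture_image(lf, u=0, v=7)
right_view = sub_aperture_image(lf, u=13, v=7)
```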

Page 28: Lytro Light Field Camera: from scientific research to a $50-million business

Refocusing

Two sub-aperture photographs obtained from a light field by extracting the shown pixel under each microlens (depicted on left). Note that the images are not the same, but represent different viewpoints.

(a): No refocus (b): Refocus closer (c): Refocus further

Figure: Shift-and-add refocus algorithm, illustrated with just two sub-aperture images for didactic purposes.

with both the distance of the sub-aperture from the center of the lens (u, v), and the relative extent, α, to which we want to refocus away from the optical focal plane. The figure visualizes the shifts for three different virtual film planes, summing just two sub-aperture images for illustrative purposes. Using two sub-aperture images causes out-of-focus regions to appear as twice-repeated edges rather than a uniform blur, making it easier to see the shift effect. Panel (a) corresponds to no refocusing, with α = 1, and a shift of 0 for both sub-aperture images. The remaining two images show that the direction of shifts depends on whether we are focusing closer or farther to align features at the desired depth.

The minimal discretization of this algorithm is to shift and add just the sub-aperture images in the raw light field shown earlier. For most applications, the quality of the resulting photographs is quite good. However, this process may generate undesired

Figure from Ng, R. 2006. Digital light field photography.
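A hedged sketch of shift-and-add refocusing in Python/NumPy (my reconstruction from the description above, not Ng's code), again assuming a light field array lf[u, v, s, t]:

```python
import numpy as np

def refocus_shift_and_add(lf, alpha):
    """Digital refocusing by shift-and-add over lf[u, v, s, t].
    alpha = 1 reproduces the optical focal plane; other values refocus
    to a different virtual film plane. Integer np.roll shifts are used
    for brevity; subpixel interpolation gives smoother results.
    """
    n_u, n_v = lf.shape[:2]
    cu, cv = (n_u - 1) / 2, (n_v - 1) / 2
    photo = np.zeros(lf.shape[2:])
    for u in range(n_u):
        for v in range(n_v):
            # Shift each sub-aperture image in proportion to its
            # distance from the lens center and to (1 - 1/alpha).
            ds = round((u - cu) * (1 - 1 / alpha))
            dt = round((v - cv) * (1 - 1 / alpha))
            photo += np.roll(lf[u, v], (ds, dt), axis=(0, 1))
    return photo / (n_u * n_v)  # average over the aperture

lf = np.random.rand(14, 14, 292, 292)     # toy light field
photo_a = refocus_shift_and_add(lf, 0.8)  # one virtual film plane
photo_b = refocus_shift_and_add(lf, 1.2)  # another
```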

Page 29: Lytro Light Field Camera: from scientific research to a $50-million business

Figure from Ng, R. 2006. Digital light field photography.

Figure: Overview of processing the recorded light field.

it should be emphasized that the availability of a light field permits ray-tracing simulations of much more general imaging configurations, such as view cameras where the lens and sensor are not parallel, or even imaging where each pixel is focused with a different depth of field or a different depth. This final example plays a major role in the next chapter.

3.3 Three Views of the Recorded Light Field

In sampling the light traveling along all rays inside the camera, the light field provides rich information about the imaged scene. One way to think of the data is that it captures the

Refocusing

Figure: The projection of the light field corresponding to focusing further and closer than the chosen x parameterization plane for the light field.

Page 30: Lytro Light Field Camera: from scientific research to a $50-million business

Refocusing

Shift-and-add refocus algorithm, illustrated with just two sub-aperture images for didactic purposes.


Figure from Ng, R. 2006. Digital light field photography.

Page 31: Lytro Light Field Camera: from scientific research to a $50-million business

Refocusing

Examples of refocusing (a1–a5) and extended depth of field (b).

Figure from Ng, R. 2006. Digital light field photography.

(a1) (a2) (a3) (a4) (a5) (b)

Figure: Examples of refocusing (a1–a5) and extended depth of field (b).

lens aperture in the classical photographic method of optically extending the depth of field. Image b was computed by combining the sharpest portions of images a1–a5 [Agarwala et al. 2004], and can be thought of as refocusing each pixel at the depth of the closest object in that direction.

A crucial advantage of the digitally extended depth of field photograph over the classical photograph is that the former uses the light coming through a larger lens aperture. This means that recording a light field to obtain high depth of field captures light more efficiently, allowing less grainy images with higher signal-to-noise ratio (SNR). Later sections study these improvements theoretically and through numerical experimentation, demonstrating linear improvement with the directional u resolution.

Page 32: Lytro Light Field Camera: from scientific research to a $50-million business

The sub-aperture photographs themselves have nearly infinite DOF, resulting from the small effective aperture.


Figure from Ng, R. 2006. Digital light field photography.

Extending the depth of field

Page 33: Lytro Light Field Camera: from scientific research to a $50-million business

Extending the depth of field

Refocusing each pixel gives extended DOF with much higher SNR.

Figure from Ng, R. 2006. Digital light field photography.

(a): Unrefocused (b): Sub-aperture image (c): Extended DOF

Figure: Comparison of a sub-aperture image and an image computed with digitally extended depth of field.

numerical integration of the refocusing equation, we obtain high SNR by combining the contributions of the photons from all over the aperture. The resulting image, shown in panel (c), matches the depth of field of the sub-aperture image, but is far less grainy. In this dissertation, the phrase digitally extending the depth of field will be reserved for the process of computing high depth of field in this high-SNR manner.

The epipolar images provide a visual way to conceptualize this process. Ordinary refocusing corresponds to projecting the entire epipolar image at a specific trajectory, as described earlier. This is how the images in panel (a) were produced. In contrast, digitally extending the depth of field corresponds to projecting each column of the epipolar image along an individual trajectory that best aligns with the local features of the light field. Panel (b) can be thought of as projecting the blue pixels along a trajectory of negative slope and the brown pixels along a vertical trajectory.

The implementation of digitally extending the depth of field used in this dissertation begins by refocusing at all depths in the scene to create a focal stack of images. For example,
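A simplified sketch of that pipeline, building on the refocus_shift_and_add sketch from earlier. Note the simplification: I select the sharpest focal slice per pixel with a gradient measure, standing in for the digital photomontage technique the dissertation actually uses:

```python
import numpy as np

def extend_depth_of_field(lf, alphas):
    """Digitally extended depth of field: refocus at a range of depths
    to build a focal stack, then keep each pixel from the slice where
    it is locally sharpest. `refocus_shift_and_add` is the sketch
    defined earlier; `alphas` spans the scene's depths.
    """
    stack = np.stack([refocus_shift_and_add(lf, a) for a in alphas])

    # Gradient magnitude as a simple per-pixel sharpness measure.
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharpness = np.hypot(gx, gy)

    best = np.argmax(sharpness, axis=0)  # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

edof = extend_depth_of_field(lf, alphas=np.linspace(0.7, 1.4, 12))
```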

Page 34: Lytro Light Field Camera: from scientific research to a $50-million business

In the spatial domain, photographs are integral projections of the light field.

In the Fourier domain, a photograph is just a 2D slice of the 4D light field.

Quote from Ng, R. 2005. Fourier slice photography.

Page 35: Lytro Light Field Camera: from scientific research to a $50-million business

Diagram: the classical Fourier slice theorem. Integral projection in the spatial domain corresponds to slicing in the Fourier domain, connected by the 2D and 1D Fourier transforms.

Page from Ng, R. 2006. Digital light field photography.

Fourier slice vs. integral projection

Page 36: Lytro Light Field Camera: from scientific research to a $50-million business

Diagram: the classical Fourier slice theorem (2D Fourier transform, 1D Fourier transform, integral projection, slicing).

Page from Ng, R. 2006. Digital light field photography.

Fourier slice vs. integral projection

Page 37: Lytro Light Field Camera: from scientific research to a $50-million business

Page from Ng, R. 2006. Digital light field photography.

Fourier slice vs. integral projection

Diagram: the generalized theorem for photography. Integral projection in the spatial domain corresponds to slicing in the Fourier domain, connected by the 4D Fourier transform and the inverse 2D Fourier transform.

Page 38: Lytro Light Field Camera: from scientific research to a $50-million business

Page from Ng, R. 2006. Digital light field photography.

Fourier slice vs. integral projection

Page 39: Lytro Light Field Camera: from scientific research to a $50-million business

In the Fourier domain, a photograph is just a 2D slice of the 4D light field.

That’s much simpler than in the spatial domain, where photographs are integral projections of the light field.

Quote from Ng, R. 2005. Fourier slice photography.

Page 40: Lytro Light Field Camera: from scientific research to a $50-million business

Fourier-domain slicing algorithm
Pre-process: O(N^4 log N)
Refocusing: O(N^2 log N)

Spatial-domain integration algorithm
Refocusing: O(N^4)

Quote from Ng, R. 2005. Fourier slice photography.
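The asymptotics follow from the structure of the algorithm: one 4D FFT up front, then each refocus is only a 2D slice extraction plus an inverse 2D FFT. A rough sketch of mine (nearest-neighbor slicing, with a slice trajectory written to match the shift-and-add convention used earlier; the paper derives the exact trajectory and uses careful resampling filters):

```python
import numpy as np

def lightfield_spectrum(lf):
    """One-time pre-process, O(N^4 log N): the 4D FFT of the light field."""
    return np.fft.fftshift(np.fft.fftn(lf))

def fourier_slice_refocus(spectrum, alpha):
    """Per-refocus work, O(N^2 log N): extract the 2D slice of the 4D
    spectrum that corresponds to alpha, then invert it. Nearest-neighbor
    slicing; normalization constants omitted for brevity.
    """
    n_u, n_v, n_s, n_t = spectrum.shape
    ks = np.arange(n_s) - n_s // 2
    kt = np.arange(n_t) - n_t // 2
    KS, KT = np.meshgrid(ks, kt, indexing="ij")
    # Slice trajectory k_u = -(1 - 1/alpha) k_s (likewise for v, t),
    # matching the shift-and-add parameterization; clipped to the band.
    KU = np.clip(np.round(-(1 - 1 / alpha) * KS) + n_u // 2, 0, n_u - 1)
    KV = np.clip(np.round(-(1 - 1 / alpha) * KT) + n_v // 2, 0, n_v - 1)
    sl = spectrum[KU.astype(int), KV.astype(int),
                  KS + n_s // 2, KT + n_t // 2]
    return np.real(np.fft.ifft2(np.fft.ifftshift(sl)))

lf = np.random.rand(14, 14, 64, 64)                  # toy light field
spectrum = lightfield_spectrum(lf)                   # pay the 4D FFT once
photo = fourier_slice_refocus(spectrum, alpha=1.2)   # each refocus is cheap
```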

Page 41: Lytro Light Field Camera: from scientific research to a $50-million business

Figure from Ng, R. 2006. Digital light field photography.

Resolution of the light field sample

Figure: Sampling of a photograph’s light field provided by a plenoptic camera.

sampling of a photograph’s light field provided by a plenoptic camera

Page 42: Lytro Light Field Camera: from scientific research to a $50-million business

Band-Limited Analysis

Light field captured by the camera

Bandwidth of the measured light field

Page from Ng, R. 2006. Digital light field photography.

Band-limited analysis

Page 43: Lytro Light Field Camera: from scientific research to a $50-million business

Band-Limited Analysis

Page from Ng, R. 2006. Digital light field photography.

Band-limited analysis

Page 44: Lytro Light Field Camera: from scientific research to a $50-million business

Page from Ng, R. 2006. Digital light field photography.

Band-limited analysis

Page 45: Lytro Light Field Camera: from scientific research to a $50-million business

Page from Ng, R. 2006. Digital light field photography.

Band-limited analysis

Page 46: Lytro Light Field Camera: from scientific research to a $50-million business

Ren Ng advanced the theory and technology of the light field camera during his doctoral program.

Page 47: Lytro Light Field Camera: from scientific research to a $50-million business

"It was a scientific breakthrough we were working towards.

The next step we've been working on has been making a commercial breakthrough."

Ren Ng, founder and CEO of Lytro

Photo and quote from dpreview.com

Page 48: Lytro Light Field Camera: from scientific research to a $50-million business

Photo from Lytro.com

Technological evolution: from research to consumer product

early 2000s

Page 49: Lytro Light Field Camera: from scientific research to a $50-million business

Technological evolution: from research to consumer product

mid 2000s

Stanford Tech Report CTSR 2005-02

Figure 7: Technique for ameliorating vignetting. Top: Moving the pinhole observer beyond the bounds shown in Figure 6 results in vignetting because some required rays are unavailable (shaded gray). Bottom: To eliminate the vignetting, we use the closest available rays, by clamping the missing rays to the bounds of the aperture (shaded region). Note that these rays do not pass through the original pinhole, so the resulting multi-perspective image has a different center of projection for each ray in the corrected periphery.

approximately 4000×4000 pixels that are 9 microns wide. Our microlens array was made by Adaptive Optics Associates (part 0125-0.5-S). It has 296×296 lenslets that are 125 microns wide, square shaped, and square packed with very close to 100% fill-factor. The focal length of the microlenses is 500 microns, so their f-number is f/4. For the body of our camera we chose a Contax 645, and used two lenses: a 140 mm f/2.8 and 80 mm f/2.0. We chose lenses with wide maximum apertures so that, even with extension tubes attached for macrophotography, we could achieve an f/4 image-side f-number to match the f-number of the microlenses.

We glued the microlens array to a custom aluminum lens holder, screwed a custom base plate to the digital back over the photosensor, and then attached the lens holder to the base plate with three screws separated by springs (see Figure 8). Adjusting the three screws provided control over separation and tilt. The screws have 56 threads per inch, and we found that we could control separation with a mechanical resolution of 10 microns. Figure 8 shows a cross-section through the assembled parts.

We calibrated the separation (a one-time procedure) using a pinhole light source that produces an array of sharp spots on the sensor (one under each microlens) when the correct separation is achieved. The procedure took 10–20 iterations of screw adjustments. We created a high contrast pinhole source by stopping down the 140 mm main lens to its minimum aperture and attaching 78 mm of extension tubes. This creates an aperture of approximately f/50, which we aimed at a white sheet of paper.

The final resolution of the light fields that we capture with our prototype is 292×292 in the spatial st axes, and just under 14×14 in the uv directional axes. Figure 9 is a photograph showing our prototype in use.

5.2 Software

Our first software subsystem produces 4D light fields from the 2D sensor values. The first step is demosaicking: interpolating RGB values at every pixel from the values of the color filter array [Hamilton and Adams 1997]. We then correct for slight lateral misalignments between the microlens array and the photosensor by rotating the raw 2D image (by less than 0.1 degrees), interpolate the image upwards slightly to achieve an integral number of pixels per microlens, and then dice the array of aligned subimages to produce

Figure 8: Top: Exploded view of assembly for attaching the microlens array to the digital back. Bottom: Cross-section through assembled parts.

Figure 9: Our light field camera in use.

the 4D light field, L(u, v, s, t). (s, t) selects the subimage, and (u, v) selects the pixel within the subimage.

The second subsystem processes light fields to produce final photographs. Our various implementations of synthetic photography are simply different numerical techniques for approximating Equations 5 and 6.

In the case of refocusing, we find that the traditional shifting and summing of the sub-aperture images as in previous work [Vaish et al. 2004; Levoy et al. 2004] works well in most cases. For large motions in the focal plane, it can leave noticeable artifacts in blurred regions due to undersampling of the directional variation. For better image quality (with longer integration times), we use higher-order quadrature techniques, such as supersampling with a quadrilinear reconstruction filter.

To account for vignetting, we normalize the images by dividing each pixel by the fraction of integrated rays that fall within the bounds of the acquired light field. This eliminates darkening of borders in refocusing, for instance. This technique breaks down in the case of classical light field rendering, where we use the method of using closest available rays as replacements for vignetted rays, as described in Section 4.3.

Finally, we have experimented with producing extended depth of field images, by refocusing a light field at multiple depths and applying the digital photomontage technique of Agarwala et al. [2004]. Although we could produce extended depth of field simply by extracting a single sub-aperture image, this technique would be noisier because it integrates less light. Figure 15 of Section 7 illustrates this phenomenon.


Figures from Ng, R., et al. 2005. Light field photography with a hand-held plenoptic camera. Stanford Tech Report CTSR 2005-02.
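The "dicing" step lends itself to a compact sketch (mine, under the report's conventions), assuming a demosaicked, rotation-corrected raw image with an integral number of pixels per microlens:

```python
import numpy as np

def dice_light_field(raw, n_lenses, px_per_lens):
    """Dice an aligned sensor image into the 4D light field L(u, v, s, t):
    (s, t) selects the microlens subimage, (u, v) the pixel within it.
    """
    h = n_lenses * px_per_lens
    raw = raw[:h, :h]  # crop to a whole number of microlens subimages
    # (s*px + u, t*px + v)  ->  axes (s, u, t, v)  ->  (u, v, s, t)
    lf = raw.reshape(n_lenses, px_per_lens, n_lenses, px_per_lens)
    return lf.transpose(1, 3, 0, 2)

# With the prototype's numbers: 296 x 296 lenslets, ~14 pixels per lenslet.
raw = np.zeros((296 * 14, 296 * 14))  # placeholder aligned raw image
lf = dice_light_field(raw, n_lenses=296, px_per_lens=14)
print(lf.shape)  # (14, 14, 296, 296)
```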

Page 50: Lytro Light Field Camera: from scientific research to a $50-million business

Technological evolution: from research to consumer product

2011: LytroImages from Lytro.com

Page 51: Lytro Light Field Camera: from scientific research to a $50-million business

Figure from Lytro.com

Technological evolution: from research to consumer product

2011: Lytro

Page 52: Lytro Light Field Camera: from scientific research to a $50-million business

The company was founded in 2006. It has raised approximately $50 million of venture capital.

Its first camera went on sale on October 19, 2011, and began shipping on February 29, 2012,

starting at an affordable price of $399.

Page 53: Lytro Light Field Camera: from scientific research to a $50-million business

"At first we'll be making those decisions for the user - so that we can make the process as simple as possible but, further down the line, we'll provide tools to give more control over the final output.

It's important to understand that Lytro's camera will record full light fields at day one."

Ren Ng, founder and CEO of Lytro

Photo and quote from dpreview.com

Page 54: Lytro Light Field Camera: from scientific research to a $50-million business

LYTRO User Manual, p. 13

Lytro Desktop Application

1. Install the Lytro application. The Lytro Desktop application is required to interact with the light field data format (.lfp). The Lytro Desktop software comes on the Lytro camera. The install window will pop up the first time you plug your camera into your computer. You can then start the install process. If it doesn’t, find the disk image on your desktop to start the install.

2. Unplug and re-plug the camera. After you install the software, you must plug the camera back in to prompt an import of your first light field pictures.

3. Backup process begins. A backup process will start after the first time the Lytro Desktop software runs. This happens only the first time you plug your camera in and takes about 4–5 minutes.

Reminder: the minimum spec is Mac OS X 10.6.6 or higher.

Images from Lytro User Manual and Lytro.com

Page 55: Lytro Light Field Camera: from scientific research to a $50-million business

"We're very keen to see light field images develop through an ecosystem of software."

Ren Ng, founder and CEO of Lytro

Photo and quote from dpreview.com

Page 56: Lytro Light Field Camera: from scientific research to a $50-million business

Back to its limitation...

+ Refocusing

+ Speed

+ Low-light sensitivity

+ 3D images

+ Easy sharing

+ Extending depth of field

- Low resolution

Solution: a much higher density of microlenses and sensor pixels
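The arithmetic behind that solution, as a quick sketch (illustrative numbers, not Lytro's design targets): keeping the prototype's 14 × 14 directional sampling while delivering megapixel-class output multiplies the required sensor pixel count by roughly 200, which is why Ng points to sensors with hundreds of millions of pixels, as quoted on the next slide.

```python
# Sensor pixels needed for a given output resolution at fixed
# 14 x 14 directional sampling (illustrative, not Lytro's targets).
directional = 14 * 14  # ~196 samples under each microlens
for output_mp in (1, 4, 10):
    sensor_mp = output_mp * directional
    print(f"{output_mp} MP output -> ~{sensor_mp} MP sensor")
# Even 1 MP of output already calls for a sensor of ~196 million
# pixels, i.e. "hundreds of millions of pixels".
```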

Page 57: Lytro Light Field Camera: from scientific research to a $50-million business

"It's not technological limitations that are defining that figure, it's a marketing-driven progression.

If you applied the technology being developed for mobile phone cameras and applied it to an APS-C sensor, you could in theory make a sensor with hundreds of millions of pixels."

Ren Ng, founder and CEO of Lytro

Photo and quote from dpreview.com

Page 58: Lytro Light Field Camera: from scientific research to a $50-million business

Or think about Nokia’s 41-megapixel 808 PureView!

Pictures from Nokia

Page 59: Lytro Light Field Camera: from scientific research to a $50-million business

Revolution led by the crazy one

Photo from NewDealDesign

“Lytro is developing a new type of camera that dramatically changes photography for the first time since the 1800s.”

–TechCrunch

Page 61: Lytro Light Field Camera: from scientific research to a $50-million business

Light Field Camera: from scientific research to a $50-million business

SHI Weili, Tsinghua University. Photo from Lytro.com