Mobile 3D Vision
3D Scene Reconstruction with a Trifocal View on a Mobile Device

Sebastian Otte · Ulrich Schwanecke · Peter Barth
Dept. of Design, Computer Science and Media, RheinMain University of Applied Sciences
E-mail: [email protected]

Fig. 1 Capturing 3D data using a smartphone and our polychromatic three-view combiner. From left to right: user taking a picture, main menu of our smartphone app, the image taken, and the resulting 3D data set.

Abstract This paper presents a single-shot method for three-dimensional (3D) shape acquisition. The proposed approach is capable of transforming a standard camera-equipped smartphone into a 3D measuring device. It is based on two components. First, a hardware device, which we call a polychromatic three-view combiner, captures three different views of the observed scene and merges them into a single superimposed color image. Second, a software application running on the smartphone separates the superimposed views into three greyscale images, determines 3D data using three-view geometry, and visualizes the result on the mobile device.

1 Introduction

Collecting 3D shape information has become increasingly popular in almost all areas of science and technology and is now becoming widely available to consumers as well. Many systems for 3D scanning are available, based on numerous different methods such as multi-view stereo, structured light projection, photometric stereo, or time-of-flight approaches; for an overview of 3D shape acquisition methods see, for example, [9,2]. Most of these technologies are based on costly dedicated hardware setups and in addition require extensive calibration procedures. Recently, some more convenient approaches have emerged, such as the hand-held photometric stereo camera presented in [7].
This approach works with a simple hardware setup consisting of a camera with a light source mounted on it, and despite its simplicity it produces high-quality results. Furthermore, first applications such as the Trimensional¹ app on the iPhone were introduced that capture 3D information with a standard smartphone. Here, a shape-from-shading approach is used that recovers the 3D information from a series of four pictures taken with the same camera pose but under different lighting conditions. A major drawback of all the previously described methods and applications without dedicated hardware is that they need several images – either from the same camera pose under different illumination conditions, or from different camera poses under the same illumination condition – to gather 3D information. In real-life scenarios this most certainly causes motion artefacts, which may render the resulting 3D information unusable.

Without dedicated hardware, 3D information will contain artefacts caused by movement of the camera or the object, or by illumination changes. Thus, to reliably capture 3D information, dedicated hardware needs to be employed. Indeed, some new smartphones, such as the LG Thrill and the HTC EVO 3D, have become available that are equipped with a stereo camera system. Therefore, they can reliably capture 3D information with a single shot.

¹ www.trimensional.com
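The first step of the pipeline described in the abstract – splitting the combiner's superimposed color image into three greyscale views – can be sketched as follows. This is a minimal illustration assuming each view is encoded in one of the R, G, and B channels; `separate_views` and the synthetic data are hypothetical, not the authors' implementation:

```python
import numpy as np

def separate_views(superimposed: np.ndarray):
    """Split an H x W x 3 superimposed color image into three
    greyscale views, one per color channel (R, G, B)."""
    assert superimposed.ndim == 3 and superimposed.shape[2] == 3
    red, green, blue = (superimposed[:, :, c] for c in range(3))
    return red, green, blue

# Synthetic example: three distinct 2x2 "views" merged into one color image,
# mimicking what the polychromatic three-view combiner produces optically.
views = [np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 30)]
merged = np.stack(views, axis=2)   # the superimposed color image
r, g, b = separate_views(merged)
print(r[0, 0], g[0, 0], b[0, 0])   # 10 20 30
```

In a real capture the channels would not separate this cleanly: crosstalk between color channels (ghosting, as studied for anaglyph images in [10]) would have to be compensated before the three-view geometry stage.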
3. Wilhelm Burger. Digital Image Processing: An Algorithmic Introduction Using Java. Springer, New York, 2008.
4. M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24:381–395, June 1981.
5. C. Harris and M. Stephens. A combined corner and edge detector. In Proceedings of the Fourth Alvey Vision Conference, pages 147–151, 1988.
6. R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
7. T. Higo, Y. Matsushita, N. Joshi, and K. Ikeuchi. A hand-held photometric stereo camera for 3-D modeling. In Proc. ICCV, 2009.
8. Mingxing Hu and Baozong Yuan. Robust estimation of trifocal tensor using messy genetic algorithm. In 16th International Conference on Pattern Recognition, volume 4, pages 347–350. IEEE, 2002.
9. B. Jähne. Digital Image Processing, 6th ed. Springer, 2005.
10. A. A. Krupev and A. A. Popova. Ghosting reduction and estimation in anaglyph stereoscopic images. In IEEE International Symposium on Signal Processing and Information Technology, pages 375–379, 2008.
11. M. I. A. Lourakis and A. A. Argyros. Fast trifocal tensor estimation using virtual parallax. In IEEE International Conference on Image Processing (ICIP), volume 2, pages II-93–96. IEEE, September 2005.
12. D. G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE, 1999.
13. Timo Ojala, Matti Pietikäinen, and Topi Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971–987, 2002.
14. William Press. Numerical Recipes: The Art of Scientific Computing, 3rd ed. Cambridge University Press, Cambridge, 2007.
15. E. Rosten and T. Drummond. Machine learning for high-speed corner detection. In European Conference on Computer Vision.