
Walking through Sight: Seeing the Ability to See, in a 3-D Augmediated Reality Environment

Ryan Janzen, Seyed Nima Yasrebi, Avishek Joey Bose, Arjun Subramanian, Steve Mann
Department of Electrical and Computer Engineering

University of Toronto

Abstract—A 3-dimensional augmediated reality (AR) environment was designed, where users can physically walk through space, and by looking through AR glasses can see and interact with hidden veillance flux produced by cameras. Surveillance cameras, physically attached to a room, as well as handheld and body-worn cameras, each emit veillance flux and have a veillance field — a mathematical formulation of their capacity to see, as that capacity to see propagates through space.

In this work, for the first time, we bring the mathematical veillance field into 3D augmediated reality, to “see sight”.

I. INTRODUCTION

Recently we introduced veillance flux and the veillance field, a mathematical formulation to account for the ability to see as it propagates through space from a camera [1], [4].

An intersecting web of veillance field lines and veillance flux, ordinarily hidden in the world around us, is “emitted” by various surveillance cameras, sousveillance cameras (e.g. body-worn cameras [2][3]), and embedded vision on hands-free doors, faucets, and lighting systems.

The first measurements of the veillance field employed a laser-scanning method, to quantify the ability to see, as it propagated through space from cameras [1]. Bio-veillametrics and bio-veilluminescence were first published in [4], revealing the 3D veillance field emitted by a human eye. In this new work, we visualize veillance fields from cameras in real-time, to make seeing visible: to “visualize vision” and “see sight.”

II. SEEING AND MEASURING SIGHT: INITIAL CALIBRATION OF THE AR ENVIRONMENT

AR setup involves the placement of cameras and the initial test and measurement of their veillance flux. We employ a combination of dome-enclosure cameras, bracket-mounted cameras, and handheld cameras operated by users.

To first detect veillance flux emitted by those cameras, we use a combination of veillametrics [1] and field-of-view detection based on an array of LEDs with a video feedback loop, in a “video bug sweeper”, analogous to the audio bug sweepers used to detect hidden microphones [4]. Abakography [5] is then used to render a 2D visualization if desired (e.g. Fig. 1a). Finally, to prepare for AR rendering, veillance flux emitted by each camera-under-test is vectorized, by marking its edges using a handheld marker beacon and a 3D depth sensor.
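As a rough illustrative sketch of the field-of-view detection step (not the authors' implementation; the led_set driver call, thresholds, and timing below are hypothetical placeholders), one LED at a time can be blinked while the camera-under-test's feed is checked for the corresponding change:

    import time
    import numpy as np
    import cv2

    def led_visible_to_camera(cap, led_set, led_index, n_frames=10, threshold=30.0):
        # Blink one LED and test whether the blink is detectable in the camera-under-test's
        # video feed; if so, that LED position receives veillance flux (lies in the field of view).
        means = {}
        for state in (True, False):
            led_set(led_index, state)              # hypothetical LED-array driver call
            time.sleep(0.1)                        # let exposure / auto-gain settle
            frames = []
            for _ in range(n_frames):
                ok, frame = cap.read()
                if not ok:
                    return False
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
            means[state] = np.mean(frames, axis=0)
        diff = np.abs(means[True] - means[False])  # localized brightness change at the LED
        return float(diff.max()) > threshold

    # Usage sketch: sweep a hypothetical 16-LED array against the camera-under-test's feed.
    # cap = cv2.VideoCapture(0)
    # in_view = [i for i in range(16) if led_visible_to_camera(cap, led_set, i)]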

III. EGOGRAPHIC AND EXTROGRAPHIC AUGMEDIATED REALITY TO VISUALIZE THE VEILLANCE FIELD

Rendering veillance flux in 3D, as well as marking and tracking it spatially, is performed in our system using both egographic (body-mounted, outward-facing) depth sensors and extrographic (environment-mounted) sensors.

For an egographic sensor we use the 3D depth-sensing Meta 1 glasses (worn on the head to track camera/beacon positions), which also serve as a see-through AR display.

For an extrographic sensor, we have two implementations: one design uses a tablet computer programmed to optically recognize and track the motion of cameras-under-test directly, for deducing the motion of veillance (Fig. 3); the other design uses a stationary 3D camera to track users’ bodies in absolute position. The relative position vector from the egographic sensor is added to the absolute position vector from the extrographic body sensor, to give a final absolute position, during the calibration stage and during the real-time AR experience when tracking stationary and moving cameras.
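A minimal sketch of this frame composition follows (our notation; rotating the head-relative vector into the absolute frame is our assumption, since the text only states that the two vectors are added):

    import numpy as np

    def absolute_position(p_body_abs, R_head_abs, d_head_relative):
        # p_body_abs: (3,) absolute body/head position from the extrographic sensor.
        # R_head_abs: (3,3) head orientation in the absolute frame (from IMU / optical tracking).
        # d_head_relative: (3,) beacon or camera position from the egographic depth sensor,
        #                  expressed in head coordinates.
        return np.asarray(p_body_abs) + np.asarray(R_head_abs) @ np.asarray(d_head_relative)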

Once veillance-field calibration is complete, the AR experience can begin. Veillance emissions from cameras are visualized along with markup statistics (Fig. 3). Users can also point their own head-worn cameras at others to photograph them (i.e. to “shoot” a photo and emit veillance flux). The system tracks the position of the head-worn cameras using the inertial measurement unit (IMU) in each set of Meta glasses.

Augmediated reality (AR) veillance flux is rendered through each user’s AR display from the perspective of his/her current position, rendered stereoscopically using Unity3D, orienting in space in real-time through a combination of IMU readings and optical tracking.
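The paper does not specify how the IMU readings and optical tracking are combined; one common choice, shown here purely as an illustrative single-axis sketch, is a complementary filter in which the gyro provides fast updates and optical tracking corrects drift:

    def fuse_orientation(theta_prev, gyro_rate, theta_optical, dt, alpha=0.98):
        # theta_prev: previous fused orientation estimate (rad).
        # gyro_rate: angular rate from the IMU gyroscope (rad/s).
        # theta_optical: absolute orientation from optical tracking (rad).
        # alpha: weight favouring the low-latency IMU estimate over the drift-free optical one.
        theta_imu = theta_prev + gyro_rate * dt    # dead-reckoned from the gyro
        return alpha * theta_imu + (1.0 - alpha) * theta_optical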

IV. FURTHER INFORMATION

Video demonstrations can be seen at:

http://veillametrics.com

REFERENCES

[1] R. Janzen and S. Mann, “Veillance flux, vixels, veillons: An information-bearing extramissive formulation of sensing, to measure surveillance and sousveillance,” Proc. IEEE CCECE 2014, May 4-7, 2014, 10 pages.

[2] S. Mann, J. Nolan, and B. Wellman, “Sousveillance: Inventing and using wearable computing devices for data collection in surveillance environments,” Surveillance & Society, vol. 1, no. 3, pp. 331–355, 2003.

[3] V. Bakir, Sousveillance, Media and Strategic Political Communication: Iraq, USA, UK. Continuum International Publishing Group, 2010.

[4] R. Janzen and S. Mann, “Veillance dosimeter, inspired by body-worn radiation dosimeters, to measure exposure to inverse light,” Proc. IEEE GEM 2014, to appear, 3 pages, 2014.

[5] S. Mann, R. Janzen, T. Ai, S. N. Yasrebi, J. Kawwa, and M. A. Ali, “Toposculpting,” Proc. IEEE CCECE 2014, May 4-7, 2014, 10 pages.


Fig. 1. Augmediated Reality rendering of veillance flux: (a) Raster representation of veillance flux [1] from a security camera, detected in 3D by combining veillametrics [1] with abakography [5]. (b) Vector representation of the veillance field, rendered in 3D for real-time augmediated-reality (AR) visualization of invisible veillance fields as a user (c) walks through them in physical space. (d) depicts the veillance vector markup process, completed while setting up the AR environment.

[Fig. 2 block-diagram labels, left panel: surveillance cameras C1, C2; light-stick veillance visualizer computer; egographic 3D camera; AR display; IMU; beacon localization (local); marking beacon; vector memory; vector: head-to-beacon; wearable computer; body localization (absolute); location & orientation tracking; absolute-reference-frame 3D sensor (for initial setup and calibration only); vector: absolute-to-head; user wearing egographic A.R. glasses; perspective; 3D rendering; central server computer (for multi-player environment); multi-player position & veillance database; additional players.]

Fig. 2. (LEFT) Generalized signal flow of Veillance Augmediated Reality based on vision and IMU (inertial meas. unit) data, to be able to track cameras and participants’ bodies for perspective rendering. (RIGHT) Veillance flux 3D perspective rendering, using experimentally-measured data from the light-stick veillance visualizer (shown in diagram), measured at night and re-rendered the next day from a different perspective using image-based rendering (IBR).

Fig. 3. Real-time augmented reality to see veillance flux and veillance fields: View of the AR perspective where the system is overlaying statistics about veillance flux emitted from two different cameras. These veillance field reality-augmentations, rendered in 3D, track the cameras as the user moves. This is a type of veillametrics [1]: measuring the ability to measure, and seeing the ability to see.

Fig. 4. Progressive marking of a veillance field, after veillance measurement, in order to set up and begin the AR experience of veillance. This figure illustrates a method to create 3D vector renderings of the veillance field border, as opposed to our other methods that display a raster abakograph of veillance flux.