Natural and ecological human-agent interaction: case studies in virtual and augmented reality environments
Manuela Chessa, PhD
University of Genoa - Department of Informatics, Bioengineering, Robotics and System Engineering (DIBRIS)
Tutorial: Active Vision and Human Robot Collaboration
My research: Visual perception
• The experimental evidence guides the design of visual technologies that diminish visual fatigue → human-computer interfaces (based on VR/AR/MR)
• The experimental evidence guides the design of artificial vision systems that have real-world performances → bio-inspired computer vision (based on neural paradigms)
Today’s tutorial: Perceptual aspects of Human-Agent Interaction and techniques to achieve a natural and ecological interaction (case studies in Virtual and Augmented Reality)
Outline of this tutorial
• Brief overview of Human-Agent Interaction
• New devices and open (research) issues
• Case studies: natural and ecological interaction in virtual and augmented reality environments
• Visual feedback for proprioception in VR with HMD
• HMDs typically consist of goggles with small monitors mounted in front of each eye and a motion tracking system that computes the position and movements of the user’s head.
• In the past, good quality meant high prices (e.g. military applications)
• Oculus Research is offering competitive funding to advance basic research into a number of areas of perception science that impact the development of virtual reality platforms. The areas of research include self-motion, binocular eye-movements, multisensory perception and biological motion in social interaction.
• We are particularly interested in research that utilizes a combination of computational and psychophysical approaches. One of our goals is to stimulate engagement from the vision science, cognitive science, and related fields on these important topics for virtual reality.
Besides the standard applications of Augmented Reality, several facts should be investigated:
1. How do people perceive the digital contents added to the real world? Is this perception coherent and stable? Which are the consequences of a non-coherent overlapping between virtual and real contents?
2. How do people interact with Augmented Reality? Is a Natural Interaction with such systems possible?
3. What happens if we try to use Augmented Reality not in the peripersonal space but in a wider area? Is it feasible to walk into Augmented Reality (maintaining a coherent perception)? Which Computer Vision aspects become crucial to address it?
4. Does immersive Augmented Reality cause the fatigue and cybersickness issues typical of VR headsets?
• New off-the-shelf devices and virtual/augmented reality technologies are available.
• They simplify the interface between the systems and the users, who can interact with the virtual environment through different modalities (movements of the body).
• The aim of current HCIs through mixed reality is:
• To create environments and situations reasonably similar to those of the real world (ecological validity)
• To make both qualitative and quantitative improvements in the daily activities of healthy and impaired people
In current systems, there are some attempts to address this issue.
Solution proposed by my group: to acquire the 3D position of the user by using a Kinect and a Leap Motion, and to insert the 3D avatar inside the VR in real time.
• The aim of the work is to create a virtual environment in which the user can visually perceive his/her own virtual body, and can interact with the virtual objects by using a body that reproduces his/her movements.
• In particular, the focus is on having a realistic, ecological and natural interaction through the whole body and, in addition, a fine interaction with the hands.
M. Chessa, L. Caroggio, H. Huang and F. Solari (2016) Insert your own body in the Oculus Rift to improve proprioception. International Conference on Computer Vision Theory and Applications, VISAPP 2016, 27th-29th February 2016, Rome.
Case study 1: Proprioception in VR with HMD
Body tracking
• The data stream for tracking the user is acquired by a Microsoft Kinect;
• Synchronization of depth and color images with a resolution of 640×480 pixels at a frame rate of 30 Hz;
• The images are analyzed by the Kinect MS-SDK Assets, available in the Unity Asset Store, which use the Kinect Runtime provided by Microsoft to make the tracking information suitable to move an avatar;
• The asset gives information on the tracking of 20 joints of the user’s body, which are then aligned with those acquired by the Leap Motion.
Case study 1: Proprioception in VR with HMD
Hand tracking
• The accurate tracking of the user’s hands and fingers is performed by the Leap Motion;
• It is composed of two wide-angle infrared CCD cameras with an acquisition frequency of about 120 Hz, and the detection field is approximately a hemisphere of 0.5 m radius above the sensor;
• The accuracy for the position detection of the fingertips is approximately 0.01 mm.
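As a rough illustration of the stated detection field (an approximately hemispherical volume of 0.5 m radius above the sensor), one can check whether a tracked point falls inside it. The coordinate convention here (sensor at the origin, y pointing up) is an assumption, not the device's documented frame:

```python
import numpy as np

def in_leap_field(point, radius=0.5):
    """Check whether a 3D point (meters; sensor at the origin, y up --
    an assumed convention) lies inside an approximately hemispherical
    detection field of `radius` meters above the sensor."""
    p = np.asarray(point, dtype=float)
    return bool(p[1] >= 0.0 and np.linalg.norm(p) <= radius)
```

Such a predicate is useful, for instance, to decide when to trust the Leap Motion hand data over the Kinect's coarser estimate.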
Case study 1: Proprioception in VR with HMD
Calibration and registration
• To align the data of all sensors, the calibration phase is performed in two steps:
1. Rigid transformation between common points acquired by the Kinect and the Leap Motion, computed just once.
2. Live corrections to overcome the residual offset present between the Kinect and the Leap Motion tracking, and management of the head position over time, computed every frame.
To compute the rigid transformation, we use the least-squares rigid motion using SVD technique.
[Figure: the user’s own avatar inside the VR environment, after the calibration and the rigid transformation]
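The least-squares rigid motion via SVD mentioned above is, in essence, the classic Kabsch procedure. A minimal sketch, assuming corresponding 3D points are already paired (the array layout is illustrative, not the actual Kinect/Leap data format):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid motion (rotation R, translation t) mapping
    src points onto dst points, via SVD (Kabsch algorithm).
    src, dst: (N, 3) arrays of corresponding 3D points."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection (det = -1), which is not a rotation
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Here `src` would hold, e.g., joint positions in the Kinect frame and `dst` the same physical points seen by the Leap Motion; `R` and `t` then map one frame into the other.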
Case study 1: Proprioception in VR with HMD
• The result of the rigid transformation, performed on a single set of samples, often leads to a visually incorrect alignment; this is due to multiple factors:
• possible coplanar structure of the points: hands, wrists and elbows almost on the same plane;
• noise that affects the Kinect joints: in particular hands and wrists;
• frequent mismatch between the centers of the hands, due to the noise and to the worse accuracy of the Kinect with respect to the Leap Motion.
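A per-frame live correction for this residual offset could take a form like the following: snap the (noisier) Kinect hand joint onto the Leap Motion palm position and propagate a damped fraction of the offset up the arm so the avatar stays visually continuous. The joint names, weights and data layout are hypothetical, not the paper's actual implementation:

```python
import numpy as np

def correct_arm(kinect_joints, leap_palm, hand_key="hand_right"):
    """Per-frame correction of the residual Kinect/Leap offset.
    kinect_joints: dict of joint name -> (3,) position, already in the
    common frame given by the one-off rigid calibration.
    leap_palm: (3,) Leap Motion palm center in the same frame."""
    offset = leap_palm - kinect_joints[hand_key]
    corrected = dict(kinect_joints)
    # Full offset on the hand; damped fractions on wrist and elbow
    # so the arm chain bends smoothly instead of tearing apart.
    corrected[hand_key] = kinect_joints[hand_key] + offset
    for joint, weight in (("wrist_right", 0.6), ("elbow_right", 0.3)):
        if joint in corrected:
            corrected[joint] = corrected[joint] + weight * offset
    return corrected
```

Because this runs every frame, it also absorbs slow drift between the two sensors that a one-off calibration cannot capture.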
Case study 2: To act in a natural way inside HMDs, VR should be sufficiently immersive and should not cause cybersickness
Case study 2: Immersivity and cybersickness in VR
• Given the widespread diffusion of new and inexpensive devices originally designed for games and entertainment, it is of general interest to test whether these devices can be effectively used in contexts different from the ones for which they were designed.
• The aim of this work is to investigate whether the Oculus Rift HMD can generate a perceptual experience similar to that experienced in the real world.
M. Chessa, G. Maiello, A. Borsari, P.J. Bex (2016) The Perceptual Quality of the Oculus Rift for Immersive Virtual Reality. Human-Computer Interaction, pp. 1-32.
Case study 2: Immersivity and cybersickness in VR
• Immersivity. Is the Oculus Rift able to make the user feel as if he or she is in a real-world scenario? Is it possible to elicit in the user the sensation of presence via the virtual stimuli rendered by the device?
• Cybersickness. Does VR experienced through the Oculus Rift induce physical discomfort to the user?
• The heart rate of human observers increased during the exposure to virtual scenarios experienced via the Oculus Rift. This result is consistent with previous literature that links physiological activation with levels of immersion [Gorini et al., 2011].
• The self-reported answers to a specifically devised immersivity questionnaire show that the majority of participants felt the experience was immersive and realistic.
• We observed a significant correlation between self-reported fear of heights and the sensation of vertigo experienced in one of two virtual scenarios involving heights.
• Observers were reactive to virtual objects placed in their path.
• The Oculus Rift did not induce simulator sickness symptoms when observers were viewing the train scenario.
• Compared to other VR systems, namely a wide-screen 3DTV and a Google Cardboard, the Oculus Rift elicited a greater sensation of immersion and similar levels of physiological activation.
• The preliminary investigation of the Oculus Rift HMD described in this article suggests that the Oculus has great potential for employment in an array of basic research [Kim et al., 2015] and clinical applications [Hoffman et al., 2014].
Case study 4: How to improve the natural perception of virtual images
Case study 4: Natural perception of VR stimuli
• In natural viewing, humans continuously vary accommodation and eye position to bring into focus on the fovea an image of what is currently being fixated.
• All objects in the visual field that are at the accommodative distance will, to a first approximation, form sharp images on the retinae.
• Images of objects that are closer or farther than the accommodative distance will instead be out of focus.
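The relation above can be made concrete: to a first approximation, the defocus of an object is the difference between its distance and the accommodative distance, each expressed in diopters (1/m). A minimal illustration:

```python
def defocus_diopters(accommodation_dist_m, object_dist_m):
    """First-order retinal defocus: difference, in diopters (1/m),
    between the accommodative distance and the object's distance.
    Objects at the accommodative distance yield 0 D (in focus)."""
    return abs(1.0 / accommodation_dist_m - 1.0 / object_dist_m)
```

For example, when fixating at 0.5 m, an object at 2 m is defocused by |2.0 - 0.5| = 1.5 D; note that, in diopters, near objects defocus much faster than far ones.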
G. Maiello, M. Chessa, F. Solari, P. Bex. Simulated disparity and peripheral blur interact during binocular fusion. Journal of Vision, 14(8):13, July 17, 2014.
G. Maiello, M. Chessa, F. Solari, P. Bex. Stereoscopic fusion with gaze-contingent blur. ECVP, Bremen, Germany, August 2013.
Case study 4: Natural perception of VR stimuli
• We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes.
• Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers.
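One way to read the description above in code: per pixel, compute the dioptric difference between the scene depth and the depth currently fixated by the observer, and map it to a Gaussian-blur sigma. The linear sigma-per-diopter scaling is an illustrative assumption; an actual display would calibrate this mapping against the optics of the eye:

```python
import numpy as np

def blur_map(depth_m, fixation_depth_m, sigma_per_diopter=2.0):
    """Gaze-contingent blur map: per-pixel Gaussian-blur sigma (pixels)
    proportional to the dioptric difference between each pixel's depth
    and the depth currently fixated by the observer.
    depth_m: (H, W) array of scene depths in meters."""
    defocus = np.abs(1.0 / depth_m - 1.0 / fixation_depth_m)  # diopters
    return sigma_per_diopter * defocus
```

The renderer would then blur each pixel (or depth layer) with its sigma, updating the map whenever the eye tracker reports a new fixation depth.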
• Natural scenes contain multiple sources of depth information, e.g. binocular disparity, perspective, and blur (i.e. when the eyes are focused at a given distance, objects at other distances will be blurred on the retina):
• We examine depth perception in real images (light field camera, www.lytro.com) with natural variation in perspective, blur and binocular disparity.
• We examine how the time-course of binocular fusion depends on depth cues.
M. Chessa, G. Matafu’, S. Susini, F. Solari (2016) An experimental setup for natural interaction in a collaborative virtual environment. 13th European Conference on Visual Media Production (CVMP16), 12-13th December 2016, London.
Case study 5: Natural interaction in AR
• The Leap Motion has (some) problems
• To improve its stability.
• To explore other non-invasive and low-cost solutions (based on Computer Vision) to track the hand of the user.
• Comparison with other (non-natural?) solutions: which is better, natural interaction or stability?
• To improve the analysis
• More complex tasks
• More users
• To analyse the effects of feedback not only by looking at completion times and reaching points, but by analysing the overall behaviour of users (see tomorrow’s poster at the NIVAR17 workshop).