feduc-05-00038 April 19, 2020 Time: 8:50 # 1

METHODS
published: 21 April 2020

doi: 10.3389/feduc.2020.00038

Edited by: Tom Crick, Swansea University, United Kingdom

Reviewed by: Manuela Chessa, University of Genoa, Italy; Mary Roduta Roberts, University of Alberta, Canada

*Correspondence: John F. LaDisa Jr., [email protected]

Specialty section: This article was submitted to Digital Education, a section of the journal Frontiers in Education

Received: 22 November 2019; Accepted: 26 March 2020; Published: 21 April 2020

Citation: LaDisa JF Jr and Larkee CE (2020) The MARquette Visualization Lab (MARVL): An Immersive Virtual Environment for Research, Teaching and Collaboration. Front. Educ. 5:38. doi: 10.3389/feduc.2020.00038

The MARquette Visualization Lab (MARVL): An Immersive Virtual Environment for Research, Teaching and Collaboration

John F. LaDisa Jr.1,2* and Christopher E. Larkee2

1 Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States; 2 Opus College of Engineering, Marquette University, Milwaukee, WI, United States

The MARquette Visualization Lab (MARVL) is a large-scale immersive virtual environment for research, teaching, collaboration and outreach at our mid-sized liberal arts university. MARVL consists of multiple display surfaces including an extra wide front wall and floor, and two side walls. This resource includes stereoscopic viewing, motion tracking and space for a large audience. MARVL’s versatile configuration facilitates viewing of content by 30 people, while also projecting on the entire width of the floor. This feature uniquely facilitates comparative or separate content visible simultaneously via “split mode” operation (two 3-sided environments), as well as detailed motion for applications such as gait analysis and performing arts. Since establishing the lab, its members have received numerous queries and requests pertaining to how system attributes and applications were determined, suggesting these and related decisions remain a challenge nearly three decades since the first CAVE was constructed. This paper provides an overview of MARVL including the processes used in identifying a diverse group of cross-campus users, understanding their collective vision for potential use, and synthesizing this information to create the resource described above. The subsequent design, qualitative and quantitative approaches to vendor selection, and software decisions are then discussed. Steps implemented for dealing with simulator sickness and latency are presented along with current approaches being implemented for project development with end users. Finally, we present results from the use of MARVL by several end users identified in the early planning stage, and recent upgrades to the system.

Keywords: immersive visualization, virtual reality, augmented reality, mixed reality, simulation, student-centered learning

INTRODUCTION

Research suggests immersive experiences that allow for motion in a realistic environment promote active learning, critical thinking, informed decision making and improved performance (Patel et al., 2006). For example, a diver is more likely to recall specific instruction when it is learned and practiced in water rather than on land (Baddeley, 1993). This was the motivation to establish the

Frontiers in Education | www.frontiersin.org 1 April 2020 | Volume 5 | Article 38


LaDisa and Larkee Establishing an IVE for Research, Teaching and Collaboration

MARquette Visualization Lab (MARVL), a facility designed to be used by interested members of our community to (1) create technologically advantageous visualization content, (2) demonstrate how visualization technology can be used in learning, research, and industry, and (3) ultimately teach the theory rooted in this technology.

Since establishing MARVL, its members have received numerous requests pertaining to how system attributes and applications were determined. The allure of immersive systems, especially with a resurgence of virtual and augmented reality devices, is prompting interest from potential end users across disciplines, some without prior experience of important hardware and software considerations. In the current work we provide an overview of MARVL, including the processes used in identifying a diverse group of users, understanding their collective vision for potential use, and synthesizing this information to create a unique resource that, given our institution’s particularly strong background in education, differentiates it among immersive facilities locally. The subsequent design, qualitative and quantitative approaches to vendor selection, and software decisions are then discussed. We then present lessons learned during early operation of our large-scale immersive visualization environment (IVE) and results of its use by several end users identified in the early planning stages. Finally, we discuss ongoing costs and recent upgrades implemented within MARVL.

MATERIALS AND METHODS

During the planning process, members of the Marquette University community generally listed in Table 1 were identified from responses to an email sent to department chairs throughout the university. Meetings were then held over several months with interested staff and faculty members of all academic ranks regarding their potential use of a visualization facility. Some of these individuals were intrigued but did not have a specific application in mind. However, most potential end users shared extensive visions with specific objectives geared toward research and teaching, as well as industry collaboration and outreach. Perhaps not surprisingly for our educational institution, several potential end users envisioned using the forthcoming facility in their classes to better help students understand and realize the complexity within our systems or scenarios. While discussing the vision of each end user, members of MARVL were particularly careful to help potential end users, when needed, to identify unique ways of achieving a proposed vision in a manner that takes advantage of stereoscopic viewing and could not be conducted using a desktop computer, large monitor or standard projection system.

Several potential large-scale IVEs were discussed upon learning of each end user’s application and intended use. Approaches discussed generally included a projection-based cylindrical or dome structure, a 4–6 walled CAVE-type (CAVE Automatic Virtual Environment) system (Plato, 1974; Cruz-Neira et al., 1992), or a large-scale panel-based system with narrow bezels (Febretti et al., 2013). Table 1 indicates that several of our end users focused on applications involving rooms as

structures that would be stationary with right angles (e.g., civil engineering, nursing, theater). While a curved system would not preclude the viewing of such structures, a CAVE intrinsically lends itself to these applications without inhibiting use by other applications. Although exceptional systems have recently been created using panels with ultra-small bezels that are attractive for a number of reasons, our end users were unanimous in their dislike for this approach. Most of these end users were too distracted by the bezels despite their modest dimensions. End users also identified collaboration via a shared visualization experience as paramount, which dampened enthusiasm for a series of tethered head-mounted displays in communication with one another. This feedback by potential users of MARVL suggested that a CAVE-type environment would be beneficial and most favorable to the greatest number of users. CAVE systems consist of between three and six walls of a room onto which a specific environment is projected and adapted through the movements of one or more users within it. Five vendors capable of providing CAVE-type solutions were contacted regarding the attributes for the MARVL system identified by its potential users, as discussed in greater detail below.

Components of the Visualization System
Visualization systems generally contain four components:

(1) Structure, projectors and screens - structural elements such as modular framing, vertical and floor projection surfaces, glasses with emitters for creating a 3D experience, stereoscopic 3D projectors and cabling

(2) Image generators (i.e., computers) - a series of computers containing high-end, but not necessarily specialized, components and synchronization electronics used to control content viewed in the large-scale immersive environment

(3) Visualization software - commercial or open-source software, sometimes specialized for a particular application, that facilitates viewing of content in stereoscopic 3D

(4) Tracking system - cameras and associated interaction devices that allow the system to know the user’s precise position in space, and adapt the rendered content being viewed based on the user’s perspective and actions

Desired Attributes Expressed by End Users
As alluded to above, potential users of MARVL from Engineering, Arts and Sciences, Health Sciences and Nursing made it clear that a CAVE-type environment would be beneficial to the greatest number of users. Moreover, responses during the planning stage suggested a system of limited size could actually preclude investigators with more established visions from using the facility (e.g., performing arts, gait analysis). There are flexible systems available from several vendors that feature a reconfigurable visual environment with the ability to move or open screens on the side walls of a CAVE. This option can provide a large front display configuration that was desirable to many potential users at our institution. The benefits afforded by this option may be offset by


TABLE 1 | Potential end users identified during the planning stages of MARVL.

End user’s department | College | Application
Biomedical Engineering | Engineering | Linking neural activity to function and behavior
Construction Engineering and Management | Engineering | Virtual walkthroughs of buildings and research productivity improvement
Civil and Environmental Engineering | Engineering | 3D imaging of civil infrastructure, naval training, historical sites, etc.
Biological Sciences | Arts and Sciences | Co-localization of structure and function
Biomedical Engineering | Engineering | Next generation gait analysis
Mathematics, Statistics and Computer Science | Arts and Sciences | Discrete event simulation, assembly, clearance and tolerance stacking
Clinical Laboratory Science | Health Sciences | Extracting additional data from flow cytometry results using visual analytics
Civil and Environmental Engineering | Engineering | Project scheduling and cost estimation
N/A | Nursing | Improved nurse training using realistic clinical environments
Biomedical Engineering | Engineering | Correlating local blood flow alterations with markers of disease
Performing Arts | Communication | Optimizing stage lighting and a priori review of sets by directors
Strategic Communication | Communication | Electronic media, design and user experience
Physical Therapy | Health Sciences | Viewing of medical imaging data
Biological Sciences | Arts and Sciences | Protein structure, electron density maps
Biological Sciences | Arts and Sciences | Structure and motion of cilia and flagella

The applications in bold have gone on to be implemented in MARVL since its creation.

alignment issues and the chance for failure of mechanical parts inherent in an articulating structure. Anecdotal feedback from centers that had employed this approach indicated that changes to the configuration were infrequent, for many of these reasons. Members of MARVL therefore decided the system would consist of an extra wide front wall and floor, with standard-sized side walls. These attributes were selected for a number of important reasons:

1. An IVE with an extra wide front wall facilitates viewing of content by a large audience, while also projecting on the entire width of the floor. In contrast, a flexible IVE in the open position only has a portion of the floor projected.

2. An extra wide IVE also permits rendering of multiple environments. For example, a comparison between two building attributes could be rendered side by side to evaluate preferences, or a realistic Intensive Care Unit containing beds for two simulated patient scenarios with a curtain between them could be rendered with application to nursing education.

3. An extra wide IVE further permits detailed motion within the environment for applications such as gait analysis and/or performing arts.

4. In contrast to a standard cubic IVE, there are relatively few CAVEs with an extra wide front wall and floor (Kageyama and Tomiyama, 2016), therefore differentiating the MARVL facility from other IVEs locally and around the country.

5. An extra wide IVE avoids potential issues associated with keeping articulating parts aligned.

Proposed Vendor Solutions and Onsite Demonstrations
After contacting representatives from five vendors, faculty members in the Opus College of Engineering, for which the IVE was to be purchased and housed, sought bids from two

vendors willing to offer quotes for a system with attributes discussed in the previous section (denoted here as Vendor A and Vendor B to limit commercialism). While every attempt was made to obtain similar quotes, differences did exist due to vendor preferences, technical capabilities and component availability. Figure 1 provides a schematic illustration of systems proposed by the two vendors that address their interpretation of design requirements articulated by the collection of end users. Table 2 provides an at-a-glance comparison of the initial quotes from each vendor.

Given the similarities in quotes between vendors, each vendor was asked to offer a demonstration (i.e., demo) at our institution. Potential end users throughout campus were invited by email to attend these demos, which were scheduled at equivalent times on back-to-back days. Attendees were noted and a questionnaire was then emailed directly to each potential user to obtain his or her impressions from each demo. Care was taken to keep attributes consistent between vendors during demos. Each vendor was provided with electronic files of the same content for demonstration before arriving on campus. Vendors arrived one day before their demo to set up associated equipment and troubleshoot potential issues. Demos did not include the full systems described in the accompanying quotes from each vendor, since each system is custom and can only be fabricated once ordered. However, the demos did include the primary components that impact perceived image quality. These components primarily include the projectors and screen material specified by the vendor, which were set up in an interior room with no windows to control ambient light. The demos from each vendor therefore used two projectors of their specified model that were partially blended on equivalently-sized screens, as shown in Figure 2. The screen used during the demo also matched the screen material specified by both vendors.

In addition to qualitative feedback, contrast and uniformity were quantified across the projected surface demonstrated by each vendor. Specifically, a professional photographer employed


FIGURE 1 | Schematic illustrations of systems proposed by two vendors that address design requirements articulated by the collection of end users. The figure illustrates the differences in vendor designs by examining the projector arrangement on the left side of the visualization space. The right side of the spaces proposed is a mirror image of those shown on the left side. Both designs use projectors that project a 16:10 widescreen image, but Vendor A initially suggested using more projectors and arranging them vertically, which makes the projected area taller but less deep.

by our institution obtained digital images of a checkerboard test pattern that was displayed and photographed before the start of each vendor demo. Images were taken after vendors had acknowledged that they optimized the combination of screen and projectors to the best of their ability within the allotted time. The time allotted for setup was consistent between vendors. Care was taken to ensure that the exposure settings on the camera were consistent when obtaining photographs. The test image used for the demos is shown in Figure 3 (top) along with the intensity profile generated from a horizontal query of 8-bit grayscale values through the indicated portion of the image (bottom) using the Plot Profile function within ImageJ1. This represents the ideal (i.e., best case) output from photographs of this image as projected during each demo. The photographs obtained from each demo were similarly analyzed offline with ImageJ to quantify contrast and uniformity between white and black levels across the projected surface demonstrated by each vendor.
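The same offline analysis can be approximated outside ImageJ. The sketch below is our illustration only (the function names and the synthetic profile are ours, not the authors'): it takes an 8-bit horizontal intensity query through a checkerboard pattern and reports Michelson contrast between the white and black plateaus, plus a simple white-level uniformity ratio.

```python
# Minimal sketch of the contrast/uniformity quantification described above,
# assuming the profile is a list of 8-bit grayscale samples (0-255).

def michelson_contrast(profile, threshold=128):
    """(W - B) / (W + B) for mean white and black plateau levels."""
    whites = [v for v in profile if v >= threshold]
    blacks = [v for v in profile if v < threshold]
    w = sum(whites) / len(whites)
    b = sum(blacks) / len(blacks)
    return (w - b) / (w + b)

def uniformity(profile, threshold=128):
    """Dimmest-to-brightest ratio of white samples; 1.0 = perfectly uniform."""
    whites = [v for v in profile if v >= threshold]
    return min(whites) / max(whites)

# Ideal output: the profile alternates cleanly between 0 and 255.
ideal = ([255] * 50 + [0] * 50) * 4
print(michelson_contrast(ideal))  # 1.0
print(uniformity(ideal))          # 1.0
```

Photographed projections would show elevated black levels and roll-off near blend regions, pulling both metrics below the ideal values shown here.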

Image Generators
As mentioned above, image generators used to display content in an IVE consist of high-end, but not necessarily specialized,

1 https://imagej.nih.gov/ij/

components. Quotations were therefore obtained from two preferred vendors of our institution. This approach minimized costs and additional markup that would be passed along to our institution if image generators were obtained from either vendor. Both vendors accommodated our request to keep costs down via this approach. There are several ways the image generators could be configured. The configuration discussed in the results section was recommended by technical staff within our institution to deliver solid performance while also managing cost.

Visualization Software
The software expected to be used within the large-scale MARVL IVE based on end users identified during planning is listed in Table 3, along with the application, associated details and approximate cost at the time of system construction. Where possible, open source and trial licenses (coupled with software vendor demos) were to be implemented to keep costs down and ensure we purchased software solutions most appropriate for a wide range of users. Open source solutions were recommended based on extensive discussions with leading visualization researchers around the country and focused on those with a large base of users and available documentation.


TABLE 2 | At-a-glance comparison of vendor system specifications from their initial quotations.

Item | Attribute | Vendor A | Vendor B | Differences
1 | Size of viewable surface (width × height × depth) | 18′5 1/4′′ × 10′0′′ × 6′3′′ | 20′0′′ × 8′0′′ × 8′0′′ | Vendor A: 2 feet higher; Vendor B: 1.5 feet wider and ∼2 feet deeper
2 | Pixels of viewable surface (width × height × depth) | 3,556 × 1,920 × 1,200 | 3,000 × 1,200 × 1,200 | Vendor A: 556 (19%) more front wall pixels and 720 (60%) more height pixels
3 | Footprint (width × height × depth) | 40′2 1/4′′ × 12′9 3/4′′ × 17′5 7/8′′ | 36′9 3/16′′ × 11′9 1/16′′ × ∼17′5 7/8′′ | Vendor A: footprint is ∼3 feet wider
4 | Projector specifications | WUXGA 3-chip DLP, 6,300 lumens | WUXGA 3-chip DLP, 7,000 lumens | Vendor B: 700 lumens brighter; contrast was equivalent
5 | Number of projectors | 8: 4 front, 2 floor, 1/side | 6: 2 front, 2 floor, 1/side | Vendor A: 2 extra front projectors
6 | Screen material | Stewart Filmscreen AeroView 70 | Stewart Filmscreen AeroView 70 | None
7 | Tracking system | 6-camera ART system with controller and interaction device | 6-camera ART system with controller and interaction device | None
8 | Standard warranty | Projectors: 3 years parts and labor; 3rd-party equipment: 1 year return to factory | 1 year warranty with parts coverage | Vendor A: additional 2 years on projectors
9 | Approx. cost | $670,000 | $605,000 | Vendor B: $65,000 less
10 | Approx. cost with equal number of projectors (i.e., 8) | $670,000 | $675,000 | ∼$5,000
11 | Optional preventative maintenance | Customizable upon request | Customizable upon request | None

Differences in viewable surface size and pixels of viewable surface are a function of the number of projectors specified and their orientation. For example, the system initially proposed by Vendor A used four partially-blended projectors in portrait mode on the front wall to increase resolution on the primary viewing surface. The side walls are then each generated by a single projector in portrait mode to maintain a matching pixel resolution. The system initially proposed by Vendor B used two partially-blended projectors in landscape mode on the front wall. The side walls are also each generated by a single projector, but in landscape mode, and a portion of the available pixels in the depth dimension are not used. Attributes for image generators and software are not listed since these were to be obtained through existing agreements with preferred university vendors.
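The table note implies a specific blend overlap on Vendor A's front wall. As a rough consistency check (our reconstruction; the per-seam overlap is inferred from the quoted pixel counts, not stated by either vendor), four portrait-mode WUXGA projectors, each 1,200 px wide, yielding 3,556 unique front-wall pixels implies roughly 415 px of overlap at each of the three seams:

```python
def blended_width(n_projectors, px_each, overlap_px):
    """Unique pixels across n partially-blended projectors, given overlap per seam."""
    return n_projectors * px_each - (n_projectors - 1) * overlap_px

# Vendor A front wall: 4 portrait WUXGA projectors (1,200 px wide each),
# quoted at 3,556 total width pixels in Table 2.
overlap = (4 * 1200 - 3556) / (4 - 1)   # inferred overlap, ~415 px per seam
print(round(overlap))                   # 415
print(blended_width(4, 1200, overlap))  # 3556.0
```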

RESULTS

System Selection
The test patterns generated during the demos of each vendor, as digitally captured, are shown in Figure 4. Quantification locations (top, middle, and bottom) correspond to the lines in Figure 4 located at approximately 10, 50, and 90% of the viewable height, respectively, and illustrate the uniformity and contrast levels across the projected surfaces offered by each vendor during their demonstration. The results of this quantification indicate that Vendor A provided a combination of screen material and blended projection of the test pattern that was superior to that offered by Vendor B in the instances tested at our institution.

FIGURE 2 | Specification for vendor demonstrations using key equipment impacting image quality.

These benefits of more seamless blending and uniformity also extended to content provided by MARVL that was shown during Vendor A’s demo. Feedback from potential users indicated, almost unanimously, that the Vendor A team was more prepared, since they had configured 3D content sent by end users for viewing and were more knowledgeable of the details in their quoted solution when asked related questions. In response to feedback from potential end users around the time of these demonstrations, Vendor B provided a revised quote for a system with resolution similar to that provided by Vendor A. Similarly, end users liked the increased depth of the solution offered by Vendor B, which prompted a revised quote from Vendor A that included one additional projector per side of the proposed IVE.

Based on feedback obtained from potential users following on-site demonstrations by each vendor, the quantitative metrics mentioned above, consideration of important differences between system attributes such as resolution and size, and consideration of system price, Vendor A was contracted to install the structure, projectors, screens and tracking system for MARVL, consistent with the details provided in their revised quotation. A rendering of the system as envisioned prior to installation is shown in Figure 5. The time to functionality upon selecting a vendor and generating purchase orders was 34 days, which included installation and completion of punch-list items.

Specifications of Image Generators and Operation
It was determined that content for use within the large-scale IVE provided by Vendor A would be driven by hardware


FIGURE 3 | A test image for the demos is shown (A) along with the intensity profile generated from a horizontal query of values through the indicated portion of the image (B). This represents the ideal (i.e., best case) output from photographs of this image as projected during each demo.

TABLE 3 | Software solutions identified for potential users of MARVL.

Application | Software | Commercial or Open Source | Approximate Cost
Viewing molecular structures | Visual Molecular Dynamics (VMD) | Open source | $7,500*
Viewing Visualization Toolkit (VTK) data | ParaView | Open source | $7,500*
Viewing imaging data and finite element results | Avizo with xscreen and xskeleton extensions | Commercial | $18,000
Generating virtual environments and adding texture and realism | Unity with GetReal3D for Unity | Commercial | $20,000
Integrating other commonly used 2D applications into the 3D visualization system | Conduit Core, NX, ESRI ArcGIS, SolidWorks and GetReal3D for Showcase Cluster | Commercial | $39,000
Installation, configuration and training | All of the above | Both | $29,000
TOTAL | | | $121,000

*These software packages are open source, but some vendors charged an implementation fee as listed. The costs above were for the first year of operation, after which time software solutions were to be re-evaluated based on user needs.

consisting of six image generators. This included a primary image generator (Z820 E5-2670 workstation with 1 TB HDD and 32 GB RAM; Hewlett-Packard, Palo Alto, CA, United States) containing a single graphics card (Quadro K5000; Nvidia Corp., Santa Clara, CA, United States), which communicated control to five additional Z820 image generators via a local Ethernet network isolated from the institutional network. Image generators beyond the primary node were configured with two Nvidia Quadro M4000 graphics cards and a single Quadro Sync Interface Board. The graphics cards collectively provide 10 output channels, one

for each of MARVL’s ten projectors (Mirage WU7K-M projectors with Twist; Christie Digital, Cypress, CA, United States). Images rendered by the 10 projectors are warped and blended to cover multiple display surfaces including the extra wide front wall (four projectors) as well as the floor, and two side walls (two projectors each). The result is stereoscopic projection and enhanced depth cues over a viewable dimension of 18′6′′ (front) × 9′3′′ (height) × 9′3′′ (depth). Resolution is ∼4K on the front wall, with a total system resolution of 15.7 megapixels. All image generators were dual-booted, running Xubuntu Linux and


FIGURE 4 | Images of the test pattern generated by the systems of each vendor and captured as digital images by our professional institutional photographer (A,B). The horizontal lines represent spatially-equivalent locations where each photograph was analyzed offline to quantify uniformity and contrast between white and black levels across the projected surface. The three green and red lines are located at approximately 10, 50, and 90% of the viewable height, respectively (A), and the colors correspond to those in the quantification below the images. Quantification of these test patterns (B) illustrates uniformity and contrast between white and black levels across the projected surfaces offered by each vendor during their demonstration.

Microsoft Windows 7 Professional 64-bit. Interaction within thevirtual environment is afforded by a tracking system consisting of6 ARTTRACK2 cameras and two FlyStick2 wireless interactiondevices (Advanced Realtime Tracking; Weilheim, Germany).

The subsequent initial operation of MARVL's large-scale IVE is shown in Figure 6. This figure demonstrates how synchronization signals propagated at the time, and how they impact multiple components within the visualization space (left). Multiple layers of calibration are necessary to align all the projectors used in the space (right). Original plans did not use SLI Mosaic, because it was unstable in previous driver versions. However, on Windows, SLI Mosaic does handle rotation. On Linux, rotation conflicts with the stereoscopic 3D settings, so rotation must be implemented in each application's configuration files.

Software Selections
The potential software costs outlined in Table 3 were intractable with our available budget, particularly when several packages required annual renewal. Fortunately, several other options were gaining prominence around the time our system installation was being completed. Members of MARVL subsequently explored other cost-effective options that appeared robust and could provide the functionality needed by our end users. These options centered around Blender (Blender Foundation; Amsterdam, Netherlands) and the Unity game engine (Unity Technologies, San Francisco, CA, United States). The combination of these programs would become the basis for all projects conducted in MARVL to date. Briefly, Blender is generally used for mesh processing and to prepare models for immersive visualization.

FIGURE 5 | Rendering of the immersive virtual environment selected for MARVL as envisioned prior to installation.

Within Blender, all model objects are set to have a consistent scale and default orientation, and their origin is established in a sensible position near an object's center of gravity. In some cases, the decimate filter within Blender is used to reduce an object's vertex count. After the models are prepared, it is a straightforward procedure to import them into Unity using a typical workflow. An environment is created to house models, the models are positioned in the scene, lighting is established, and complementary features or data are added as needed for a particular application. MiddleVR (Le Kremlin-Bicêtre, France) was added to the Unity project, providing support for displaying the virtual scene across the clustered set of image generators, as well as to provide a user movement system via the ART FlySticks and tracking system. The total cost for this collection of software packages was approximately $25,000 upon establishing the MARVL large-scale IVE, and software renewals have cost approximately $3,000 annually to date.
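The model-preparation step above can be illustrated with a short sketch. The following pure-Python function mirrors what the Blender operations accomplish (origin moved to the centroid, a uniform scale applied); it is illustrative only, does not use the Blender API, and the function name and default target size are our own assumptions.

```python
# Illustrative stand-in for the Blender preparation step described above:
# center an object's origin on its centroid and rescale its longest
# bounding-box edge to a consistent target size before export to Unity.
# (Hypothetical helper; the real step is done interactively in Blender.)

def prepare_mesh(vertices, target_size=1.0):
    """vertices: list of (x, y, z) tuples; returns prepared copies."""
    n = len(vertices)
    # Centroid stands in for Blender's "origin to center of mass".
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in vertices]
    # Longest bounding-box edge sets a uniform scale factor.
    extent = max(
        max(v[i] for v in centered) - min(v[i] for v in centered)
        for i in range(3)
    )
    s = target_size / extent if extent else 1.0
    return [(x * s, y * s, z * s) for x, y, z in centered]
```

Normalizing every asset this way means objects imported into Unity arrive at a predictable size and pivot, which simplifies scene assembly.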

It is worth noting that software solutions were generally less of a consideration for most vendors at the time we were planning our system. Even some of the most prominent CAVE research papers do not spend much time discussing software, although respected groups have subsequently published work pertaining to specific software developed for a given application (Febretti et al., 2013; Nishimoto et al., 2016; Renambot et al., 2016). In contrast, we treat Unity, and our leveraging of Blender as part of this process, as a standard solution. This relates back to our facility being within an educational institution and being able to assist in the content creation and presentation process for a variety of applications, rather than developing a particular software solution that is then to be used by end users within a particular discipline.

FIGURE 6 | Alignment and synchronization of MARVL. This illustrates how synchronization signals propagate, and how they impact multiple components within the visualization space (A). Multiple layers of calibration are necessary to align all the projectors used in the space (B). The original plans did not use SLI Mosaic, because it was unstable in previous driver versions. On Windows, SLI Mosaic handles rotation. On Linux, rotation conflicts with the stereoscopic 3D settings, so rotation must be implemented in each application's configuration files.


As mentioned above, we developed a list of planned applications, and a list of necessary software to match, during the planning stage of our large-scale IVE. The initial version of this facility was configured to dual boot between Xubuntu and Windows, in order to provide the greatest amount of flexibility in software. For the first year of operation, we ran most simulations in ParaView VR and the Blender Game Engine on Linux, and Unity with MiddleVR on Windows. We continued to run experiments and trial versions of other software, but as we gained more development experience, we settled into a more consistent content development pipeline of using Blender and Unity for nearly all applications. As new content challenges arose, such as a new 3D model format, video playback, or other interaction devices, in most cases we were able to integrate them into Unity in order to bring them to our large-scale IVE.

Project Development Process and Decision Points With End Users
With hardware and software selections in place to form a functioning immersive facility, MARVL personnel have settled into a process for projects and decisions made in conjunction with our end users. Although not rigid, MARVL personnel typically ask versions of the following four questions when new projects have been proposed by potential end users.

(1) How does the application addressed by the experience and its content benefit from an immersive approach?

(2) What is the purpose of the immersive experience and its associated content?

(3) What resources and personnel are available to support content creation and delivery?

(4) What measures will be obtained from the immersive experience, and can they be evaluated statistically in potential support of the added effort spent on immersive content creation and delivery?

To date, MARVL personnel have not made the decision about which projects move forward within our facility. If questions 1 and 2 above have tractable answers, and the project has a champion, then it has historically moved forward organically under its own momentum. Given our focus on education within our institution and college, applications favoring educational objectives have been a priority. Those projects with defined outcomes and measures that could result in external funding or manuscript submission have similarly moved forward frequently, as efforts on such projects have the ability to grow MARVL and its user base. Historically, the only projects that we have strongly suggested not progress have been those desiring to recreate physical spaces, in their current or near-current form, that can reasonably be visited near campus, and content that would not have distinct benefits from immersive viewing upon creation. Currently, we do not charge for educational projects since our content development personnel are partially supported by the college. In short, we operate as a service organization for the college and university, while also having investigator-driven research goals that are now starting to be realized through grants and contracts.

Data Collection
Most projects to date have used existing data. For example, our computational fluid dynamics content discussed below uses converged simulation results that are viewed in new ways, including comparatively between groups of experiments or with complementary data not often viewed when looking at CFD results using conventional approaches. In most cases, data are not generated during an immersive viewing session within our large-scale IVE. Although the ARTTRACK camera system is registering the location of the FlyStick within the tracked space, this information is streamed and not stored. When applications have required storing of associated data, separate data acquisition systems have been brought into the immersive space for that purpose, and results have been stored either remotely or on a dedicated share of our network attached storage (NAS) drive, depending on end user preference. Even the performance and visual arts work featured below is based on an existing framework of materials. MARVL personnel do not necessarily have a preference for the use of data-driven content relative to free 3D sculpting (for example) that would not be based on data. This outcome has simply been a byproduct of the visions expressed by our end users to date. The data-driven experiences to date, together with the background of current MARVL personnel in film, animation and graphic design, have also organically led to our focus on a high degree of realism within the content that is created.
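The streamed-not-stored distinction above can be sketched in a few lines. This is a hypothetical illustration (all class and method names are ours, not part of the ART or VRPN APIs): tracker samples always drive the per-frame view update, and are retained only when a study explicitly enables recording.

```python
# Hypothetical sketch of the streamed-not-stored behavior described above:
# tracker samples always drive the per-frame view update, and are kept
# only when a project explicitly turns recording on.

class TrackerStream:
    def __init__(self, record=False):
        self.record = record
        self.samples = []            # populated only when recording

    def on_sample(self, position, update_view):
        update_view(position)        # always streamed to the renderer
        if self.record:              # stored only on explicit request
            self.samples.append(position)
```

By default nothing is retained; a separate acquisition path is enabled only for the projects that need stored data.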

Simulator Sickness
During the installation and calibration of MARVL's large-scale IVE, enabling head-tracking was a major milestone required to convincingly immerse users within the space so they would temporarily forget about the boundaries of the screens and their current location in the room. However, our early experiences using head-tracking with classes of students quickly indicated that this hallmark of many immersive systems (i.e., head-tracking) was not well-received by our audience. When discussing this issue with other immersive facilities, we were reassured that issues pertaining to simulator sickness were much less of a concern with large-scale IVEs than with head-mounted displays because the users' vision was not fully dominated by the display. However, upon opening MARVL to larger audiences, only a few users in the room (i.e., the person being tracked and those closest to him or her) were experiencing the immersion to the desired degree, while other patrons (i.e., secondary users) had a suboptimal experience for several reasons. Most noticeably, the head motions of the tracked user were visible to the entire audience, which created a high amount of camera motion. This camera motion was especially pronounced as a result of the subtle motions that accompanied tracked users standing or speaking. The secondary users experienced stereopsis issues because their heads were rarely aligned with the stereo axes tracked from the primary user's head. If the secondary users looked at the side projection screens when the primary user was not, the stereo axis would be ∼90° off. Fortunately, we were able to resolve these issues by disabling head-tracking. Instead of attaching the virtual cameras to a position read from the tracking system, we chose a position and orientation representative of a seated height in the center of the room and locked the virtual cameras to that point. The stereo axis of each screen was aligned to the face normal of each screen, which allowed the audience to see a stereo image on all screens, at the expense of a more pronounced screen boundary.
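One plausible reading of this per-screen alignment can be sketched as follows (our own interpretation; all names and numeric values, including the head position and interpupillary distance, are hypothetical): the head is locked at seated height in the room's center, and each screen's stereo eye offset lies along that screen's horizontal direction, derived from its face normal and the world up vector.

```python
# Hypothetical sketch of the fixed-camera stereo setup described above.

def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def eye_positions(head, screen_normal, up=(0.0, 1.0, 0.0), ipd=0.065):
    """Left/right eye positions for one screen, offsetting the fixed
    head position along the screen's horizontal axis (up x normal)."""
    axis = cross(up, screen_normal)
    half = ipd / 2.0
    left = tuple(h - half * a for h, a in zip(head, axis))
    right = tuple(h + half * a for h, a in zip(head, axis))
    return left, right
```

For the front wall (normal along +z) the eyes are offset along x; for a side wall (normal along +x) the offset lies along z, so every screen presents a consistent stereo pair regardless of where each audience member looks.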

Motion-to-photon latency (Solari et al., 2013) became a major concept for measuring head-mounted virtual reality system latency around 2013. Unfortunately, this metric was not discussed during the design phase of our facility. It was assumed that powerful computer hardware and high-quality components would be enough to avoid issues, but we did not have a method for predicting system latency until our system and facility were fully functional. We did not conduct a rigorous timing of the head-tracking latency, but there is a slight noticeable lag when using tracked controllers and head-tracking together. Factors that contributed to our latency were 60 to 120 Hz rate conversion on the projectors, GPU buffering due to external synchronization, VRPN-based system complexity, and MiddleVR's cluster synchronization method. Innovative optimizations like asynchronous time warping and instanced rendering were coming to head-mounted displays, but those technologies were difficult to apply to a clustered configuration such as that of our large-scale IVE. We were able to make minor improvements to our latency issue through software configuration changes, but without head-tracking, we were no longer obligated to move the camera position for every frame, making the camera position appear more stable and stationary, except during deliberate movements. There are also a few design guidelines we now follow in order to reduce eye fatigue and avoid simulator sickness. For example, whenever text or UI elements are used, they are always placed on the convergence plane. When a speaker is in the immersive space, they stand at the edges of the front screen, especially if there is a scene utilizing negative parallax.
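The convergence-plane guideline follows from simple similar-triangles geometry, which the hedged sketch below captures (variable names are ours): a point at viewing distance d, with the screen (convergence plane) at distance c, produces a horizontal parallax that vanishes exactly when d equals c, which is why text and UI elements are rendered there.

```python
# Hedged sketch of the convergence-plane guideline: on-screen horizontal
# separation between left- and right-eye images of a point at distance d,
# with the convergence (screen) plane at distance c from the viewer.
# Negative values correspond to negative parallax (the point appears in
# front of the screen).

def screen_parallax(d, c, ipd=0.065):
    """Screen parallax in meters, from similar triangles: p = ipd*(1 - c/d)."""
    return ipd * (1.0 - c / d)
```

Content on the convergence plane (d == c) yields zero parallax, so both eyes see it at the same screen location; points far beyond the screen approach (but never exceed) the interpupillary distance.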

Example Content
Some examples of content created and visualized through collaboration with the original end users identified during the planning stage are shown in Figure 7 and discussed in more detail below. We begin these examples by describing the processes above implemented for a project aimed at training nursing students using realistic clinical environments.

Augmenting Nurse Training Opportunities Using Realistic Clinical Environments (Figure 7A)
The use of simulation is common in nursing education. Many institutions have dedicated physical areas designed to resemble specific clinical environments, including applicable equipment for nursing students and other healthcare trainees to hone their skills. Unfortunately, often there is not enough space at a given institution to physically replicate all the clinical or home health care environments that students will experience in practice. Moreover, it can be difficult and, in some cases potentially unsafe, to place trainees into a real clinical environment. A large-scale IVE has the potential to mitigate these space and safety issues with virtually constructed environments. Figure 7A shows an example of a program implemented with this in mind.

FIGURE 7 | Example content from end users identified during planning of the visualization space. Written informed consent was obtained from all individuals in the featured content. Applications include augmenting training experiences for nursing students (A), enhanced viewing of civil engineering infrastructure, architecture, and computer aided design models (B), immersive visualization of biomedical computational fluid dynamics results (C), viewing of protein structure and electron density maps (D), performing arts (E), and visual arts (F).

Faculty within the College of Nursing at our institution were familiar with immersive approaches as a result of the nearby Virtual Environments Group (formerly known as the Living Environments Lab) (Brennan et al., 2013a). Several faculty members therefore reached out to MARVL during the planning stage and joined its personnel during visits to other immersive visualization facilities. As alluded to above, the ultimate goal of our nursing collaboration was to extend the number of clinical training facilities beyond what was physically possible within the existing simulation lab in the College of Nursing. For example, the existing simulation facility includes rooms mimicking surgical units, but not an emergency room. As a first step before creating new immersive, virtual spaces for training, our collaborators sought to quantify the ability of nursing students to learn in an immersive facility meant to replicate an existing clinical environment. Although the creation of such content is contrary to the details mentioned in our project development section above (i.e., not to recreate an existing physical space in close proximity), MARVL personnel agreed it was important to ensure students could transfer learned skills in an immersive environment to a similar level as they could in the physical environment before extending the collaboration to additional clinical environments that were not physically available. MARVL personnel therefore visited the physical space (Conover, 2014) to photograph elements to be replicated virtually. MARVL's visualization technologist worked with four animation students from a local technical college to create 3D models of the environment using Blender, 3D Studio Max, and Unity. Members of MARVL will frequently invite students and occasionally work with animation consultants as needed in the content creation process, depending on the scale and objectives of a particular project. Here again, the location of MARVL within an educational setting has led to a tendency to involve undergraduate and graduate students in research and immersive experiences whenever possible.

Upon completion of the virtual space, ∼50 Master's level nursing students from our institution were randomly assigned to learn nursing skills in a physical clinical environment or MARVL's IVE (Conover, 2014). The skills taught focused on acute care assessments, aseptic technique, naso-gastric tube insertion, tracheostomy suctioning, and Foley catheter insertion. During an orientation session, students completed a questionnaire regarding comfort and prior exposure to immersive visualization approaches including virtual reality. Each week of the course thereafter, all students met in the physical clinical environment where they received a demonstration of that week's skill, which was then practiced by half of the students in the immersive version of the physical clinical environment. Students in both the physical and immersive environments were given an equal amount of time to practice and perform a repeat demonstration of the skill that was presented in the combined group teaching session. Student skill performance in both groups was assessed using the same performance rubric. At the end of the course, students who trained in the immersive environment also took their final exam in the physical environment to determine whether these students could transfer their learning from the immersive environment to reality. Students who trained in the immersive environment performed at least as well as students in the other group on all skills tests. It is worth noting that MARVL's end user nursing collaborators felt that interaction with details within the virtual environment would be crucial for the translation of skills. Therefore, rather than using a haptic approach or virtual reality gloves, we opted to recreate the clinical sights and sounds with a dynamic environment and position physical materials that students needed to interact with into the immersive space. This underscores the utility of the extra wide IVE for which this and other applications were designed.

Additional content has subsequently been created for use with nursing students in MARVL using an approach similar to that discussed above. For example, our most recent collaboration used content that was created to immerse students in a simulated study abroad trip to Peru. Photos of the study abroad clinical spaces the students would experience were used to generate content, and representative audio was selected from royalty-free sources. Students navigated the immersive space and interacted with a physical Spanish-speaking actor trained in the clinical experience prior students had encountered. Pre-test and post-test questionnaires were used together with a wireless data acquisition system to temporally quantify changes in respiration, heart rate, galvanic skin response and other measures related to anxiety and preparedness during several simulations prior to the study abroad experience.

3D Viewing of Civil Engineering Infrastructure, Architecture, and Computer Aided Design Models (Figure 7B)
Advances in immersive visualization make it possible to conduct careful study of architectural features and civil engineering infrastructure, including better understanding of sightlines and building information modeling. Whether the objective is pre-visualization prior to erecting a structure (Figure 7B), or reconstruction of building complexities from the distant past that are made accessible for the first time for a new generation, such study is made possible by the procedures implemented within MARVL's large-scale IVE. The interactivity provided by an IVE offers the chance to focus attention on the details, decisions and/or symbolic meaning that may accompany each portion of a project. The basis of the control system used to navigate within structures in MARVL is a three-dimensional optical tracking system affording movement in any direction using the FlyStick2, as discussed in more detail below.

Correlating Local Blood Flow Alterations With Markers of Disease (Figure 7C)
Computational fluid dynamics (CFD) is a method of simulating fluid passing through or around an object using digital computers. This approach is common for several researchers at our institution (Bowman and Park, 2004; Borg, 2005; Borojeni et al., 2017; Ellwein et al., 2017). The use of CFD is a common way of calculating blood flow patterns within lumens of the body in order to better understand a particular disease. These simulations can routinely involve millions of elements for which the governing equations of fluid flow are iteratively solved tens of thousands of times to represent a single second of physical time, such as one heartbeat. Despite modern biomedical CFD simulations producing 4D (i.e., spatial and temporal) results, these results are often viewed at a single point in time, on standard 2D displays, and rarely incorporate associated data. Figure 7C shows an example of how members of MARVL are using immersive visualization as an approach to mitigate these issues and extract more information from CFD results (Quam et al., 2015) by combining them with available complementary data related to a given application.

Protein Structure, Electron Density Maps (Figure 7D)
During the planning stages of our facility, the end user for this application recounted how he was already using 3D visualization and analysis of protein structure in his publications and classes, but that the implementation of such structures was mostly through 2D and prerendered images using desktop monitors. The end user sought to make better use of the 3D models by presenting them in an immersive and interactive way to assist students in understanding complex 3D structures. This approach is common in immersive visualization and virtual reality. The end user's prior workflow relied on the open source program PyMOL2 to convert the protein data bank files into 3D models. PyMOL's options for exporting its generated meshes were limited at the time of implementation, so MARVL provided personnel support to recreate the models using Visual Molecular Dynamics (VMD)3 as an alternative. Upon optimizing the visual representation, the end user worked with MARVL personnel to import mesh data into Blender, and then Unity. The functionality within Unity was programmed to display a series of structures in a linear sequence like an interactive 3D slideshow, as well as display captions, and provide navigation of the space around the structure, but now scaled up to room size within MARVL's large-scale IVE (Figure 7D).

2 https://pymol.org/2/
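The "interactive 3D slideshow" behavior can be sketched as a minimal state machine (the actual implementation is in Unity; the class, slide names, and captions below are hypothetical illustrations):

```python
# Minimal sketch of the linear structure-slideshow behavior described
# above: a sequence of (model, caption) pairs advanced by the presenter,
# clamped at both ends. All names are illustrative.

class StructureSlideshow:
    def __init__(self, slides):
        self.slides = slides   # list of (model_name, caption) pairs
        self.index = 0

    def current(self):
        return self.slides[self.index]

    def next(self):
        # Clamp rather than wrap, matching a linear presentation.
        self.index = min(self.index + 1, len(self.slides) - 1)
        return self.current()

    def prev(self):
        self.index = max(self.index - 1, 0)
        return self.current()
```

In the immersive setting, advancing a slide swaps the room-scale structure and its caption while the audience's navigation of the space around it remains continuous.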

Performing Arts (Figure 7E)
Our collaboration in this area began to yield a more immersive way of visualizing lighting and stage design in hopes of limiting edits following physical construction of sets. The collaboration has also focused on dynamic evolution of sets with a focus on small-scale theater, but with a larger audience than could be accommodated with one or more tethered head-mounted displays. As its first production, MARVL worked with the Department of Digital Media and Performing Arts within the Diederich College of Communication at our institution to present The Zoo Story (Figure 7E). This Edward Albee play about two men in Central Park ran for 6 shows and sold over 200 tickets. The director's vision called for dynamically changing the projected set to coincide with character evolution. This has fostered new ways of achieving digital excellence for productions in the region using an innovative approach to set design that uniquely engages actors and audiences. The Zoo Story was not offered in stereoscopic 3D, but each of the subsequent performances in MARVL included stereoscopic backgrounds with live and virtual actors, as most recently portrayed in William Shakespeare's Macbeth (Hauer, 2017).

Visual Arts (Figure 7F)
Our institution is fortunate to have a dedicated museum on campus. The Haggerty Museum of Art opened in late 1984 as a teaching facility. The goals of the Haggerty Museum of Art are to enhance the undergraduate educational experience by engaging students in various disciplines to think about the world and their subject matter through the lenses of the visual arts. With this in mind, MARVL has transformed work from the Haggerty Museum of Art permanent collection to be experienced in new ways, such as recreating large pieces within era-appropriate representations, and providing access when important pieces may be on loan from the museum. For example, Salvador Dali's Madonna of Port Lligat comes to life in 3D as an interactive piece with togglable annotations about its history and content. Similarly, a 100-foot-long mural painted by Keith Haring for the construction site of the Haggerty Museum of Art can also be viewed, in situ, as it was in 1983 (Figure 7F). These versions allow for accessibility and for minute details obscured in a typical installation to be clearly seen.

3 https://www.ks.uiuc.edu/Research/vmd/

DISCUSSION

The MARquette Visualization Lab has become a valuable campus resource through its first few years of operation. Since its creation, its members have received numerous queries and requests pertaining to how system attributes and applications were selected. Hence, the goal of the current work was to provide an overview of the process used in creating MARVL, including those used in identifying end users, understanding their potential applications, and synthesizing this information into its subsequent design and operation. We described our qualitative and quantitative approaches to vendor selection along with initial and current software decisions. While companies do offer out-of-the-box turnkey solutions, such systems did not meet the diverse needs and variety of applications expressed by our potential end users. Despite the custom setup of our system discussed above, it was (and continues to be) imperative for us to have a set of processes in place that are general enough for most applications that present themselves. It is important to note that the approaches used to gather input from potential end users, decide on a CAVE-type IVE, and assist in vendor selection were conducted with frequent feedback and transparency at our institution. While the processes described seem to have worked well at our institution, it is reasonable to surmise that other institutions may want to consider different approaches in order to best meet the needs of their end users and overall objectives.

Development of Subsequent Resources
With the development of MARVL's large-scale IVE came the need for additional space and resources to be used in the development and testing of content. MARVL's Content Development Lounge (CDL) was established in a room adjacent to the large-scale IVE (Figure 8). The CDL is accessible through a set of double doors, which also permits transport of larger equipment into the IVE as needed. In contrast to a typical classroom or lab, the CDL was designed to be an inviting place for potential contributors to create and share content, hold meetings for ongoing or new projects, and serve as a recording and debriefing site for experiences held in the adjacent large-scale IVE. The CDL includes spacious leather seating, programmable indirect lighting and ergonomic pods with local task lighting. There are several pass-through gang boxes with removable wall plates between the large-scale IVE and CDL to permit communication between the two locations. The CDL contains high-end workstations with 3D monitors and a smaller-scale display system with the same stereoscopic viewing and tracking technology included in the large-scale IVE.

Consistent with a theme of transparency and fostering collaboration that is apparent throughout Engineering Hall where MARVL is located, the entrance of the CDL contains a holographic rear projection system that allows viewers to look at, and through, the screen. The Holo Screen (Da-Lite; Warsaw, IN, United States) displays digital signage of scheduled events and content being featured in current initiatives. The Holo Screen is coupled with a 3D-ready projector and emitter that permit seamless viewing of content among all MARVL's display surfaces using a single type of stereoscopic glasses during featured exhibits and events. In theory, these tiered resources for use in developing immersive content (desktop → single-projector systems → large-scale IVE) are designed to minimize cost and optimize the use of MARVL's key resources.

FIGURE 8 | MARVL's Content Development Lounge (CDL) was established in a room adjacent to the large-scale IVE. The interior of the CDL (A) contains high-end workstations, state-of-the-art display systems and tracking equipment for use in developing digital content, which can also be viewed from its exterior (B).

FIGURE 9 | Schematic illustration of the current hardware setup in MARVL's large-scale immersive environment.

A NAS device is used to share project files and resources among all lab users. Although the institution provides a shared server for this purpose, MARVL required its own file server due to the expected storage and bandwidth requirements. Typically, executable programs are stored on the NAS, and all image generators launch the program simultaneously when the content is loaded. This is referred to by MiddleVR as the server starting a simulation. However, we noticed a significant reduction in launch times after we mirrored the shared folder to each computer's local SSD drive, instead of loading the program through the network. This mirroring is done automatically through an rsync script.

When MARVL opened, the initial NAS device was a Drobo B800FS, but this unit was recently replaced with a Synology RS12919+. The upgrade increased the total available storage from 18 to 62 TB, but the primary motivator for the upgrade was a need for increased network transfer bandwidth. Both devices used a RAID 6 system to prevent data loss from mechanical drive failures, but a series of USB drives also serves as an offline mirrored backup. The backup is run manually, using the Hyper Backup software running directly on the Synology server.

Limitations

One early discussion among end users pertaining to the arrangement of MARVL concerned the use of display surfaces on the floor vs. ceiling. The vision for MARVL involves its use as a differentiating factor in educational experiences and extramural grant applications. With the presence of a 6-sided IVE nearby (Brennan et al., 2013b) and input from our end users, it was determined that a fully immersive (i.e., 6-sided) system would not be pursued. End users also expressed a preference for projecting on the floor rather than the ceiling. However, there were some limitations to overcome with this decision. When walking into a physical structure in real life, most individuals will direct their gaze upward to examine the space. It was therefore important to include this experience. Taking architecture (Figure 7B) as an example, the absence of a projected ceiling within MARVL required implementing additional functionality into its interactivity tool in order to appreciate the higher portions of structures and elements, and to simulate a patron's gaze from the lower locations. A deliberate choice was therefore made not to implement a collision system, so that the virtual camera used in MARVL would be completely uninhibited. This decision facilitates exploration anywhere within created or reconstructed content, including below virtual floors and through walls. The movement of a virtual camera within structural environments is therefore controlled by a script that rotates the view from a conventional horizontal position to a vertical one directed toward the top of a structure. While in this rotated view, movements for further exploration of the structure remain enabled. More specifically, the current implementation used with civil engineering, architecture and related structures within MARVL simply uses a button press to toggle between forward-, upward- or downward-facing gazes. Additional camera control implemented into the interaction device gives end users control of the virtual camera's height. For example, the thumb control on the FlyStick2 interactivity tool can be tapped in the up or down directions to instantly transport patrons to the various levels of the structure. This represents one approach that worked well for our facility, and others are likely available.
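Although the actual implementation lives in a Unity/MiddleVR C# script, the button and thumb-control behavior described above reduces to a small state machine. The Python sketch below is only a language-neutral illustration; the gaze cycle order and pitch angles follow the text, while the class name and level heights are assumptions, not the lab's actual values.

```python
from dataclasses import dataclass

# Gaze states toggled by the FlyStick2 button press, and the camera
# pitch (degrees) assumed for each state.
GAZE_CYCLE = {"forward": "upward", "upward": "downward", "downward": "forward"}
GAZE_PITCH = {"forward": 0.0, "upward": 90.0, "downward": -90.0}

@dataclass
class VirtualCamera:
    gaze: str = "forward"
    level: int = 0
    levels: tuple = (0.0, 4.0, 8.0)  # hypothetical floor heights (m)

    def toggle_gaze(self) -> float:
        """Button press: cycle forward -> upward -> downward -> forward;
        return the new camera pitch."""
        self.gaze = GAZE_CYCLE[self.gaze]
        return GAZE_PITCH[self.gaze]

    def tap_thumb(self, direction: int) -> float:
        """Thumb tap (+1 up, -1 down): transport to the adjacent level,
        clamped at the structure's bottom and top; return the new height."""
        self.level = max(0, min(len(self.levels) - 1, self.level + direction))
        return self.levels[self.level]
```

Note the thumb control clamps at the top and bottom levels rather than wrapping, whereas the gaze button cycles continuously; both are one-button interactions suitable for a tracked wand.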

The MARquette Visualization Lab is spatially restricted to our campus in downtown Milwaukee, Wisconsin. In some cases, this created an impediment to collaborations. For example, clinicians interested in viewing biomedical CFD results at nearby hospitals and clinics often do not have the time to frequently travel ∼5 miles to view patient-specific results. Members of MARVL have therefore started to use head-mounted displays to remotely deliver content created for MARVL's large-scale IVE. Specifically, members of MARVL now have experience developing content for the Oculus Rift, Oculus Quest, Samsung Gear VR and Microsoft HoloLens, among others.

Recent Updates, Expenses and Current Uses

At launch, our intent was to support as many software packages as possible, so the system was configured to dual-boot between Windows 7 and Linux (Xubuntu 12.04). ParaView was the first program to run in the IVE, which required custom launcher scripts written in Bash and a customized build of ParaView. After several experiments with other software, we found the most success with the combination of Unity 4 and MiddleVR. With only a few exceptions, most MARVL projects are now built on Windows 10, MiddleVR 1.7, and Unity 2018.4.

Due to the wide shape of our installation, some users desired to use the IVE as a large-format display, but because of the clustered nature of the system we could not do so with pre-existing software without resorting to unreliable workflows such as high-bandwidth VNC feeds or OpenGL redirection techniques. Therefore, we developed several projects that provide desktop-style functionality such as web browsing, video playback, and presentations. These applications were utilitarian but never fully showcased by MARVL because they only use a subset of its features. For example, these applications use the IVE's stereoscopic features and its high resolution, but do not necessarily emphasize immersion or the feature sets of the more established desktop programs they emulated. Hence, when the opportunity to upgrade the image generators arrived, we opted to change the system architecture from a 6-node cluster to a single, more powerful image generator that could accommodate both the immersive experiences of a large-scale IVE and improvised experiences with standard desktop software. Our large-scale IVE now uses four NVIDIA Quadro P4000s with 8 GB of VRAM each, powering all 10 projectors from a single workstation. An NVIDIA Quadro Sync II card is required to synchronize the GPUs with each other and with the tracking system. The CPU configuration is two Intel Xeon Gold 6134s, each with 8 cores and 16 threads running at 3.2 GHz, with 96 GB of DDR4 RAM. The dual-CPU option was chosen not for performance reasons, but because a second physical CPU doubles the amount of PCIe bandwidth to the GPUs, which is a common bottleneck in multi-GPU setups. This computer upgrade was approximately $17,000. Other hardware upgrade costs to date include projector lamp replacements ($9,000) and an onsite service call for a heating issue with the ceiling-mounted projectors that display content for the floor ($5,000). A schematic illustration of the current hardware setup in MARVL's large-scale immersive environment is shown in Figure 9.

The MARquette Visualization Lab's user base continues to grow. In addition to the original end users discussed in detail above, more recent applications include interactive engineering class content aimed at better understanding complex principles, as well as more efficient scientific data visualization through the combined use of data reduction and accentuation tools to study and communicate the most important features in scientific results. Several of the initial application areas have also continued to create content for derivative immersive experiences, such as the preparation of nursing students for study abroad experiences discussed above, and five additional theatrical performances.

In summary, the approach employed here has set the stage for MARVL to be an important resource at our institution. Nearly all of the end users' applications uncovered during the planning stages of the facility (Table 1) have since been implemented. Careful selection of the workflow and processes implemented to create this resource has therefore resulted in cross-functionality with current head-mounted displays and limited the expenses incurred through enhancements to date. We are optimistic, based on interest in MARVL to date, that at least a portion of the current information will be useful for other institutions that are also considering developing an immersive visualization facility.

DATA AVAILABILITY STATEMENT

The datasets generated for this study will not be made publicly available to avoid commercialism and to maintain the confidentiality of vendors evaluated as part of the current work. Requests to access the datasets should be directed to the corresponding author.


AUTHOR CONTRIBUTIONS

Both authors contributed to the conception and design of the study. JL wrote the first draft of the manuscript. Both authors wrote sections of the manuscript, contributed to manuscript revision, and read and approved the submitted version.

FUNDING

Creation of MARVL was supported by the Marquette University Opus College of Engineering. Support for the biomedical CFD results shown was provided by NIH grant R15HL096096 and American Heart Association Grant-in-Aid award 15GRNT25700042.

ACKNOWLEDGMENTS

The authors thank collaborators whose applications are featured in the current work, including Kerry Kosmoski-Goepfert Ph.D., RN, Roschelle Manigold RN, MSN, CHSE, Chester Loeffler-Bell, Martin St. Maurice Ph.D., and Lynne Shumow. The authors also thank Erik Hendrikson, Matt Derosier, and Brad Bonczkiewicz for their technical assistance.

REFERENCES

Baddeley, A. (1993). Your Memory: A User's Guide. New York, NY: Avery.

Borg, J. P. (2005). "Numerical simulation of pressure drop through a rotating plenum fan," in ASME 2005 Power Conference, Chicago, IL, 95–100.

Borojeni, A. A. T., Frank-Ito, D. O., Kimbell, J. S., Rhee, J. S., and Garcia, G. J. M. (2017). Creation of an idealized nasopharynx geometry for accurate computational fluid dynamics simulations of nasal airflow in patient-specific models lacking the nasopharynx anatomy. Int. J. Numer. Method Biomed. Eng. 33. doi: 10.1002/cnm.2825

Bowman, A. J., and Park, H. (2004). "CFD study on laminar flow pressure drop and heat transfer characteristics in toroidal and spiral coil system," in ASME 2004 International Mechanical Engineering Congress and Exposition, Anaheim, CA, 11–19.

Brennan, P. F., Arnott Smith, C., Ponto, K., Radwin, R., and Kreutz, K. (2013a). Envisioning the future of home care: applications of immersive virtual reality. Stud. Health Technol. Inform. 192, 599–602.

Brennan, P. F., Nicolalde, F. D., Ponto, K., Kinneberg, M., Freese, V., and Paz, D. (2013b). Cultivating imagination: development and pilot test of a therapeutic use of an immersive virtual reality cave. AMIA Annu. Symp. Proc. 2013, 135–144.

Conover, E. (2014). 3D Technology Adds New Dimension to Marquette University Teaching. Milwaukee, WI: Milwaukee Journal Sentinel.

Cruz-Neira, C., Sandin, D. J., DeFanti, T. A., Kenyon, R. V., and Hart, J. C. (1992). The CAVE: audio visual experience automatic virtual environment. Commun. ACM 35, 64–72. doi: 10.1145/129888.129892

Ellwein, L., Samyn, M. M., Danduran, M., Schindler-Ivens, S., Liebham, S., and LaDisa, J. F. Jr. (2017). Toward translating near-infrared spectroscopy oxygen saturation data for the non-invasive prediction of spatial and temporal hemodynamics during exercise. Biomech. Model. Mechanobiol. 16, 75–96. doi: 10.1007/s10237-016-0803-4

Febretti, A., Nishimoto, A., Thigpen, T., Talandis, J., Long, L., Pirtle, J. D., et al. (2013). CAVE2: a hybrid reality environment for immersive simulation and information analysis. Eng. Real. Virtual Real. 2013:8649.

Hauer, S. (2017). Marquette 3D lab adds virtual ghosts and witches to 'Macbeth'. Milwaukee J. Sentinel, Published April 7, 2017.

Kageyama, A., and Tomiyama, A. (2016). Visualization framework for CAVE virtual reality systems. Int. J. Model. Simul. Sci. 7:1643001. doi: 10.1142/s1793962316430017

Nishimoto, A., Tsoupikova, D., Rettberg, S., and Coover, R. (2016). "From CAVE2TM to mobile: adaptation of hearts and minds virtual reality project interaction," in Human-Computer Interaction. Interaction Platforms and Techniques: 18th International Conference, HCI International 2016, Toronto, ON, 400–411. doi: 10.1007/978-3-319-39516-6_38

Patel, K., Bailenson, J. N., Hack-Jung, S., Diankov, R., and Bajcsy, R. (2006). "The effects of fully immersive virtual reality on the learning of physical tasks," in The 9th Annual International Workshop on Presence, New York, NY.

Plato (1974). Allegory of the Cave. Republic. Harmondsworth: Penguin, 240–248.

Quam, D. J., Gundert, T. J., Ellwein, L., Larkee, C. E., Hayden, P., Migrino, R. Q., et al. (2015). Immersive visualization for enhanced computational fluid dynamics analysis. J. Biomech. Eng. 137, 031004-1–031004-12. doi: 10.1115/1.4029017

Renambot, L., Marrinan, T., Aurisano, J., Nishimoto, A., Mateevitsi, V., Bharadwaj, K., et al. (2016). SAGE2: a collaboration portal for scalable resolution displays. Future Gener. Comput. Syst. 54, 296–305. doi: 10.1016/j.future.2015.05.014

Solari, F., Chessa, M., Garibotti, M., and Sabatini, S. P. (2013). Natural perception in dynamic stereoscopic augmented reality environments. Displays 34, 142–152. doi: 10.1016/j.displa.2012.08.001

Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Copyright © 2020 LaDisa and Larkee. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
