Virtual Alpine Landscapes and Autonomous Agents
Duncan CAVENS, Eckart LANGE and Willy SCHMID
1 Introduction
For most landscape architects, the concept of computer simulation stops at the creation of a 3D simulation of a design proposal. While visual simulation is important for exploring design options and communicating with a wider audience (SHEPPARD 1989), broader use of other forms of modelling and simulation to explore the impacts of design interventions could give designers and planners another tool to help accomplish their tasks. One key issue that has hampered the adoption of simulation in small-scale planning and design projects is the sheer complexity of the associated issues, particularly with respect to the social impact of proposed interventions.
This paper introduces the project “Planning with Virtual Alpine Landscapes and Autonomous Agents”, which is funded by the Swiss National Foundation program “Habitats and Landscapes of the Alps.” The project is exploring the feasibility of using autonomous agent modelling to evaluate proposed changes to an alpine landscape. The project seeks to use simulated people (agents) who “see” the landscape as surrogates for real people reacting to the proposed future landscapes. This paper describes the overall project approach, and explains how visualisation will be used in the context of the project.
2 Visual Perception Research
Although visual concerns are often considered too vague to be included in a simulation (which implies being able to quantify them), there is a long history of using photographs and visual simulations to quantify the visual quality of the landscape.
The standard technique is to have individuals assess photographs of a particular location, and to use their ratings to determine the visual quality of a view or landscape (e.g. SHAFER AND BRUSH 1977). Many studies (e.g. DUNN 1976, STAMPS 1993, MEITNER AND DANIEL 1997) have demonstrated that evaluations made on the basis of photographs correlate closely with people's assessments of the places they represent. Studies have also investigated the impact of various features (such as water, percentage of vegetation, types of tree shapes) on people's aesthetic response to a particular scene (e.g. SUMMIT AND SOMMER 1999).
A refinement of using photographs to evaluate landscape quality is to use computer simulations (LANGE 2001). Subjects are shown a series of computer-generated images in which minor elements are altered, and the impact of these changes on the overall degree of realism of the scene is measured. This allows the researcher to judge more accurately the impact of specific elements and/or spatial arrangements; for example, a tree in the foreground could potentially increase the visual quality of a scene.
Most of the above research into landscape perception has been descriptive: it investigates how people react to a particular landscape or elements thereof, but often falls short of interpreting these reactions and exploring how they can be used to improve the quality of landscape planning and design. While this knowledge is useful for informing design decisions, conducting such tests with real subjects is time-consuming and is rarely undertaken outside research environments; it is not practical as a technique for evaluating different design proposals in typical projects.
Our project explores an alternative approach: rather than using real human subjects to evaluate potential design scenarios, computer simulations of individuals are used to evaluate the proposed changes to the landscape. By using data gathered with standard perception-testing techniques, it is hoped that these simulated individuals can be calibrated to represent real individuals in the real landscape.
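The calibration idea can be illustrated with a minimal sketch. All data, route names and the `view_weight` parameter below are hypothetical, not from the project: simulated route-choice shares are compared against observed shares, and a single model parameter is tuned until the error is smallest.

```python
import math

# Minimal calibration sketch (hypothetical data and parameter names).
# A logit-style choice model predicts how hikers distribute over routes;
# a single 'view_weight' parameter is tuned by grid search so that
# predicted shares match the shares observed in the field.

def route_share(view_weight, view_scores):
    # Higher view score -> route chosen more often.
    weights = [math.exp(view_weight * s) for s in view_scores]
    total = sum(weights)
    return [w / total for w in weights]

def calibration_error(view_weight, view_scores, observed_shares):
    predicted = route_share(view_weight, view_scores)
    return sum((p - o) ** 2 for p, o in zip(predicted, observed_shares))

# Hypothetical example: three routes with scenic scores, and the share
# of hikers observed on each route during field surveys.
view_scores = [0.9, 0.5, 0.2]
observed = [0.6, 0.3, 0.1]

best_error, best_weight = min(
    (calibration_error(w / 10, view_scores, observed), w / 10)
    for w in range(0, 51))
print("calibrated view_weight:", best_weight)
```

A real calibration would of course involve many parameters and agent behaviour over time; the sketch only shows the compare-and-adjust loop in its simplest form.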
3 Test Site: Schönried, Switzerland
The specific test site is a valley in the Gstaad-Saanenland region of south-western Switzerland. The communities of Schönried and Saanenmöser lie at the two ends of the site; their economies are highly dependent on tourism. While the primary tourism draw to the area used to be winter skiing, long-term climate change is forcing the community to focus its efforts on building up a more diversified tourism economy, including capitalising on its already strong reputation for summer hiking. The landscape is a mixture of pasture and coniferous forest, dominated by Norway Spruce (Picea abies). The test site is characterised by significant topography and is considered ideal for walking and hiking (see figure 1). The trails are accessible to a wide range of hiking abilities thanks to the summer operation of one chair-lift and two gondolas. In the high season, the area is busy with hikers and walkers, who easily fill the two main parking lots in Schönried.
A recent study in the area (MÜLLER AND LANDES 2001) identified the area's scenic qualities as the biggest attraction for summer tourists; hiking and walking are the primary recreational activities in the summer months. The focus on views was confirmed by our own study in 2002 (CAVENS AND LANGE 2003), in which views and landscape variety were identified as the most important factors influencing hikers in their choice of hiking routes.
In addition to the community’s desire to diversify its recreational economy, there are landscape policy issues that have the potential to change the desirability of the area for summer tourism. These issues include changes to the pattern of the landscape due to changing agricultural policy, shifts in forestry practices, closing of the gondolas and/or chairlifts, and increased holiday home construction. Any of these changes would have complex repercussions for the tourism industry: future scenarios to test the agent model will be selected from them.
Figure 1: Typical view of the study site (from Horneggli mountain towards Schönried)
4 Modeling Approach
Our project couples realistic 3D computer visualisation with autonomous agent modeling to test alternative landscape scenarios. While the 3D visualisation is useful as a tool to explore the model and confirm its correctness, its primary purpose is to confirm, with real subjects, that the simulation’s assumptions about visual and aesthetic concerns are accurate.
Obviously, there are certain classes of landscape problems that are better suited to this approach than others. The use of simulation in planning has largely been limited to investigating questions that affect large areas, usually at the regional or larger scale. These planning-related simulations have either dealt with wildlife or ecology issues (HEHL-LANGE 2001) or with large-scale urban systems problems, such as traffic congestion (NAGEL 1998) or the optimal allocation of land uses (WEGENER AND FÜRST 1999). At these scales visual concerns have little impact on the phenomena being modelled and are safely disregarded. However, apart from overall regional strategies, most planning decisions are made at a much finer scale than the available simulations. These decisions, such as allowing an increased density of housing around a village centre, have a direct impact on local residents that is not easily captured by city-wide or regional models. At these small scales (such as the sub-watershed or village), visual elements and the overall visual quality of the proposed planning intervention are extremely important. This is particularly true in areas dependent on tourism, which are often promoted on the basis of their scenic qualities. The impact of changes is often cumulative: the overall impact of a series of design projects is larger than that of any individual project.
For our particular site, the central question is what impact changes to the landscape have on the 'enjoyment' of summer hikers, and whether those changes are significant enough for hikers either to change their typical route or to decide not to return the following year. This is a difficult question to answer, because changes to the landscape (unless major) may affect individuals in very different ways.
Autonomous agent modelling allows one to represent each resident or visitor in the simulation, rather than being forced to aggregate their preferences into a general model. Each 'agent' is given a set of preferences and goals and is set loose to explore the environment; as agents move through the environment, they learn from their experiences. If the parameters are specified correctly, the result is a simulation that reflects the currently observed situation (i.e. agents move in the same places and at the same times as people in the real world). Once this has been achieved, changes to the environment can be introduced and the agents' reactions to them studied.
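The explore-and-learn loop described above can be sketched in a few lines. The class, route names, satisfaction values and learning rule below are invented for illustration and are not the project's actual model:

```python
import random

# Sketch of an agent day-loop: each simulated hiker mostly repeats the
# route it remembers as most satisfying, occasionally explores another
# one, and updates its memory after each experience.

class Hiker:
    def __init__(self, routes, learning_rate=0.5):
        self.scores = {r: 0.0 for r in routes}   # remembered satisfaction
        self.learning_rate = learning_rate

    def choose_route(self):
        # Mostly exploit the best-remembered route, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def update(self, route, satisfaction):
        # Move the remembered score towards the experienced satisfaction.
        old = self.scores[route]
        self.scores[route] = old + self.learning_rate * (satisfaction - old)

# Hypothetical satisfaction each route would actually deliver.
true_satisfaction = {"panorama": 0.9, "forest": 0.6, "valley": 0.3}

random.seed(1)
agent = Hiker(list(true_satisfaction))
for _day in range(200):
    route = agent.choose_route()
    agent.update(route, true_satisfaction[route])

print(max(agent.scores, key=agent.scores.get))
```

After enough simulated days, the agent's remembered scores settle near the 'true' values, which is the behaviour the text describes: a population of such agents can then be confronted with a changed landscape.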
Others have applied autonomous agents to small-scale landscape planning concerns (GIMBLETT ET AL. 2002), although the research so far has been confined to two dimensions. While these simulations have been applied to recreation areas, they have focused primarily on how users interact with each other, rather than on how people interact with the landscape itself. These 'traffic' issues are important in the busiest recreation sites, or in wilderness areas where encounters with other hikers are typically unwanted, but they are not the critical concern in the majority of recreation communities, where retaining and attracting tourists is often more important than managing those who already visit.
Simulating individuals allows one to create a highly nuanced model of visual responses, with each individual having a different tolerance for different kinds of visual stimuli. However, simulating the activities and visual preferences of thousands of individuals is a complex task, requiring both a large amount of source data and ample computing resources. We are relying on the agent modelling framework developed by our project partners at ETH's Institute for Scientific Computing (Prof. Dr. Kai Nagel). Their system, based on distributed computing concepts, can simulate millions of agent-days quickly, allowing model parameters to be tweaked iteratively.
Figure 2: Diagram of the overall project structure, showing the calibration relationships between 'real people / real world', 'real people / virtual world' and 'virtual people / virtual world'.
In order to produce realistic results, one requires a good set of base data to drive the agent simulation. The agent simulation relies on a GIS-derived spatial database representing detailed spatial information, such as the location of fences, individual trees, types of ground cover and the condition of trails and roads. The agents are 'aware' of these objects as they move through the landscape: they use them to decide where to go, and as inputs to the evaluation functions that determine how satisfied each agent is with its chosen route.
A detailed spatial model is useless for agent modelling if one is unable to calibrate the agents' reaction to it. In order to accurately calibrate and validate our model, we are collecting data from respondents along two tracks: formal surveys and informal observation at the site itself, and tracking of subjects' responses to the proposed changes in a virtual environment. As illustrated in figure 2, data collected from the 'real people in the real world' and 'real people in the virtual world' will be used to confirm the results from the autonomous agent simulation ('virtual people in the virtual world').
Data regarding 'real people in the real world' is being collected through interviews with hikers on site, counts of hikers using the various public transit facilities (trains, gondolas, chair-lifts), and structured observation of usage patterns in the landscape. This data will provide an understanding of the current situation. By combining it with a detailed GIS analysis of the site itself, we expect to be able to develop a model that reflects the visual preferences of the site's visitors.
It is not necessarily easy to extrapolate from this understanding of the current situation to proposed landscape scenarios. In order to ensure that our agent simulation accurately reflects hikers' sensitivity to change, subjects will therefore be asked to evaluate these changes in a virtual environment (see BISHOP ET AL. 2001 for a discussion of the strengths and weaknesses of this approach). Subjects will be invited to explore a computer-generated representation of the study site, both in its current state and with the proposed landscape scenarios applied. Their path choices will be compared and evaluated, and used to further refine the autonomous agent model.
5 Use of Visualisation
In order to test real people in a virtual world, the virtual world first needs to be created. As Bishop points out, due to limitations in current rendering technology (and available resources for modelling), it is not yet feasible for a virtual world to act as a complete surrogate for reality. However, as other research has demonstrated (LANGE 2001, DANIEL AND MEITNER 2001), realistic renderings using current technologies are sufficient for subjects to make evaluations based on visual criteria. Although these studies were done on static images rendered from a 3D model, we believe their findings also apply to virtual environments in which users can move freely, provided a minimum standard of interactivity and degree of realism is achieved.
The 3D rendering system serves two purposes in the project: to provide an environment for the testing of real people, and to allow interaction with the agent-based simulation for testing and evaluation purposes.
Both purposes place similar requirements on the rendering system: it must allow dynamic movement through the landscape, track user movements (in the case of testing human subjects), and display the activities of computer-generated agents. The following criteria were established for selecting a visualisation system and renderer:

• provide real-time rendering capability
• provide highly realistic rendering
• scale to accommodate future hardware developments and constantly improving rendering techniques
• run on multiple platforms (from Windows laptops to SGI Onyx class machines)
• use the exact same spatial database as the agent simulation, to ensure consistency between models
Figure 3: Early screenshot of 3D rendering application, showing agents depicted as grey boxes in the middle ground. Basic elements are in place, but not a lot of effort has been given to detailed modeling.
After extensive evaluation, it was decided to develop a custom viewer based on the OpenSceneGraph library, together with a custom spatial database format based on the Extensible Markup Language (XML).

OpenSceneGraph is a high-level scene-graph library, similar in functionality to SGI's OpenGL Performer but available for free under the terms of the LGPL (GNU Lesser General Public License). It provides a modern, high-performance real-time viewer that runs on multiple platforms, and plugins exist for importing data from most standard 3D formats. Being open source, it is also extremely flexible and encourages extensive customisation.
While the viewing library supports many current techniques for real-time rendering, such as view frustum culling and level-of-detail management (MÖLLER AND HAINES 1999), the rendering engine cannot apply them without data formatted to take advantage of them. At the same time, the large size of the study area (over 70 km²) and the visual realism we wish to achieve make building the model by hand impractical. As a result, considerable effort is being made to automate the creation of the visual model: a series of software modules is being created that transform GIS data (such as road/path locations, forest types and building locations) into optimised 3D data structures. 3D elements are either created directly by the software modules or linked in from other modelling packages (such as 3D Studio).
As an intermediate step, the data is exported from the GIS into an XML format, which is the format that both the autonomous agent system and the visualisation system use to build the spatial database. This allows one to quickly introduce new elements into the visual scene (by modifying the underlying GIS) and ensures that any updates will be reflected in both simulations.
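The paper does not document the XML schema itself; a fragment along the following lines, with all element and attribute names invented for illustration, conveys the idea of a shared, GIS-derived spatial database read by both the agent simulation and the viewer:

```xml
<!-- Hypothetical fragment; element and attribute names are invented. -->
<spatialDatabase region="Schoenried">
  <trail id="t17" surface="gravel" condition="good">
    <node x="587120.5" y="152330.2" z="1291.4"/>
    <node x="587145.1" y="152360.8" z="1295.0"/>
  </trail>
  <tree species="Picea abies" x="587200.0" y="152400.0" height="24.5"/>
  <groundCover type="pasture" polygonRef="p42"/>
</spatialDatabase>
```

Because both systems parse the same file, an edit in the underlying GIS (say, removing a stand of trees) propagates to the agents' evaluation functions and to the rendered scene in one step.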
6 Next Steps and Conclusion
The basic elements of the system are now in place: the agent simulation is functional, and communicates with the spatial database to understand its environment. At present, the agents only react to objects in their immediate vicinity (~ 5m), and only from the perspective of avoiding collisions with them. The visualisation system works, and displays a highly simplified version of the environment (see figure 3).
The next stages of the project are considerably more difficult: creating the algorithms and model parameters for the simulation. This is made particularly difficult because route choices depend on a number of inter-related variables, such as distance and difficulty, in addition to the already difficult question of visual quality. Increasing detail will be added to the visual model as the agent model is expanded to interpret the corresponding spatial detail.
7 Bibliography
Bishop, I. D., Ye, W.-S. and Karadaglis, C. (2001): Experiential approaches to perception response in virtual worlds. Landscape and Urban Planning 54, 119-127.
Cavens, D. and Lange, E. (2003): Hiking in Real and Virtual Worlds. In: Koll-Schretzenmayr et al. (Eds.): The Real and the Virtual World of Planning. (publication forthcoming)
Dunn, M. C. (1976): Landscape with Photographs: Testing the Preference Approach to Landscape Evaluation. Journal of Environmental Management 4, 15-26.
Gimblett, R. (Ed.) (2002): Integrating Geographic Information Systems and Agent-Based Modeling Techniques. Oxford University Press, Oxford.
Hehl-Lange, S. (2001): Structural elements of the visual landscape and their ecological functions. Landscape and Urban Planning 54, 105-114.
Lange, E. (2001): The limits of realism: perceptions of virtual landscapes. Landscape and Urban Planning 54, 163-182.
Meitner, M. and Daniel, T. (1997): The effects of animation and interactivity on the validity of human responses to forest data visualizations. In: Orland, B. (Ed.): Proceedings of Data Visualization '97, St. Louis, MO.
Möller, T. and Haines, E. (1999): Real-Time Rendering. AK Peters, Natick, Massachusetts.
Müller, H. and Landes, A. (2001): Standortbestimmung Destination Gstaad-Saanenland: Gästebefragung Sommer 2001, Schlussbericht. Forschungsinstitut für Freizeit und Tourismus (fif) der Universität Bern, Bern.
Nagel, K., Beckman, R. J. and Barrett, C. L. (1998): TRANSIMS for transportation planning. Los Alamos Unclassified Report LA-UR 98-4389.
Shafer, E. L. and Brush, R. O. (1977): How to measure preferences for photographs of natural landscapes. Landscape Planning 4, 237-256.
Sheppard, S. R. J. (1989): Visual Simulation: A User's Guide for Architects, Engineers, and Planners. Van Nostrand Reinhold, New York.
Stamps, A. E. (1993): Simulation Effects on Environmental Preference. Journal of Environmental Management 38, 115-132.
Summit, J. and Sommer, R. (1999): Further Studies of Preferred Tree Shapes. Environment and Behavior 31, 550-576.
Wegener, M. and Fürst, F. (1999): Land-Use Transport Interaction: State of the Art. Berichte, No. 46, Institut für Raumplanung, University of Dortmund.
Mixed Realities: Improving the Planning Process by Using Augmented and Virtual Reality
Joachim KIEFERLE and Uwe WÖSSNER
1 Introduction
Over the past years, computer-based spatial representations have evolved rapidly from non-real-time to real-time. Whereas formerly the results of rendering computations could only be reviewed with a delay of minutes or even hours, today planners can interact directly with virtual three-dimensional models of parks, buildings or urban designs. This supports planners such as landscape architects, architects and engineers in computing, evaluating and communicating their designs more easily. By anticipating the visual appearance of a planned project, it enables even non-specialists to judge a project more easily, and furthermore enables specialists to optimise it.
This process can be supported by showing non-visual properties, coupling the visual information with interactive online simulation. One of the main advantages of artificial realities is that, compared with "real" reality, several realities can be layered. By coupling artificial realities, such as augmented reality (AR) with virtual reality (VR), further benefits can be observed. This paper gives an overview of different approaches and experiences of the authors. These approaches keep the architect's way of working in mind: many factors such as design, structural engineering or environmental effects have to be understood, valued, weighed against each other and finally combined into one project.
2 Planning Process
The planning process, a complex process with many chances for misunderstanding (SCHÖNWANDT 1986), can be described in simple abstraction as a process of ordering, valuing and communicating. All representations that support the actors in acquiring the information necessary to find the "best solution" should be used. Some examples are:
• visible information, the visual appearance of the planned project
• properties of the project / elements
• impacts of the project / elements
Virtual representations provide a chance not only to show the visible, but also to show the invisible, even ideas and concepts.
Fig. 1: The meaning triangle (SOWA, 2000a; modified)
Semiotics, the science of signs, describes the relation between the "concept" (the idea in our mind), the "object" (the physically existing thing) and the "symbol or language" (the notation representing both); see fig. 1. By "shifting up" the level of representation from real to virtual, using a notation of a higher-level concept, a clearer basis for communication can be achieved.
In this way virtual realities provide a chance to widen the representational notations, and thus the ways of communicating, while reducing the possibilities of misunderstanding. Combining "real" (visible) and virtual representations at the same time proved to work especially well. A good example of virtual representation is simulation.
3 Scientific Online Simulation in Architecture
Online simulation with its immediate feedback assists users in understanding complex relations. By changing and adjusting different parameters and observing their effects, projects can be optimised in a very short time. Two examples illustrate some of the possibilities.
3.1 Example 1: Thermal Simulation
With an online thermal radiation simulation, a complete process chain has been implemented, from CAD via the simulation to visualisation in a virtual environment. The diagram (fig. 2) shows the components used. The simulation (Sunface) communicates with the CAD system (AutoCAD) by means of an OLE/COM interface; the geometry can also be read in from a DXF file. Within the simulation, the polygonal data is further tessellated if necessary to better discretise the dataset for the numerical simulation, which then computes the radiation intensities in the building or structure, taking into account the time and position on earth, material properties, shadows and more. The results are then transferred to the visualisation system COVISE, where they are processed by different modules and finally displayed in the virtual environment. There, different parts of the geometry can be picked, moved and scaled interactively. After modifications, the changed geometry is transferred back to the simulation, which immediately computes a new solution. The results are again presented in the virtual environment, so that the user can immediately verify the modification. Depending on the complexity of the geometry, computation time varies from one to several seconds; it should not exceed approximately one minute, as no user would want to wait longer for a result.
Fig. 2: Online simulation with feedback loops to post processing, simulation and CAD
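The modify-recompute-display cycle described above can be sketched in a few lines. The function names below are placeholders standing in for the real Sunface/COVISE components and do not reflect their actual APIs:

```python
# Sketch of an online-simulation feedback loop (illustrative only; the
# 'solver' is a stand-in for a real radiation simulation, and the data
# model is reduced to a single number).

def run_simulation(geometry, sun_position):
    # Stand-in for the radiation solver: result grows with exposed area
    # and sun elevation.  A real solver would work on tessellated polygons.
    return geometry["exposed_area"] * sun_position["elevation"]

def feedback_loop(geometry, sun_position, edits):
    results = [run_simulation(geometry, sun_position)]
    for edit in edits:                 # the user moves/scales parts in the VE
        geometry.update(edit)          # changed geometry goes back to the solver...
        results.append(run_simulation(geometry, sun_position))  # ...and is re-solved
    return results

geometry = {"exposed_area": 120.0}
sun = {"elevation": 0.7}
history = feedback_loop(geometry, sun, [{"exposed_area": 90.0}])
print(history)
```

The point of the loop is the short turnaround: each user edit immediately triggers a fresh solve whose result is pushed back into the display, which is what makes the one-minute upper bound on computation time matter.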
3.2 Example 2: Wind Simulation
Fig. 3: Simulation of airflow in an urban/skyscraper situation
To understand the wind impact of, for example, buildings, computational fluid dynamics (CFD) is used to calculate the airflow. With StarCD or Fluent, simulations can be calculated and parameters changed directly from within a virtual environment; the CFD code of these two common commercial products is tightly integrated into COVISE. Streamlines, coloured cutting planes, iso-surfaces and other flow visualisation methods can be displayed together with architectural models (see fig. 3). By placing streamlines at different positions and observing the effects, an understanding of wind behaviour can be achieved.
4 Project Reviews Using Virtual Reality
Traditional representation techniques for architecture are drawings, models and renderings. These representations differ in scale or dimension from the final project and demand the observer's imagination to transfer what can be seen into what will be built. Drawings especially, due to the abstraction necessary when representing a three-dimensional project on a two-dimensional medium, are a source of many misunderstandings. By using VR in a CAVE environment (CRUZ-NEIRA ET AL. 1993) (see fig. 4), the project participants can physically stand in a scene and get a feeling of being there. The CAVE, a cubic projection room with a side length of approximately 2.5 metres, provides space for discussions with up to seven participants.
Fig. 4: Scheme of a CAVE
Compared with presentations on a monitor or with a head-mounted display, the possibility of group discussion is one of the main advantages. It could consistently be observed that, within minutes of the beginning of a session, the participants focused on the scene content and forgot about the hardware involved. Although the stereoscopic images are only correct for the person wearing the tracked glasses, the impression for the other participants with non-tracked glasses is still good.
A six-degrees-of-freedom mouse with three buttons is used as the main input device. It has proved very flexible, as it can easily be handed from person to person to explore a scene.
While interaction methods known from the real world, such as movement, are very intuitive, methods without a real-world counterpart, such as scaling, clipping or flying, need some experience. For architectural presentations, special interaction methods and procedures had to be implemented, such as:
• walk mode: while moving around, users stay on the ground, even on a sloped virtual site or stairs
• object move: single objects or groups of objects can be moved, rotated and scaled
• colour picker: an element's colour is changed with an RGB-cube interface
• material changer: materials are changed interactively
• exhibition picker: single exhibition elements can be picked out of a case and placed in the scene
5 Project Reviews Using Augmented Reality
Whereas project reviews with VR are well established, there are not many projects applying AR (for some examples see e.g. http://www.hitl.washington.edu). These project reviews profit from overlaying the virtual world on the real world. AR can be used both for model reviews (scaled) and for reviews on the real site (1:1).
5.1 Interfaces and Tracking
Two basic questions arise when applying AR:

1. What is the appropriate interface for flexible on-site working?
2. How can positions be tracked, so that the virtual world and reality match in orientation and scale?
For the Augmented Reality applications, the authors use two experimental setups:
• A head-mounted display, the Cy-Visor, with attached PAL cameras (see fig. 5).
• A consumer DV camera connected via FireWire, together with a standard monitor.
Fig. 5: Cy-Visor with attached PAL cameras.
Whereas the Cy-Visor is intended mainly for single users, groups can review projects on a monitor. For short distances of up to a few metres, the Cy-Visor supports a stereoscopic impression by providing different images for the left and right eye.
5.2 Application
Video capture and marker recognition are currently based on ARToolKit (BILLINGHURST AND KATO 1999). Markers positioned in the real world (see fig. 8a) or in the model (see fig. 8b) provide the visual information necessary to match virtual and real images.
ARToolKit has been seamlessly integrated into the VR system COVER, the renderer of COVISE. Marker recognition is done in the capture processes by ARToolKit, and the resulting transformation matrices are used either to move objects or to adjust the viewpoint. To allow easy integration into existing visualisations and rapid development of new applications, a special AR sensor node was added to the VRML97 support; it acts similarly to the traditional Plane or Cylinder sensors. Using this node, the position and orientation of a marker in space can be used to move and orient a VRML object accordingly. Another special VRML group node can be used to render invisible objects; this allows parts of the physical objects to be modelled as occluders for the virtual objects, which would otherwise occlude real objects in the video picture.
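The sensor-node pattern can be sketched in VRML97-like pseudocode. The text does not give the actual node or field names used in COVER, so everything below (node name, fields, file names) is an assumption made purely to illustrate the wiring:

```vrml
# Hypothetical VRML97-style fragment; node and field names are invented.
# A tracked marker drives the transform of a virtual object, in the same
# way a TouchSensor or PlaneSensor would drive it from user input.
DEF carMarker ARSensor {
  markerName "car01"          # fiducial marker recognised by ARToolKit
}
DEF car Transform {
  children [ Inline { url "car.wrl" } ]
}
ROUTE carMarker.translation_changed TO car.set_translation
ROUTE carMarker.rotation_changed   TO car.set_rotation
```

The appeal of this design is that AR tracking plugs into the existing VRML event model: authors who already know how to ROUTE sensor events can attach real-world markers to scene objects without new tooling.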
5.3 Examples
AR as a stand-alone application was successfully used for:
• Overlaying architecture models or real world with scientific simulation results. • Switching through various digital model alternatives (see fig. 6).
Fig. 6: Virtual model in a real architecture model at scale 1:500: a) model with marker; b) variant 1 with marker; c) variant 2 occluding the marker
As interaction with a real architectural model is very familiar to users, they can interact easily from the very beginning. A difficulty in handling these models is that the markers have to be kept clearly visible to the camera at all times. Further drawbacks are the moderate resolution of standard video cameras (720 × 576 pixels) and the small field of view of head-mounted displays. The fiducial markers used by ARToolKit are easy to handle: they can be printed out in different sizes and attached to any object. One issue, though, is the low accuracy, especially in outdoor applications. Filtering methods can be used for object tracking, as described in section 6.2, but for viewpoint tracking filters would impose additional lag, which leads to deviation as well. Other vision-based tracking systems would have to be used to substantially improve registration accuracy.
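Filtering of this kind can be as simple as exponential smoothing of the marker pose, trading jitter against lag. The sketch below is illustrative, not the filter actually used in the system, and works on a single coordinate for clarity:

```python
# Exponential smoothing of one noisy marker coordinate (illustrative).
# Smaller alpha -> smoother track but more lag, which is exactly why the
# text rules such filters out for viewpoint tracking.

def smooth(samples, alpha=0.3):
    filtered = [samples[0]]
    for x in samples[1:]:
        # New estimate = blend of new measurement and previous estimate.
        filtered.append(alpha * x + (1 - alpha) * filtered[-1])
    return filtered

noisy = [10.0, 10.4, 9.7, 10.2, 9.9, 10.1]
print(smooth(noisy))
```

A full pose filter would smooth position and orientation jointly (e.g. with a Kalman filter), but the jitter-versus-lag trade-off is the same.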
Fig. 7: Setup of AR on site
Fig. 8: a) Image as recorded by the camera, with a large visible marker; b) the same image in AR, overlaid by the virtual landscape
6 Coupling Virtual Environments
6.1 Coupling VR with VR
By coupling two or more virtual environments (such as CAVEs), the group collaboration of co-located groups is extended to worldwide team collaboration. Collaborative sessions can be initiated through a web interface, which provides different rooms where meetings can be scheduled. A central server (VR broker, VRB) is assigned to each room; it distributes messages between the virtual environments and provides session management. Video and audio conferencing tools can also be started through the web interface.
To coordinate concurrent interaction in a virtual environment, three different collaboration modes were implemented (WÖSSNER ET AL. 2002):
• Loose Coupling: All partners can navigate through the virtual world independently of each other; they are represented as avatars in each other’s worlds.
• Tight Coupling: Viewpoint and scale are synchronized between all participants. Whenever one of the collaborators moves the world, it moves the same way at all sites.
• Master/Slave: Same as Tight Coupling mode but only one of the participants can navigate through and interact with the virtual world.
The Loose Coupling mode is especially suited for architectural walk-throughs, where the partners explore a building or landscape at scale 1:1, whereas the Tight Coupling mode is good for joint work on a model-scale building. Master/Slave mode is best used for presentations because undesired interference from other participants is avoided.
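The three modes can be summarized in a small decision routine that determines whether a navigation update arriving from a remote site should be applied locally. This is an illustrative sketch with invented names, not the actual COVER implementation:

```cpp
// Sketch of the three collaboration modes described above.
enum class Coupling { Loose, Tight, MasterSlave };

// Should a navigation (viewpoint/scale) update from a remote participant
// be applied to the local virtual environment?
bool applyRemoteNavigation(Coupling mode, bool senderIsMaster) {
    switch (mode) {
    case Coupling::Loose:
        return false;  // everyone navigates freely; only avatars are shared
    case Coupling::Tight:
        return true;   // viewpoint and scale stay synchronized at all sites
    case Coupling::MasterSlave:
        return senderIsMaster;  // only the master's navigation is applied
    }
    return false;
}
```

In a real session the VR broker would distribute such updates as messages; the routine above only captures which messages each site honors.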
6.2 Coupling AR with VR
Augmented reality techniques can not only be used to overlay virtual objects over real ones; they can also be used to implement tangible user interfaces for 3D applications (WELLNER 1993; FITZMAURICE ET AL. 1997). Markers attached to physical interaction
devices are tracked using the same computer vision techniques as in traditional augmented reality. In a research project in the DaimlerChrysler MarkenStudio, several toy cars and further elements were equipped with different markers. A camera is positioned above a physical model, which defines the interaction area. A VRML model of the same part of the building links the markers to virtual cars, so the virtual cars can be positioned by moving the physical ones (see fig. 9). The AR sensor has been extended to support collaborative work, which means that in a collaborative session all instances of a virtual object are moved the same way as in the AR setup. This allows linking the physical model with an immersive virtual environment such as a CAVE through a collaborative session. While the cars or furniture in the physical mock-up are moved, the 1:1-scale scene in the CAVE (see fig. 10) is updated continuously to reflect the changes, so the arrangement can be judged at full scale.
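The marker-to-object linkage behind such a tangible interface can be sketched as follows. Class and member names are hypothetical; the real implementation works through the extended VRML AR sensor node and the collaborative session:

```cpp
#include <map>

// Each fiducial marker id is bound to a virtual object; moving the physical
// prop updates its virtual twin. In a collaborative session the same update
// would be broadcast so all instances of the object move identically.
struct Pose {
    double x, y, heading;  // 2D position on the model plus rotation
};

class TangibleScene {
    std::map<int, Pose> objects;  // marker id -> virtual object pose
public:
    // Link a marker (e.g. on a toy car) to a virtual object.
    void bind(int markerId) { objects[markerId] = Pose{0, 0, 0}; }

    // Called whenever the overhead camera sees the marker at a new pose.
    void markerSeen(int markerId, const Pose &p) {
        auto it = objects.find(markerId);
        if (it != objects.end())
            it->second = p;  // move the virtual car to match the physical one
    }

    Pose pose(int markerId) const { return objects.at(markerId); }
};
```

The essential point is that the physical mock-up acts as the input device: no conventional 3D interaction technique is needed to rearrange the virtual scene.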
Fig. 9: AR setup for tangible interfaces to position objects in a virtual environment
Fig. 10: Exhibition area at scale 1:1 in the CUBE, a CAVE-like environment.
7 Conclusion and Outlook
Several projects have shown that the different approaches work well. However, each technology and combination thereof has its own benefits and limitations. Whereas VR as a completely virtual environment allows more unbounded representations, AR as a context-based representation referring to the real world should be limited to representations that fit the real-world appearance (e.g. in scale and orientation).
The group discussion aspect is best supported in a CAVE-like VR environment. Due to the absence of distracting factors, the participants focus on the planning topics and become immersed in a scene very quickly. The immersion can be enhanced tremendously by accompanying auditory information.
Depending on the interface and application, the AR approach is more focused on a single-person experience. The main effect is that, in the real world with all its sensory information, information about planned projects is overlaid on reality. Combining AR with VR, using AR as a tangible interface, gives real-time feedback from a model-scale world at 1:1 scale.
VR as well as AR are still emerging technologies. Whereas VR is fairly common in production use, AR is still in its beginnings, and there is a lot of work to be done on the projection interfaces (such as the resolution of displays and glasses) as well as on tracking. This will be part of a currently planned research project, which should also evaluate the authors’ observations.
By representing the future impression of a project and combining it with further information, both technologies can improve the planning process and assist planners and laypeople in achieving better designs.
References
Behringer, R.; Klinker, G.; Mizell, D. (eds.) (1998): Augmented Reality: Placing Artificial Objects in Real Scenes. IWAR 1998, San Francisco.
Billinghurst, M.; Kato, H. (1999): Collaborative Mixed Reality. Proceedings of the International Symposium on Mixed Reality (ISMR '99): Merging Real and Virtual Worlds, pp. 261-284.
Cruz-Neira, C.; Sandin, D. J.; DeFanti, T. A. (1993): Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE. Computer Graphics Proceedings, Annual Conference Series.
Fitzmaurice, G.; Buxton, W. (1997): An Empirical Evaluation of Graspable User Interfaces: Towards Specialized, Space-Multiplexed Input. Proc. CHI '97, ACM Press, New York, pp. 43-50.
Kieferle, J.; Wössner, U. (2001): Showing the Invisible: Seven Rules for a New Approach of Using Immersive Virtual Reality in Architecture. Proceedings eCAADe 19, pp. 376-381.
Schönwandt, W. (1986): Denkfallen beim Planen. Bauwelt-Fundamente 74. Vieweg, Braunschweig/Wiesbaden.
Sowa, J. F. (2000): Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks/Cole, Pacific Grove, Calif., pp. 192-194.
Wellner, P. (1993): Interacting with Paper on the DigitalDesk. Comm. ACM, vol. 36, no. 7, pp. 87-96.
Wössner, U.; Schulze, J. P.; Walz, S. P.; Lang, U. (2002): Evaluation of a Collaborative Volume Rendering Application in a Distributed Virtual Environment. Proceedings of the 8th Eurographics Workshop on Virtual Environments (EGVE), ACM Press, pp. 113-122.
Designing a Virtual Reality Nyungar Dreamtime Landscape Narrative
Lorne LEONARD
1 Introduction
This article demonstrates the use of Virtual Reality simulation as a new pedagogical tool
for cultural landscape preservation and education. The case study examines an Australian
Aboriginal landscape conception known as the “Dreamtime”. To many Australian
Aboriginal nations, the Dreamtime is an important religious concept that describes the
meaning and creation of the cosmos; it is a complex system of signification. The prototype
virtual landscape focuses on the Dreamtime narratives from one nation, the Nyungars, who
are the indigenous nation from the southwest of Western Australia. Many forms of
Dreamtime narratives exist, and this project focuses on those that explore the landscape and
its creation. One narrative in particular, that describes the formation of the Gabbee Darbal
(Swan and Canning River) by an ancestral being called the Waugal, helped to inspire the
author to design and code a virtual reality landscape within the Cave Automatic Virtual
Environment (CAVE™) at the National Center for Supercomputing Applications (NCSA)
at Urbana-Champaign. This Nyungar narrative aided in creating a virtual landscape
narrative that helps to visualize Nyungar sacred landscapes severely affected by
colonization. A crucial principle explored by the author was how to represent Nyungar
culture, nearly erased by colonization, in a sensitive and accurate way. This
exploration, in conjunction with landscape narrative practices found within Nyungar
narratives, assisted in creating a set of design guidelines for forming the virtual landscape.
Furthermore, the methods of creating the scenes within the virtual landscape are explained.
2 Who are the Nyungars?
The Nyungars are the Aboriginal nation located in the southwest of Western Australia. In
an attempt to understand Nyungar concerns, perceptions and knowledge of the southwest
area, the author selected a specific landscape region known by the Nyungars as the Gabbee
Darbal and focused on the Whadjuk sub-nation area. Located centrally within these
landscapes is the city of Perth and its surrounding suburbs, where approximately 2 million
people currently live. Colonization has destroyed many Nyungar sacred sites/landscapes
and only a few survive. For many years, Nyungars and others have been fighting for
acknowledgement by the Australian government and the public concerning their cultural
heritage. The purpose of this project is to visualize a specific aspect of Nyungar culture,
Nyungar Dreamtime landscape narratives, in particular, the formation of the Gabbee Darbal
river landscapes. Many of these concepts are physically impossible to create and difficult to
explain without immersion. To re-create these landscapes and concepts, the author has
created a computer program and designed a virtual landscape that allows users to
experience immersion using the CAVE™. The virtual landscape design is not a replication
of the contemporary or past Gabbee Darbal landscapes but rather, a design that encourages
users to conceptualize these Dreamtime landscapes.
To discuss and represent Aboriginal issues and concerns as a non-Aborigine inherently
raises many complex issues, and to address all of these is beyond the scope of this paper.
To represent Nyungar culture, it is critical for one to challenge prejudices and stereotyped
beliefs. Ideally, the approach of this project would be to work with the Nyungar
community, to make personal connections and exchanges at all stages of this project. Due
to logistical constraints, this project relies on the published information that Nyungar
people have generated and shared with other communities, focusing on narratives that
relate to the Dreamtime. Furthermore, this project examines ways to represent these
narratives in conjunction with the author’s perceptions and experiences of growing up
within the study site region. It explores the interplay of the author’s cultural identity of
what it means to be ‘Australian’, with the Gabbee Darbal landscape being a fluid space for
negotiation. Presently, the stories seen and told amongst the Gabbee Darbal landscapes are
predominantly non-Nyungar. These landscapes hold both cultural and natural history
implicit to Nyungars and non-Nyungars, and they continually generate material for
further and new types of narratives. Multiple narratives with objective and subjective
elements that interconnect at places throughout the Gabbee Darbal landscape are yet to be
told. These places are continually engaged in symbiotic and ongoing relationships with
people and the landscape, relationships that extend well beyond any one place and the
borders of our past and present ‘versions of time’. They are Dreamtime narratives.
3 The Dreamtime and Landscape Narratives
Australia’s landscape is ancient and unique. As one walks through such a landscape, one
can start to perceive a rich history and traces of one of the longest living civilizations, the
Australian Aborigines. Australian Aborigines did not build temples, but instead, they
revered the landscape by creating elaborate narratives about their culture and their
connections to the landscape. The Arrernte Aborigines ‘shared’ concepts about their
religious practices and values with anthropologists Spencer and Gillen, who translated
these concepts into the English language and coined the term “Dreamtime” (Spencer & Gillen).
Dreamtime reflects Aboriginal cosmologies and religions, and in particular, the creation of
the universe. The ancestral beings, also known as “spiritual” or “mythical” beings, created
the universe through their epic journeys. Many Aboriginal narratives describe the ancestral beings’
ability to transform many times into other forms, such as humans, animals, or inanimate
objects. A common ancestor amongst Aboriginal nations is the “rainbow serpent”. As the
rainbow serpent and other ancestral beings created the world, the process was ‘mapped’
onto the landscape. The Dreamtime realm is independent of linear time; rather it is another
dimension of reality (Morphy). For many Aborigines, the Dreamtime has never ceased to
exist, with no beginning and no end. The Dreamtime is the basis of the present, and
influences the future. With this belief, the present is as much a feature of the future as it is
of the past. The Dreamtime relates to space and time, referring to the origins and powers
that are located in places and things; they can be described as landscape narratives.
3.1 Landscape Narratives
The Dreamtime connects events, sequences, memory, space, and other abstractions to the
more tactile aspects of place. Its narratives order and configure the experience of space into significant
relationships, offering ways of knowing and shaping landscapes. To truly experience the
Dreamtime is to transcend boundaries and resonate with other dimensions of experience. In
a sense, the Dreamtime represents stories that all Australians participate in and these stories
shape the Australian landscape. They are a process of remembering and interpreting the
landscape; they give form to space and experience. To participate and follow the
Dreamtime is more than just listening to a story. As it tells of origins, explains causes,
marks the boundaries of what is perceivable and explores the territories beyond (what is
told), it is a narration (Potteiger). Furthermore, the Dreamtime consists of stories expressed
(means of telling) through the landscape, orally and through other forms of art media
(dance, paintings and film). For many generations, Aborigines have practiced mapping
landscapes into the very texture and structure of stories. Many of these Dreamtime
narratives discuss how the ancestors shaped the landscape, and in turn how places
configure narratives, as opposed to considering the landscape as only the background
setting. Thus, in the Dreamtime, the landscape is integral to the narrative and hence they
are landscape narratives (Potteiger).
3.2 Following Nyungar Dreamtime Narratives
To understand these narratives is to read the landscape through association, events and
stories that are part of the physical and non-physical form of the Australian landscape. To
‘read’ the landscape requires guidance. One way to navigate or follow an established
hierarchy within the Dreamtime in a spatial manner is through what some Aboriginal
nations refer to as song cycles or songlines. Songlines weave the ‘dreamer’ through the
landscape, following ancestors’ events, and enable the dreamer to recount specific tales,
allegories or social narratives. As Aborigines (re)-create songlines, they place elements to
form sequences, and they interpret empty spaces and create meanings; thus, Aborigines
become authors. In addition, the landscape informs Aborigines about cultural and natural
processes by recording these changes; in effect, the landscape tells a story. These changes
or effects may or may not be ‘natural’. For example, Nyungars used fire to hunt kangaroos
and have narratives describing such events (Bates). With fire, Nyungars changed the
landscape, creating places that tell new narratives. Often these narratives discuss the
progressive stages of actions or events of nature. Water, being essential for life, dominates
many of these narratives. The Waugal ancestor is the protector of water environments and
there are many Nyungar narratives describing how it creates water features (rivers, streams
and aquifers) that constitute and shape the Gabbee Darbal landscape. For example, the
following describes characteristics of water flow: “Noongar from out around Brookton and
York talk about how the Waakarl came out of the earth. It went different ways, making
tracks through the hilly country. Sometimes it went kardup boodjar (under ground) and
sometimes it went yira boodjar (over ground). The Waakarl’s kaboorl (stomach) pushed the
boodjar (earth) and boya (rocks, stones) into kart (hills). You can see the Waakarl’s path in
the shape of the boodjar (ground / land)” (Collard 2000).
4 Design Strategy
Actions by the Waugal formed the basis for developing a design strategy; however, as the
author is not a Nyungar, it became crucial to investigate ways to ‘represent’ Nyungar
culture. An essential and difficult task was to discover personal and others’ perceptions
about Australian people and place, and to question various versions of information
portrayed. It was important to reveal racist discourses and seek answers to questions in an
accurate, non-racist, and sensitive manner from the images and text that ‘represent’
Aboriginal culture. This project represents the ‘final’ stage of many such processes that the
author explored and developed. Briefly, in an attempt to be ‘accurate’, information
represented and used focused on the micro and specific interests of Nyungar culture,
keeping in mind the following limitations: (1) That information is interpretive and is an
artifice. (2) That the designer has a role and impact on guiding users’ perceptions, only
being able to represent information from a selection of Nyungar accounts that is perhaps
similar to other viewpoints. (3) To recall and critique that, at times, some resources are the
result of individual subjectivity. (4) To be wary of the hazard that the audience perceives
specific experiences as ‘typical’ of an entire community, or that an individual’s voice is
perceived as the voice of the whole community. In this project, Nyungars appear
linguistically as agents in the production of knowledge and as an inspiration for creative
activity and interpretation. This project is part of an ongoing process of understanding the
ways Australian people create culture and history. It derives from and reacts against
historical representations and symbols of Aboriginality: finding new ways to ‘view’ and
understand the Gabbee Darbal landscapes using virtual reality, rather than emphasizing the
differences or the contrasts between people. The project aim is not to construct a
replication of the Gabbee Darbal landscape as it was before colonization or how it is today.
Rather, the aim is to explore the entangled memories of the past, the present and ultimately
the future and their relationship with the inter-subjective state of experience.
The methodology to formulate ideas or guidelines to design the virtual landscape was based
on 5 landscape narrative practices as discussed by Potteiger: Naming, Sequencing,
Revealing and Concealing, Gathering, and Opening. The majority of participants using the
CAVE™ version of this project are likely to have limited knowledge about Australian
culture. Thus, it was important to create a strategy to introduce users to this knowledge.
The final design strategy consists of 7 distinct scenes to form the following sequence and
purpose: (1) Introduction to ‘Australia’ (Fig. 1), (2) Introduction to ‘Aboriginality’ (Fig. 2),
(3) Introduction to southwest of Western Australia (Fig. 3), (4) Abstraction of Nyungar
nation (Fig. 4), (5) Introduction to Gabbee Darbal landscape culture (Fig. 5), (6)
Abstraction of Waugal’s beginnings (Fig. 6) and (7) Interpretation of the Waugal’s
influence on the Gabbee Darbal landscape. The following discusses the application of
sequencing, revealing and concealing practices in the final scene of the virtual landscape.
Fig. 1: Scene 1 Fig. 2: Scene 2
Fig. 3: Scene 3 Fig. 4: Scene 4
Fig. 5: Scene 5 Fig. 6: Scene 6
The main plot and navigation through the final scene of the virtual landscape follows the
formation of the Gabbee Darbal by the Waugal. The sub-plot of this narrative provides
meaning and orders events and actions that are part of a complex network of landscape
elements. The arrangement of such events within a sequence is important as it affects how
the reader interprets them. Due to the complexity and great number of actions and events,
some of them were ‘captured’ using symbols of concentric circles that denote sites in close
proximity to their geographic location (Fig. 7). Another concept behind the arrangement of
these symbols is to show layers of information that overlap, intersect and combine with
transparent layers to create and reveal a deeply complex and ambiguous narrative. For
example, in the final scene, the Waugal’s path is used to expose the above and below
drainage systems of these landscapes, to make transparent what is concealed. These spaces
are divided further with 6 layers (representing the Nyungar seasons) to create a complex
space that reveals and conceals seasonal contextual information. Therefore, not only do
these strategies create an interesting space, but they also increase the complexity of
interpretation and meaning for the user (Fig. 8).
Fig. 7: Concentric Circles Fig. 8: Season Layers
Before designing and modeling any part of the virtual reality landscape, it was critical to
evaluate and establish a set of limits for the capabilities of the CAVE™ hardware. To
maintain high frame rates, the virtual landscape needed to be constructed in a way that
required a minimum of polygons, using computer graphics techniques and effects to achieve the
same visual effects as one would with higher polygon counts. In brief, some of the
techniques used include: (1) An octree structure for space partitioning and frustum culling
to reduce the number of faces rendered in each frame. (2) Reuse of textures applied within
scene models. (3) Use of simple world model designs (planes for example) as opposed to
complicated geometrical structures. (4) Collision detection to restrict user movement within
the world. The virtual landscape consisted of 50,000 faces in its construction (created using
3D Studio™), and the average real-time display frame rate is 50 fps, a slight reduction from the ideal 60 fps (30 per eye). The computer program was written in C++ using the OpenGL® and CAVE™ libraries. As the project was approached as a prototype, C++ and OpenGL® were chosen purposely so that the program could run not only in the CAVE™ but also on typical computers, making it accessible to a greater number of people.
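The frustum-culling step from technique (1) above can be sketched as follows: a scene node (e.g. an octree cell) whose bounding sphere lies entirely behind any frustum plane is skipped, so its faces are never sent to the renderer. The plane and sphere representations and function names are illustrative, not taken from the project's code:

```cpp
#include <vector>

// A frustum plane in Hessian form: n·p + d >= 0 means "inside".
struct Plane {
    double nx, ny, nz, d;
};

// Bounding sphere of an octree cell or scene node.
struct Sphere {
    double x, y, z, r;
};

// A node is culled when its bounding sphere is fully behind any one plane.
bool outsideFrustum(const Sphere &s, const std::vector<Plane> &frustum) {
    for (const Plane &p : frustum) {
        double dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
        if (dist < -s.r)
            return true;  // entirely behind this plane: skip the whole cell
    }
    return false;  // intersects or is inside: traverse and render
}
```

Applied recursively to an octree, this test rejects whole subtrees at once, which is what keeps the rendered face count low enough for the 50 fps target.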
5 Virtual Dreamtime / Dreamtime Machine
After finalizing a set of design strategies, the next step was to develop a master plan. The
plan involved the creation of symbols, based on those used by some Aboriginal groups to
tell a narrative or paint a ‘landscape map’. The purpose of these symbols was to reveal to
the user in an abstract expression, the transferal of information that takes place within the
Dreamtime landscape or, metaphorically in this project, a Dreamtime machine. Symbols
were created to designate landscape features and significant areas, and to map sequences.
Figure 9 illustrates the final design. There are 4 regions, (1) the Australian continent, (2)
southwest of Western Australia, (3) Gabbee Darbal landscapes, and (4) the Universe web
connection.
Fig. 9: Master Plan Design of Virtual Landscape
Within the region representing the Australian continent (1), a path guides the user from the
Sydney and Canberra region (‘the center of power in Australia’) to Uluru. Uluru represents
the center of Australia and is a significant symbol of Aboriginality. Next, the path guides
the user to the southwest of Western Australia (2), the area of the Nyungar nation. Change
of scale is common in Aboriginal paintings and has been adapted to scale the Nyungar
nation area in the master plan. The expansion allows more events and details to be included
in the design. Next, the path leads the user to a ‘leaf’, 1 of 7 leaves that constitute the
Gabbee Darbal landscape (3). The leaves form a circle, with the center being Uluru, ‘the
center of Australia’ and symbolic of its importance. One leaf represents each Nyungar
season (a total of 6), while the seventh represents an accumulation of all seasons. The main
concept of the leaves is to distinguish characteristics of the Gabbee Darbal landscape
during each season, and to visualize connections and associations with Nyungar
nomenclature. In addition, woven through all levels of the landscape was a web structure
representing the connections of the Universe (4). The landscape-painting layout consisted
of symbols that denote place (circles) and journeys (wavy lines). Each scene combines,
expands, merges and morphs two basic symbols to form unique spaces that resemble
symbols found in Aboriginal motifs. The next step in creating the virtual landscape was to
create models from this plan without exhausting the available hardware resources required
to operate the computer program at ideal frame rates. The first approach in this step was to
simplify the original design and to remove parts of the model seen for short periods only,
or not seen at all by the user. In addition, the design entailed using many curved shapes,
requiring the use of many polygons to maintain a smooth edge. Many of these shapes
were not integral to conveying important information to the user. Thus, the shapes were
simplified, using fewer curves and textures to achieve the same effect. The
following describes how these techniques were applied to scenes 6 and 7, the journey of
the Waugal ancestor.
Scene 6 is positioned at the highest point within the virtual landscape model to symbolize
the part of the journey where the Waugal came down to form the hills and rivers. Having
this scene at the highest position not only symbolizes the magnitude of the Waugal’s
journey but also gives the user an impression of a story climax and a new path to follow in
the virtual landscape (Fig. 6). The final scene is an interpretation of the Waugal’s influence
and force upon the Gabbee Darbal landscape. Rather than modeling all 7 leaves from the
landscape plan design, the 7th was symbolically modeled as it represents all seasons, and
combines information in a way that is easier for users to understand. To aid in this process,
the Gabbee Darbal path was modeled and placed in the upper portion of the scene to
symbolically denote not only the Waugal’s journey, but also to encapsulate the scene’s
contents (Fig. 10). In addition, multiple paths that are textured with a warped serpent
image, intersect many significant places in the virtual world to demonstrate and
characterize to users the complexity of the Waugal’s actions and relationship with the
Gabbee Darbal landscape. These places of significance were marked using concentric
circles to denote sacred places (Fig. 11). The use of these symbols was an important design
strategy as it allows users to visualize the complexity, symbolism, and relationships
between people and place. Within this scene, the largest concentric circles denote important
stages of the Waugal path. The circles are color-coded: Blue represents people, red denotes
landscape features, and yellow refers to place names. The markers are placed relative to the
Gabbee Darbal rivers and according to their seasonal context. Another design strategy was
to model the seasons using ribbons to create new spaces by dividing the river landscape
and symbols into contexts of seasonal influence (Fig. 12). Each season layer was assigned a
unique rainbow color; the colors merge at the scene’s ending to form a rainbow configuration at
the resting place of the Waugal (Fig. 13). Surrounding the Waugal’s resting place is
imagery of the Milky Way galaxy and a rendered image of the virtual reality landscape
design to symbolize the heavens of the cosmos. As the user moves forward, they return to
the first scene of the virtual reality world symbolizing that their journey is one of many,
and a never-ending cycle or rhythm of the Gabbee Darbal landscape.
Fig.10: Gabbee Darbal path Fig.11: Concentric Circles
Fig.12: Seasons Fig.13: Resting Place of the Waugal
6 Project Future Directions
From initial user comments, it is clear that people felt connected in a spiritual way and
immersed in the final scene. Users commented that they felt like they were swimming
and/or dreaming. Having users immersed does not guarantee that they perceive the virtual
reality landscape scene as a religious concept (the Dreamtime) nor as a complex river
landscape network. What users learned and/or experienced was not documented and future
projects will consider documenting users’ perceptions. The original design of the virtual
landscape incorporated multiple songlines; however, in this prototype the user had only one
songline to follow. Such a design procedure is similar to the role of a director in
filmmaking, in that its aim is to convey meanings and to get users immersed in the
narration. However, the advantage of creating a virtual landscape is that the world scene
can be set up for users to manipulate the world as desired, to create an open virtual reality
landscape that answers the “what if” questions concerning the issues of design, knowledge
and experience. The virtual landscape created in this project is not an open landscape; users
are not able to manipulate or add to the world. Having such features will not only increase
the complexity in experiences and interpretations, but could assist in interpreting how users
perceive and experience the virtual landscape (what they learned).
The question of what to model is an important issue to address in future projects. Further
dialogue is needed with the Nyungar communities, to inspire further questions and create
guidelines that would help clarify what to model. A network structure with nodes that have
a particular detailed landscape may provide more guidance and spiritual experiences
similar to those that the Nyungar community communicates through the Dreamtime. The
author is currently in the process of updating and creating the necessary tools to incorporate
these elements into the virtual landscape. Future projects aim to incorporate the Nyungar
community and other communities to create virtual tools to share and educate others about
their cultural landscape heritages. Virtual reality can be used to ‘map’ how these
communities interact, experience and understand landscapes. For example, one project the
author would like to pursue is to compare and contrast an unknown landscape such as the
Gabbee Darbal landscapes to a ‘well-known’ landscape, such as the Nazca Lines in Peru.
Such comparisons and research could assist in answering the following questions.
• How do users create new and/or use previous songlines in a virtual reality world?
• What parts of the landscape would they modify?
• What components of the landscape aid in creating their own songlines?
• When following another person’s songline, does the follower understand or mimic
the perceptions of the landscape in the same way as the songline creator?
Such a comparison could assist not only in further understanding the processes of cultural
landscape heritage, but also in improving design solutions for the navigation, perception,
and model qualities (i.e. level of detail) required by users within virtual reality landscapes.
Note: This article was based on the author’s master’s thesis. For further information the
author can be contacted at leonard@ncsa.uiuc.edu or at lorne_leonard@hotmail.com.
The Digital River Basin: Interactive Real-Time Visualization
of Landscape Processes
Douglas M. JOHNSTON
1 Introduction
Public debates on science-based policy continue to grow in breadth and intensity. In the
sector of land and water resource policy, substantial public uncertainty exists regarding
such topics as the nature and management of global warming, forest fires, urban sprawl,
and river systems. At the heart of this uncertainty are the overwhelming complexity of
interactions within these systems, avalanches of data from many sources, and the mediation
of information within tight disciplinary boundaries.
We increasingly live in a digital world in which tremendous quantities of data are
produced, and a wide range of computer simulations are used to design and test products
and policies that affect the public’s lives. Yet datasets and models about the world around us
have accumulated far faster than the abilities of citizens to make sense of them. The tools
for converting data into information still largely reside in the hands of professionals.
Digital datasets provide unprecedented potential for individuals to engage in self-directed
inquiry that can reveal interconnections and causal relationships not possible through any
other kind of medium. People are increasingly digitally connected; however, they rarely
engage in any kind of data inquiry, and their digital experiences are largely mediated by
intermediaries that convert data into information, just as has always been the case with
books, newspapers, magazines, radio, and TV.
With support from the National Science Foundation, the Science Museum of Minnesota,
Illinois State Museum, St. Louis Science Center, and the University of Illinois/National
Center for Supercomputing Applications established the Mississippi RiverWeb Museum
Consortium. The Consortium in turn conceived of and created the Digital River Basin
(DRB) as a prototype for a novel, inquiry-based learning experience. The DRB uses
state-of-the-art computer-based modeling and visualization tools to promote experiential
learning about the Mississippi River system through direct visitor interaction with real,
scientific datasets.
The DRB includes a large format, interactive display that portrays the St. Louis stretch of
the Mississippi River and three touchscreen consoles that allow visitors to construct and
control their own digital explorations of the river. Using real datasets, the DRB presents a
vivid and dynamic representation of the River and the processes that contribute to its
behavior and characteristics. The content includes river ecology, hydraulics, and
management; it introduces systemic principles and allows visitors to
emulate scientific research methods as they examine the forces shaping this portion of the
Mississippi River Basin.
2 Display Environment
The Digital River Basin is composed of several interdependent display environments
including a large format interactive display of an entire river reach with display insets of
related information, and satellite touchscreen display modules portraying the river in both
two and three-dimensional real-time rendering modes (Figure 1). Several factors drove the
development of these display settings. First, due to cost constraints, the display and
underlying technologies had to be assembled using stock commodity products. Second,
while the exhibit is also about the role of scientific computing and visualization in science,
traditional computer interfaces have, for many, failed to provide successful interaction
between the visitor and the exhibit environment. Third, the exhibit must maintain high
levels of interaction with both individual and small group user communities.
Figure 1: Digital River Basin (DRB) Configuration
Common Display
A digital display of an entire 30-40 mile river stretch is projected on a large (4x6 ft.) table
(Figure 2). A particle visualization dynamically portrays the flow of the river, while a
landcover background map provides locational context. Visitors individually or
cooperatively query available data for the river stretch by moving simple instruments
across the flat visual representation. The location and orientation of these instruments or
“tools,” is tracked by pattern recognition software using an overhead infrared video camera
and an image capture card on the computer. The display software renders site-specific
information on selected river basin features such as place names, land cover, elevation, and
flood risk, as well as dynamics, e.g., channel flow and erosion as the visitor chooses
locations in the basin (Figure 3).
Figure 2: Common and Console Display Environment
Figure 3: Common Display Tools
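As a rough illustration of this mapping, the conversion from a tracked tool position on the table to a site-specific data query could be sketched as follows. All names, coordinates, and the linear mapping here are the author's illustrative assumptions, not the exhibit's actual code:

```python
# Hypothetical sketch: convert a tracked tool's position on the table
# display (in pixels) to geographic coordinates, then look up the value
# of a raster layer (e.g., elevation or land cover) at that location.

def table_to_geo(px, py, table_w, table_h, geo_bounds):
    """Linearly map a pixel position on the table to (easting, northing)."""
    min_e, min_n, max_e, max_n = geo_bounds
    easting = min_e + (px / table_w) * (max_e - min_e)
    northing = max_n - (py / table_h) * (max_n - min_n)  # screen y grows downward
    return easting, northing

def query_raster(raster, cell_size, origin, easting, northing):
    """Return the raster cell value containing the given coordinate.
    `origin` is the (easting, northing) of the raster's top-left corner."""
    col = int((easting - origin[0]) // cell_size)
    row = int((origin[1] - northing) // cell_size)
    return raster[row][col]
```

A tool at the center of a 1024x768 table over a 1000x1000 m extent would map to the center of that extent, and the returned cell value could then be rendered in the tool's display inset.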
Console Display
Placed in proximity to the common display, the console displays allow visitors to navigate
through and explore a real-time 3d representation of the river basin (Figure 4). The 3d
rendering uses
the same data as that in the common display but creates a user-centric view. The touch
screen consoles support two distinct but complementary modes of navigation. Visitors can
“jump” from location to location by touching a flat index map corresponding to the river
stretch depicted in the common display. Doing so takes them to the corresponding location
in the 3d scene. Alternatively, visitors use physical controls to move directly within the 3d
scene. A joystick supports rotation and forward and back motion, while a throttle-like lever
controls elevation above or, when exploring the channel and other bodies of water, below
the surface. Visitors can fly high above the floodplain, then descend to and explore around
points of interest within the vicinity.
Figure 4: Console Display Interface
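The two navigation modes described above can be sketched as a minimal camera model. This is a hypothetical Python sketch, not the exhibit's actual control code; the class and method names are the author's:

```python
import math

class ConsoleCamera:
    """Minimal sketch of the console's two navigation modes:
    index-map jumps, and joystick/throttle movement."""

    def __init__(self, x=0.0, y=0.0, z=100.0, heading=0.0):
        self.x, self.y, self.z, self.heading = x, y, z, heading

    def jump_to(self, x, y):
        """Index-map touch: teleport to the corresponding location."""
        self.x, self.y = x, y

    def joystick(self, rotate_deg, forward):
        """Rotate, then move along the current heading."""
        self.heading = (self.heading + rotate_deg) % 360.0
        rad = math.radians(self.heading)
        self.x += forward * math.sin(rad)
        self.y += forward * math.cos(rad)

    def throttle(self, dz, floor=-30.0):
        """Change elevation; negative values allow dipping below the
        water surface, down to an assumed channel-bottom floor."""
        self.z = max(floor, self.z + dz)
```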
Other touchscreen controls allow visitors to engage in activities such as taking
guided tours, sampling fish species, investigating rainfall runoff, channel flow, flooding, or
turbidity, and exploring local and system effects of human cultural features such as
navigation dams and levees. Visitors can also pilot a tow barge up and down the river
under different flow conditions with different size tows (Figure 5).
Figure 5: River Pilot Simulator Activity
3 DRB Structure
The digital river basin is structured as a modular system of interacting components with the
landscape acting as the organizing environment for free exploration of content.
Conceptually, the system is organized around three layers. At the lowest level are
databases of geographic data, model parameters, system status records, and configuration
settings. The intermediate level contains models and computer code for visitor interaction,
display management, database querying, and simulation modules. Activities and
interactions are defined here as both generalized and specific functions. The top layer is
represented by the visitors’ activities and experiences, which can be constructed in a
dynamic manner through visitor selection by location, tool operation, etc. The combination
of layers helps to ensure a rich and changing visitor experience but also permits
extensibility to other data or contexts.
Figure 6: DRB Schematic Structure
3.1 Data
Data used in the project can be classified as geospatial, simulation, and
interpretative/narrative. Data are used to construct the visualized environment of the river
basin for context, as well as to provide content for observation and interpretation.
Geospatial Data
Digital elevation data forms the framework for the virtual world visualized in the DRB.
DEMs were derived from standard USGS 7.5 minute data series. Bathymetric data for the
main river channels was obtained from the U.S. Army Corps of Engineers, who are responsible
for maintaining the navigable portions of the riverways and flood control structures such
as levees and floodwalls (dams along the river system are used for maintaining navigation,
not flood control). The DEM data were aggregated for use in surficial modelling tasks such
as erosion and flood runoff, but performance issues required the data to be post processed
using triangulation decimation and tiling of data fields. Decimation was based on gradient
and other elevation derivatives to preserve important topological structures such as river
bluffs and channels (Figure 7).
Figure 7: Underlying Geometric Structures
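A minimal sketch of gradient-based vertex retention follows, assuming a regular-grid DEM and a simple slope threshold. The production decimation also used other elevation derivatives; this illustrates only the basic idea of keeping steep features (bluffs, channel banks) while dropping flat terrain:

```python
def gradient_magnitude(dem, i, j, cell):
    """Central-difference slope magnitude at interior cell (i, j)."""
    dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
    dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell)
    return (dzdx ** 2 + dzdy ** 2) ** 0.5

def keep_vertices(dem, cell, threshold):
    """Return the interior vertices whose slope exceeds the threshold.
    Flat terrain is decimated; steep features survive the triangulation."""
    kept = []
    for i in range(1, len(dem) - 1):
        for j in range(1, len(dem[0]) - 1):
            if gradient_magnitude(dem, i, j, cell) > threshold:
                kept.append((i, j))
    return kept
```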
Landcover data were derived from a variety of sources. For the St. Louis and Illinois
museum segments, USGS Gap Analysis Project data sets were used. For Minnesota,
metropolitan and state databases were merged to create a regional landcover. As each state
developed its own classification system, a joint landcover classification system was
derived. Soils data were obtained from USDA STATSGO soils data, and are used
primarily in erosion modelling tasks. Other data included river flow and stage gauge
records, fish sampling stations, water quality observations, navigation features (locks,
dams, buoy locations) and other relevant information (species abundance, range, cultural
feature identification and location, etc). Rarely has such a mix of data been assembled into
a single environment.
Simulation Data
Each simulation model resulted in data outputs to be used in visualization of landscape
processes. In general, the models produced both scalar and vector data sets.
The hydrodynamic simulations produced scalar data such as water surface elevation, flow
depth, and sediment concentrations. These data sets were visualized using both geometric
models (water surface elevation) as well as color gradients, along with numerical reporting
of values (via tools and display functions).
The hydrodynamic simulations also produced vector data in the form of velocity fields for
channel and overland flows. Vector data were visualized using multiple representations
including glyphs and particle simulations, as well as numerical reporting.
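One such scalar-to-color mapping might look like the following sketch, which assumes a simple linear blue ramp for flow depth; the actual color gradients used in the DRB are not specified here and may differ:

```python
def depth_to_color(depth, max_depth):
    """Map a flow depth in [0, max_depth] to an RGB blue ramp:
    shallow water renders light, deep water renders a saturated blue.
    Depths outside the range are clamped."""
    t = max(0.0, min(1.0, depth / max_depth))
    return (int(200 * (1 - t)), int(220 * (1 - t)), 255)
```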
2D and 3D Models
In this first phase of development, 3d models were used primarily to render land use
variations and also to provide local landmarks. Examples include the St. Louis Gateway
Arch, bridges, and recognizable buildings. Other 3d objects include the barge models used
in the river pilot simulator, and simple building structures to represent urban areas. Other
2d objects were used as “scenic billboards”, including trees and icons representing narrative
points of information within the landscape (Figure 8).
Narrative Data
In addition to the 2d and 3d data, numerous datasets were created to provide a
deeper layer of information at specific locations. A primary feature for navigating this
narrative data included the use of icons within the landscape coordinated through the
concept of a field guide. The field guide provides greater detail via text, images, and
graphics including maps and animations (Figure 9). Animations were created to describe
numerous processes occurring in the landscape at time scales beyond the real-time
experience of the visitor. Panoramic images were georeferenced to provide a photographic
image of specific locations within each museum’s stretch of the rivers. Visitors use the
navigation controls to rotate themselves through the image (and the virtual space) so the
image and orientation of the visitor in the DRB remain consistent. Narrative data were
developed for ecosystem features, cultural features, and geomorphic processes occurring
within the river basin.
Figure 8: 2D and 3D Objects and Textures
Figure 9: Narrative Data in Context
3.2 Simulation Models
The heart of the dynamic data consists of simulations (RMA2 and FESWMS) of two-
dimensional flow in river channels and floodplains. A 2d finite element representation of
the floodplain topography and bathymetry was used. Boundary conditions for simulations
were drawn from historic observed stream flow data. The flow simulations were
undertaken to portray the nature of flow in the river and the implications of variation in the
conditions of flow (Figure 10). Thus numerous scenarios were run for each museum,
including flow simulations under different weather conditions (low, average, and high
flows), and under different management strategies (with and without dams, with levees
added and removed) (Figure 11). Sediment transport in the rivers was modelled with the
hydrodynamic flow simulations using sediment load data collected from observation
stations within the river basin. Because of the size of the reaches (20-40 miles) and the
need for high-resolution for visualization and the river pilot simulator, the model execution
stretched the resources of the modelling software. Typical problem sizes for these
simulations were 30,000-50,000 elements with up to 100,000 nodes. Simulation times ranged
from minutes to hours, depending on model convergence properties, etc.
Figure 10: Scenario Management Example.
Figure 11: Output of 2d Finite Element Flow Simulation, showing flow velocity and extent
without and with levees
Other simulations include erosion/deposition modelling using a stream-power based soil
detachment and deposition formulation (USPED, Mitasova et al. 2000), sediment transport
in the river channel, and a tow simulation using basic free-body motion equations.
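The stream-power idea behind USPED can be sketched as follows: transport capacity is a power function of contributing area and slope, and net erosion or deposition follows from the divergence of that capacity along a flowline. This is a simplified 1D illustration under assumed exponents; the published formulation includes additional USLE-type factors and a full 2D divergence:

```python
import math

def transport_capacity(area, slope_deg, m=1.6, n=1.3):
    """Stream-power sediment transport capacity, T = A^m * sin(b)^n.
    (Rainfall, soil, and cover factors are omitted for clarity; the
    exponents m and n are assumed illustrative values.)"""
    return (area ** m) * math.sin(math.radians(slope_deg)) ** n

def erosion_deposition(capacities, cell):
    """Net rate along a 1D flowline as the forward-difference divergence
    dT/dx: positive values indicate erosion (capacity increasing
    downslope), negative values indicate deposition."""
    return [(capacities[i + 1] - capacities[i]) / cell
            for i in range(len(capacities) - 1)]
```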
3.3 Visualization
Interaction with, and visualization of, the landscape’s static and dynamic data forms the
core challenge of the experience. Balancing detail, level of realism, and modes of
navigation demanded considerable discussion and debate. With the landscape acting both
as the object of the visualization and the means of communication (a virtual gallery)
numerous issues emerge with respect to computational performance, visitor navigation and
orientation, and identification of information resources.
Performance objectives for the DRB included rendering near 15 frames per second for
smooth action and response to visitor actions at resolutions of 1024x1280 pixels. User-
induced changes to the display environment required the ability to swap data sets in and out
of the graphics environment from random access memory and disk cache. The
visualization environment is built on SGI’s Performer graphics environment
(http://www.sgi.com) in addition to considerable amounts of custom C++ code, and is
readily ported to CAVE (immersive visualization, http://cave.ncsa.uiuc.edu) environments.
The museum exhibits, however, are not stereoscopic, and not photorealistic. Stereo
imagery was precluded by both cost and usability concerns. With thousands of visitors,
wear and tear and public health concerns dictated that the user not be required to wear an
apparatus. The objective of group interaction and socialization also precluded personal
display equipment.
Features employed to achieve the performance goals include terrain mesh decimation;
liberal use of textures to enhance realism without increasing geometry; tiling and levels of
detail; and far clipping planes with atmospheric effects, together with a recognition that
high-resolution geometry, collision detection (except for the terrain surface), shadow
casting, and other techniques were not necessary to convey the content of the data. For example, it
was decided not to use aerial photography as part of the exhibit because it was too realistic
and would distract users by having them search for familiar landmarks instead of exploring
other available content.
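Distance-based level-of-detail selection with far-plane culling might be sketched as follows. This is illustrative only; SGI Performer provides its own LOD scene-graph nodes, and the range thresholds here are hypothetical:

```python
def select_lod(distance, lod_ranges):
    """Pick a level of detail for a terrain tile from its distance to the
    camera. `lod_ranges[i]` is the maximum distance at which LOD i (0 =
    finest) is used; tiles beyond the last range (the far clipping plane)
    are culled entirely (return None)."""
    for lod, max_dist in enumerate(lod_ranges):
        if distance <= max_dist:
            return lod
    return None  # beyond far clip: cull
```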
Research issues in the visualization of three-dimensional spaces include loss of orientation,
difficulty in spatial judgment, and other factors (Ellis and Johnston, 1999); these issues
influenced the design considerations. Previous experience (Johnston, 1998) indicated the desirability of
maintaining exo-centric perspectives while navigating in the 3d space, thus an index map
with the user’s location and direction is provided on the console display. In addition, the
location of each console user is presented on the common display to reinforce orientation
and visitor interaction. Other issues in interaction include selection of objects in 3d space.
The DRB provides several modes for selection. One mode consists of a “sphere of
influence”, that is, a feature becomes active when the visitor approaches it (or its
iconographic representation). Most narrative features are activated in this manner.
Alternatively, visitors may touch icons in the landscape to invoke actions. A 3d selection
algorithm is employed to enable visitors to select both near and distant objects. Finally,
objects can be pre-selected via a “tour”. Selected features are displayed both in the 3d
display and on the 2d index map. The visitor may touch the index map to jump to and
activate the feature, or can navigate to the feature and activate it as described previously.
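The "sphere of influence" mode amounts to a simple proximity test, sketched below with hypothetical names; the DRB's actual activation logic is not published here:

```python
def active_features(visitor_pos, features, radius):
    """Return the names of features whose anchor point lies within the
    visitor's 'sphere of influence'. A squared-distance comparison
    avoids the square root in the per-frame test."""
    r2 = radius * radius
    vx, vy, vz = visitor_pos
    hits = []
    for name, (fx, fy, fz) in features.items():
        if (fx - vx) ** 2 + (fy - vy) ** 2 + (fz - vz) ** 2 <= r2:
            hits.append(name)
    return hits
```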
4 Results
Embedded in the Digital River Basin is a comprehensive set of data, experiences, and
interactions. It serves as a prototype for a naturalistic experience of virtual environments,
blending familiar concepts of mapping and gaming technologies to create an intuitive
means for exploring a landscape unconstrained by the physical bounds of space and time.
As a museum exhibit, the Digital River Basin has proven to be very successful, both in
formal and informal evaluation settings. Exhibit evaluators have observed unprecedented
levels of engagement both in terms of attention time and social interaction within and even
between visitor groups. While the learning objectives for the exhibit are being met to varying
degrees, indications are that the direction of learning is correct and that further refinement
of visualizations and interactions will enhance these outcomes. At present, the depth and
breadth of information available is somewhat daunting to casual visitors.
5 Conclusions & Outlook
While created for a museum visitor experience, the DRB is designed to be generalized as a
data visualization and interaction environment for a wide range of landscape-scale problem
domains. An overriding goal for the DRB was that it follow a modular, hierarchical object
design that can be adapted to defined stretches of the river, and which can also be
extensible to accommodate further data and computer simulation modules at a later stage.
As such, the DRB environment is able to incorporate mixed datasets, more detailed data
and visualizations, and novel interfaces through which visitors can query the data via
combinations of physical and “virtual” tools and navigate seamlessly from over or on the
floodplain to the river’s surface and beneath. In achieving increased robustness and
generalization, the DRB will be well positioned to serve as a medium for professional and
public discovery and discussion of landscape issues.
6 References
Johnston, D. M. (1998): TRACES: Revealing Nature Through Models of Landscape Dynamics.
Landscape Journal, 4-5.
Ellis, C. & Johnston, D. M. (1999): Qualitative Spatial Representation for Situational
Awareness and Spatial Decision Support. Lecture Notes in Computer Science, 1661: 449-460.
Mitasova, H., Mitas, L., Brown, W. M. & Johnston, D. (2000): Terrain Modeling and Soil
Erosion Simulation: Applications for Evaluation and Design of Conservation Strategies.
Report for USA CERL. University of Illinois, Urbana-Champaign, IL.
Real World / Virtual Presentations: Comparing Different
Web-based 4D Presentation Techniques of the Built
Environment
Joseph BLALOCK
1 Introduction
The World Wide Web has had a great effect on the display and retrieval of information,
building on the Internet’s origins in the 1960s and especially since the Web’s widespread
adoption in the early 1990s (www.w3.org). The Web, while mostly two dimensional (2D) in nature, has
the ability to present three dimensional (3D) and four dimensional (4D) data and images.
Examples of the fourth dimension include time-based media such as movies, animations
and interactive worlds.
Computer modeling and animation lets the user experience an un-built environment prior to
its actualization. One can see in 3D and 4D how a place will feel without having to
construct it. Examples of 3D use include Repton’s pioneering work utilizing perspectives
to represent landscapes to clients (Repton, 1803), and Photomontage, model making and
model scopes to represent and communicate a space. Using model scopes, users become
more interactive, moving through the model at human scale and exploring at will
(Appleyard, 1977).
What about works that have been built? How could one experience a “real” place without
being there? This paper will compare and contrast different methods of experiencing a
built place through a web-based environment.
The examples shown in the paper and the survey conducted involve the University of
Virginia Lawn, located in Charlottesville, Virginia, USA. This is a portion of the work that
is being conducted by the American Grand Tour Project (Fig. 1,
www.americangrandtour.org). The American Grand Tour is endeavouring to allow visitors
to virtually visit real built works of importance. This space is of importance for several
reasons. Firstly, it is one of the first examples of campus landscapes in the United States.
Secondly, it has a good relationship to its original environment. Thirdly, Thomas
Jefferson, third president of the United States, architect, and founder of the University of
Virginia, designed it. Lastly, it is a well-known landscape, at least peripherally, from
history classes.
Fig.1: www.americangrandtour.org
2 Presentation Formats & Methods
2.1 Presentation Formats Used
The presentation of the University of Virginia Lawn was completed with a series of
different media types. Media was selected that was easy to create or produce and also easy
to navigate and experience. It was a conscious decision to limit the examples shown to
items that could be produced with average hardware, software and greyware (brain power).
The experience of the visitor was also considered. The media choices were deliberately
limited rather than exhaustive (or exhausting), and each format was easy to navigate.
The presentation formats chosen were:
• A series of still photographs taken throughout the site
• An Apple QuickTime VR (QTVR) panorama
• QuickTime movies showing video clips from various portions of the Lawn
• Animations of the University of Virginia Lawn and related buildings, created/modified
using Form Z and greatbuildings.com models
• Virtual Reality Mark-up Language (VRML) files created directly from the Form Z model
2.2 Presentation Formats Not Used
It was determined during development (and after its creation) that the Shockwave file
provided no extra benefit over the VRML model and player. It was also decided to limit
animations to low resolution, and trees to “lollypops”, to minimize download times.
2.3 Rationale Behind Format Design
Universally acceptable and free products for “visiting” the site were utilized. Therefore
VRML and QuickTime were used because players and browser plug-ins are free, easy to
install and work on most operating systems. Since 1994, VRML has proved to be an open
standard that could be easily viewed and disseminated across the internet (Hartman &
Wernecke, 1996). The ability to navigate space at your own pace can accelerate cognitive
understanding as one does not have to interpret a static 2D image into 3D (Peri, 2000). The
limitations placed on the VRML models and animations were for file-size and download
considerations. Nothhelfer (2002) warned that synthetic, created landscapes
gravitate toward one of two areas: They are created for immersive environments such as
game engines and thus require low polygon counts (less detail) or are more realistic and
require special hardware and software such as flight simulators and CAVE environments.
VR environments that are not necessarily immersive have advantages in that they are easier
to use, do not need special equipment to run, and, by being “more common”, are easier to
develop (Dorta & Lalande, 1998).
2.4 Interface
For this experiment and the purpose of this paper, the survey was entirely viewed and
scored digitally on the web page (Fig 2). Questions pertaining to the method being
demonstrated appeared on the same page. The interface design was deliberately limited in order
presented from simplest and most static to more interactive.
Fig. 2: Sample survey page
2.5 Survey
The survey asked the following questions:
Still Photos (page 2)
Rate the ease of use of this method (1 = very difficult to 5 = very easy)
Did this method adequately describe this existing site? Yes / No
QTVR panorama (page 3) (Fig. 2)
Rate the ease of use of this method (1 = very difficult to 5 = very easy)
Did this method adequately describe this existing site? Yes / No
Of the methods demonstrated thus far, which do you prefer?
Videos (page 4)
Rate the ease of use of this method (1 = very difficult to 5 = very easy)
Did this method adequately describe this existing site? Yes / No
Of the methods demonstrated thus far, which do you prefer?
Animations (page 5)
Rate the ease of use of this method (1 = very difficult to 5 = very easy)
Did this method adequately describe this existing site? Yes / No
Of the methods demonstrated thus far, which do you prefer?
VRML (page 6)
Rate the ease of use of this method (1 = very difficult to 5 = very easy)
Did this method adequately describe this existing site? Yes / No
Of the methods demonstrated thus far, which do you prefer?
Overall (page 8)
What was your prior knowledge, if any, of the existing site portrayed in this survey? (1 =
none to 5 = very familiar)
Which method did you prefer to learn about an existing site? Why?
Which method, if any, did you have difficulty with? How?
Are you a student or professional in an environmental design field (Landscape Architect,
Architect, Planner, etc.)? Yes / No
Rate your ability with computers necessary to browse the Internet. (1= no computer
knowledge to 5 = very comfortable browsing the Internet)
What was your connection speed? Modem, Cable, T1
Male / Female
Age
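Responses of this kind could be summarized per method as a mean ease-of-use rating and the share of "yes" answers to the adequacy question. The sketch below is illustrative only and is not the survey's actual analysis code; the field names are assumptions:

```python
def summarize(responses):
    """Per-method summary of survey answers: (mean ease rating on the
    1-5 scale, fraction answering that the method adequately described
    the site)."""
    summary = {}
    for method, answers in responses.items():
        ratings = [a["ease"] for a in answers]
        yes = sum(1 for a in answers if a["adequate"])
        summary[method] = (sum(ratings) / len(ratings), yes / len(answers))
    return summary
```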
3 Preliminary Results
The survey work is ongoing. Results to date suggest that user preferences for learning
about “real” built places are split between QTVR and video clips; the most preferred
approach combines both. Some users saw an advantage in the VRML interface: it allowed
them to navigate at their own pace and explore. The main difficulties
with the VRML world were that it required an extra plug-in that some were not able to
install and it was viewed as cumbersome for those who were not familiar with it. While
research suggests that interactivity is recognized as preferred by users (Van Maren &
Verbree, 2000), users here suggested that the trade-off between interactivity and realism
was not advantageous.
4 Conclusions & Outlook
According to the sample to date, end users prefer photorealistic representations to
experience a built work. While 3d models, animations and VRML worlds are useful tools
for un-built works, photorealistic images provided a consistent and comfortable look and
feel. The models provided too little detail, and the simplification of the site (done mostly
for download times) was distracting.
Suggested future research areas could include the design of environments that do not
require plug-ins or that utilize universal plug-ins. Many users refused to download the
necessary players or had plug-in conflicts (QuickTime vs. Windows Media Player). Perhaps
Shockwave or Flash environments, while originally discounted for the amount of effort
needed to produce them, could become universal containers for information.
Another area is a comparison of different file sizes/polygon counts for animations and
VRML models against video and QTVR. What level of download discomfort will a user
endure to view a higher resolution model? Will it compete with a video clip or QuickTime
panorama? Ervin (2001) suggested that the omission of details makes for sterile landscapes. He
asked two appropriate questions: How much reality is needed and how much abstraction is
allowed?
Media- and data-rich sites vs. 3D environments: a data-rich site might have drawings,
photos, QTVRs, video clips, etc. embedded in a page or as links off the page. A data-
rich 3d environment may rely on a common interface such as QTVR or VRML and have
hotspots and links associated with the image. One example is the Clara browser by Spatial
Knowledge, which allows the user to bring 2D web pages into a 3D world using VRML
(Fig. 3); there one can see, explore, organize and open information spatially
(www.spatialknowledge.com). One could also navigate through a space using VRML or
QTVR and come upon a link that tells more about a piece of what is seen through text,
an enlarged detail view, a virtual ruler to measure details and objects in the VRML
environment, links to a manufacturer, a video clip, etc., or come across a virtual tour guide
or avatar that could access a database for answers to questions (Snyder & Paley, 2001).
Fig. 3: Clara browser by Spatial Knowledge
5 References
Appleyard, D. (1977): Understanding professional media: Issues, Theory and a Research
agenda. Institute of Urban and Regional Development, University of California,
Berkeley, Reprint No. 150 from Human Behaviour and Environment 2, 43-88.
Dorta, T. & Lalande, P. (1998): The Impact of Virtual Reality on the Design Process.
ACADIA 1998: Digital Design Studios: Do Computers Make a Difference?, 138-161.
Ervin, S. (2001): Digital Landscape and Modeling and Visualization: A Research Agenda
Special Issue, Our Visual Landscape. Landscape and Urban Planning, 54, 49-62.
Hartman, J., & Wernecke, J. (1996): The VRML 2.0 Handbook: Building Moving Worlds
on the Web. Addison-Wesley, New York.
Nothhelfer, U. (2002): Landscape Architecture in the Reality-Virtuality. Trends in GIS and
Virtualization in Environmental Planning and Design: Proceedings at Anhalt University
of Applied Sciences, 2002, 19-23.
Peri, C. (2000): Exercising Collaborative Design in a Virtual Environment. ACADIA
2000: Eternity, Infinity and Virtuality in Architecture, 63-71.
Repton, H. (1803): Observations on the Theory and Practice of Landscape Gardening:
Including Some Remarks on Grecian and Gothic Architecture. Taylor, London;
Phaidon, Oxford, 1980 (facs.).
Snyder, A. B. & Paley, S. M. (2001): Experiencing an Ancient Assyrian Palace: Methods
for a Reconstruction. Reinventing the Discourse: How Digital Tools Help Bridge and
Transform Research, Education and Practice in Architecture, 62-75.
Van Maren, G., & Verbree, E. (2000): Karma-vl: 4D GIS and virtual reality for urban
planning. GIM International, June 2000: 34-37.
Developing Techniques to Visualise Future Coastal
Landscapes
Simon JUDE, Andrew JONES, Ian BATEMAN and Julian ANDREWS
1 Introduction
With sea levels predicted to rise by up to 88 cm by the year 2100 (CHURCH et al., 2001),
coastal managers are beginning to consider the use of ‘soft’ approaches to defend the coast.
Unlike traditional ‘hard’ forms of defence, these use natural processes and landforms to
protect the coast. Unfortunately, such interventions have the potential not only to cause
conflict but also to have large impacts on coastal landscapes. Therefore, if these new
approaches are to be accepted by members of the public, coastal managers must
involve them in participatory decision-making processes. By doing so, the potential for
conflict and opposition to plans may be reduced. However, coastal managers are often
criticised for failing to involve the public in decision-making processes and for only
informing them of decisions once they have been made. In relation to this, the United
Kingdom Government has recognised that significant difficulties are associated with
facilitating public participation in management decisions and has called for the
development of innovative communication techniques to improve the situation (MAFF,
2000). Similar calls have been made by the European Union through its Demonstration
Programme on Integrated Coastal Zone Management (ICZM) (BELFIORE, 2000;
KING, 1999).
A number of possible solutions that may assist in widening public involvement in coastal
decision-making and in the dissemination of management information have been
suggested. These have included the use of traditional forms of media such as information
leaflets, maps and video, accompanied by public meetings, exhibitions and consultation
exercises to gain feedback regarding proposed coastal management schemes. However, of
most interest have been calls by those such as KING (1999) to develop new electronic
means of communication to assist in the deliberative process.
In terms of developing new communication methods for use by coastal managers, we
believe that visualisation techniques provide an opportunity to aid and develop public
involvement in coastal zone management. To illustrate this, the article presents some of
the findings from ongoing research showing that it is now possible to produce realistic
visualisations of different coastal management policies. Furthermore, some of the
difficulties encountered in visualising coastal environments will be described, together
with suggestions for future research to overcome them.
2 The North Norfolk Coast
To develop the visualisation methodology, two small project-level study sites located along
the north Norfolk coast at Brancaster and Holme-next-the-Sea (Holme) in the east of
England have been used. This is a low-lying barrier coastline of high scientific, economic
and recreational value that is subject to a wide range of conflicting interests between
those wishing to protect the coastline and those wanting economic development.
Furthermore, in recent years such conflicts have been exacerbated by the increasing
concerns about the potential impacts of future sea level rise on the coastline. This is
because it is highly vulnerable to North Sea storm surges (THUMERER et al., 2000) that
caused widespread flooding in 1953, 1973, and more recently in 1993 and 1996. As a
result of these concerns new defence methods such as managed realignment are being
discussed for protecting this section of coast including the reversion of internationally
important freshwater habitats and nature reserves to saltmarsh (CLAYTON, 1993;
ANDREWS et al., 2000). In terms of the study sites, at Brancaster a managed realignment scheme has
recently been completed, whilst at Holme a number of realignment options are under
consideration.
3 GIS Database Construction and the Assessment of Future
Coastal Change
The first stage of the research involved the development of an extensive GIS database.
This contained data from a range of organisations involved in managing the coast, and was
supplemented by commercial products from the UK national mapping agency, the
Ordnance Survey. These included the Ordnance Survey’s Land-Line.Plus 1:2,500 large-
scale vector data and Land-Form PROFILE 10 m resolution DEM products. Fortunately,
one advantage of studying such a scientifically important and vulnerable section
of coastline was that extensive data was available from a range of sources, thanks to the
comprehensive monitoring programmes managed by the Environment Agency and to
academic research projects.
Once the GIS database had been created, a methodology for assessing how the sites would
change in the future was developed. This accounted not only for management
interventions but also for future sea level rise and historical coastline change. It employed
historical coastline change data, including historical maps, aerial photography and coastal
monitoring data, from which past patterns of shoreline change could be identified. This was
complemented by future management intervention information provided by the
Environment Agency, whilst predicted sea level rise was calculated using the Model for the
Assessment of Greenhouse-gas Induced Climate Change (MAGICC), (HULME et al.,
1995) together with isostatic change values for land levels in Eastern England
(SHENNAN, 1989).
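The relative sea level rise driving such scenarios combines a global (eustatic) projection with local land movement. A minimal hedged sketch of this combination follows; the numbers are illustrative placeholders, not values taken from MAGICC or from SHENNAN (1989):

```python
# Illustrative only: combining a eustatic sea level rise projection with an
# isostatic land movement rate to estimate relative sea level rise at a site.
# Both input values below are hypothetical placeholders.

def relative_sea_level_rise_mm(eustatic_rise_mm: float,
                               land_movement_mm_per_yr: float,
                               years: float) -> float:
    """Relative rise (mm): eustatic rise minus land uplift over the period.

    A negative land movement rate (subsidence, as in Eastern England)
    increases the relative rise experienced at the coast.
    """
    return eustatic_rise_mm - land_movement_mm_per_yr * years

# e.g. 400 mm of eustatic rise over 100 years with 1 mm/yr of subsidence
print(relative_sea_level_rise_mm(400.0, -1.0, 100))  # 500.0
```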
The Ordnance Survey’s Land-Line.Plus large-scale digital line data was used for the
detailed visualisation work and required conversion to a polygon topology to allow
landcover attribute data to be incorporated. One of the difficulties encountered with this
data was that the dynamic nature of the coastline resulted in many of the shoreline features
at the study sites either being missing or in different locations in the Land-Line.Plus data
when compared to recent aerial photography for the sites. To overcome this, the
Land-Line.Plus data had to be updated from the aerial photography, an extremely
time-consuming process. To further complicate matters, whilst commercial aerial imagery
was available from which the data could have been updated, it had been collected at
unknown tidal states. For example, at Holme imagery from alternative flight runs at
different tidal states had been mosaiced. As a result, the imagery was unsuitable for
updating those geomorphological features in the intertidal zone that had changed since
the Ordnance Survey had last surveyed the site. To overcome this, specially commissioned
aerial photography collected by the Natural Environment Research Council’s Airborne
Remote Sensing Facility at low tide for the study sites was used to update the Land-
Line.Plus data.
Once the Land-Line.Plus data had been updated, landcover attribute data was manually
assigned to each of the polygons using a combination of three separate landcover data
sources: colour aerial photography, 5 metre resolution Compact Airborne
Spectrographic Imagery (CASI) classified for the intertidal zone, and the Centre for
Ecology and Hydrology (CEH) Landcover Map of Great Britain. This produced a
comprehensive landcover polygon dataset representing the sites.
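The manual attribution step can be thought of as a priority cascade across the sources, with finer-grained data consulted first and coarser data used as a fallback. A minimal sketch follows; the lookup tables and class names are invented for illustration and do not come from the project data:

```python
# Illustrative sketch (not the authors' code): assigning a landcover class to
# each polygon from several data sources in priority order, falling back to
# coarser data where finer sources give no answer.

def assign_landcover(polygon_id, sources):
    """Return the first non-None class offered by the ordered sources."""
    for source in sources:
        landcover = source.get(polygon_id)
        if landcover is not None:
            return landcover
    return "unclassified"

# Hypothetical lookups standing in for aerial photography, the CASI
# intertidal classification and the CEH Landcover Map respectively.
aerial = {1: "saltmarsh"}
casi = {2: "mudflat"}
ceh = {1: "grassland", 2: "water", 3: "arable"}

for pid in (1, 2, 3):
    print(pid, assign_landcover(pid, [aerial, casi, ceh]))
```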
The landcover coverages representing the sites following the management interventions
were produced using two alternative approaches. For Brancaster, plans were obtained from
the Environment Agency and digitised and appended to the Land-Line.Plus coverage
illustrating how the site may look once the scheme has matured (TYRRELL and DIXON,
2000). This information was combined with the results of sea level rise assessments and
information on historical coastline change to illustrate how the site would change because
of the scheme. For the Holme site, because detailed management plan data was unavailable,
details of possible management interventions were used as the basis of the visualisations.
This was combined with the sea level rise and historical coastline change information to
produce visualisations for a number of alternative management scenarios, including a
partial realignment of the site which is presented here. Changes in terrain at the sites
resulting from the proposed management interventions were represented by reprofiling the
DEMs where details of terrain changes were available from the Environment Agency.
4 Visualisation Production
Visualisations for the study sites were produced using two techniques. Firstly, interactive
visualisations were produced using ArcScene in ArcGIS, providing 'fly-through' Virtual
Reality Modelling Language (VRML) experiences. Secondly, static visualisations were
produced by exporting the GIS results into World Construction Set (WCS), a photorealistic
rendering package from 3D Nature. The two methodologies were chosen to allow an
assessment of their respective roles in widening public understanding of future coastal
management schemes.
The production of visualisations using ArcScene involved creating a Triangular Irregular
Network (TIN) DEM from the Land-Form PROFILE DEM, over which the landcover
coverage was draped to create a 3D scene. Sea defences and buildings were added as
separate coverages, allowing them to be extruded to produce 3D surface features. For
visualisations of the site following the management interventions, the reprofiled DEM was
converted to a TIN, the landcover coverage representing the future site state was used as the
drape coverage, and the buildings and defences were added. ArcScene allows the visualised
data to be queried in the same way 2D GIS data can, and provides facilities to enable the
user to navigate around the 3D scene in real-time. The 3D scenes were exported as static
images and as VRML files for viewing in any Web browser equipped with a suitable plug-
in.
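The TIN construction step can be illustrated with a toy sketch. A gridded DEM such as Land-Form PROFILE is commonly triangulated by splitting each grid cell into two triangles; this is a simplified stand-in for the idea, not the ArcGIS algorithm itself:

```python
# A minimal sketch of turning a gridded DEM into a triangulated surface by
# splitting each grid cell into two triangles, the simplest form of TIN.

def grid_to_triangles(nrows, ncols):
    """Return triangles as index triples into a row-major grid of points."""
    triangles = []
    for r in range(nrows - 1):
        for c in range(ncols - 1):
            # corner indices of this grid cell
            tl = r * ncols + c          # top-left
            tr = tl + 1                 # top-right
            bl = tl + ncols             # bottom-left
            br = bl + 1                 # bottom-right
            triangles.append((tl, tr, bl))   # upper triangle
            triangles.append((tr, br, bl))   # lower triangle
    return triangles

tris = grid_to_triangles(3, 3)
print(len(tris))  # 8 triangles for a 3x3 grid of elevation points
```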
In contrast to ArcScene, WCS allows photorealistic visualisations to be generated, and has
the advantage for many GIS users that ASCII DEMs and ArcView shapefile coverages may
be imported and used as the basis for these visualisations. The first stage in creating the
WCS visualisations involved importing the DEM data to provide the base terrain. One
advantage of WCS is that it permits the generation of detailed terrain features; this
allowed representations of sea defences to be generated from the defence heights and
cross-sections provided by the Environment Agency, and creeks and pools to be created. The
second stage involved importing individual ArcView shapefile coverages to which colours,
textures and vegetation models were applied. The third stage of the work was the addition
of 3D building objects. Due to data constraints generic models representing houses, barns,
sheds/outbuildings, glasshouses and churches were used.
Although WCS produces static images, a series of images along a camera path can be
rendered, allowing the creation of an animation, although the main drawback with this is
the long rendering process required to create the images. For example, over 40 hours of
rendering was required to create the images used to produce a 33 second animation for the
Holme site. However, once the images had been rendered, the creation of AVI files using
QuickTime was very simple; the main issue was the need to balance the level of
compression to create small AVI files whilst retaining the detail of the original images.
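Assuming a typical 25 frames per second (the frame rate is not stated in the text), the per-frame rendering cost implied by these figures can be checked with simple arithmetic:

```python
# Back-of-envelope check on the rendering cost quoted above, assuming a
# PAL-style frame rate of 25 fps (an assumption, not stated in the paper).

FRAME_RATE = 25            # frames per second (assumed)
ANIMATION_SECONDS = 33     # length of the Holme animation
RENDER_HOURS = 40          # total rendering time reported

frames = FRAME_RATE * ANIMATION_SECONDS
seconds_per_frame = RENDER_HOURS * 3600 / frames
print(frames, round(seconds_per_frame, 1))  # 825 frames, ~174.5 s each
```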
5 The Visualised Landscapes
The visualisations produced using ArcScene and WCS clearly illustrate how the managed
realignment schemes would affect the landscapes at the two sites. At Brancaster the
visualisations suggest that the realignment scheme will fit in well with the surrounding
landscape, although the construction of the set-back defence, which comprises an earth
flood embankment, has a large visual impact on the site because of its height (Fig. 1 and 2).
However, if the partial realignment were to go ahead at Holme, the visualisations show that it
would have a considerable impact on the landscape and recreational amenity of the site.
This would be caused by the breaching of the dunes that presently protect the site leading
to the creation of extensive areas of muds and pioneer saltmarsh between the dunes and the
new set-back defence (Fig. 3).
In terms of the visualisations produced, clear differences between the ArcScene and
WCS outputs are apparent, especially in the level of detail: the former (Fig. 1) is
more stylised than the latter (Fig. 2 and 3). Likewise, the limited functionality
in ArcScene resulted in crude representations of 3D features such as defences (Fig. 1),
produced by extruded shapefiles, whilst WCS rendered them in greater detail (Fig. 2).
However, there is a trade-off between time and detail: the low-detail ArcScene
visualisations could be produced quickly, whilst the WCS visualisations were much more
time-consuming to render.
As well as static images, a number of other types of visualisation were created,
including the VRML files produced by ArcScene, which may be viewed across the Web
using a browser equipped with a suitable plug-in such as CosmoPlayer. However, the
VRML output was found to suffer from two limitations. Firstly, the VRML files created
for large sites such as Holme were too large to be viewed
on a normal desktop PC. Secondly, as can be seen from the streaks
in the foreground on Fig. 4, the VRML code has difficulties representing certain terrain
changes, which has an adverse impact on the quality of the view in the browser.
Alternatively, the animations produced using World Construction Set can be viewed using
any package that plays AVI files. The AVI files were tested using QuickTime and
Windows Media Player, which allowed the animation to be played, paused or manually
controlled using simple controls (Fig. 5).
Fig. 1: Visualisations of the Brancaster site at present (left) and following the managed
realignment scheme (right) created using ArcScene.
Fig. 2: A WCS close-up view of the Brancaster West Marshes site before (left) and after
the managed realignment scheme (right).
Fig. 3: WCS visualisations of the Holme site at present (left) and in 2022 following a
partial managed realignment (right).
Fig. 4: An example of the VRML output
created using ArcScene
Fig. 5: Viewing the AVI animation
created for Holme using WCS
and QuickTime in Windows
Media Player.
6 Difficulties Associated with Visualising Coastal Environments
A number of difficulties specific to coastal environments were encountered when creating
the visualisations. Possibly the greatest problems are caused by the dynamism of the
coastal zone, because digital data such as Land-Line.Plus quickly becomes out of date due
to the rapidly evolving coastline. As a result, data editing and updating becomes
necessary, which is time-consuming and therefore costly for potential users.
Whilst this situation may improve with the introduction of ‘live’ digital
databases such as the Ordnance Survey’s MasterMap product, which is continually
updated, this may not represent a total solution to the problem because many coastal areas
such as the north Norfolk coast are rural in nature and may not be surveyed regularly.
The age limitations identified with the Land-Line.Plus data were also evident with the
Land-Form PROFILE DEM provided by the Ordnance Survey. At Holme in particular, the rapid
erosion that occurred during the mid 1990s along the site’s foreshore and dunes was not
reflected in the terrain data. Unfortunately this is a more challenging problem to rectify
without conducting extensive field surveys to create a new DEM, and it meant that the terrain
limitations in the visualisations simply had to be acknowledged. A further drawback
encountered with the use of the 10m resolution Land-Form PROFILE DEM was that whilst
it provided an excellent terrain surface onto which features can be added, its low vertical
and horizontal resolution meant that some coastal features such as small saltmarsh creeks
and dunes were not evident in the DEM. One means of overcoming this in the future may
be to use high resolution Light Detection and Ranging (LIDAR) DEMs. Unfortunately at
present LIDAR DEMs are expensive, not widely available, and their use in creating
visualisations is constrained by computer processing limitations.
One of the greatest challenges encountered during the production of the visualisations
involved the representation of terrain changes at the sites. Even where detailed information
regarding how surface elevations would change was available, the reprofiling of the terrain
was found to be problematic. For example, whilst terrain changes are straightforward where
only simple reprofiling of the DEM is necessary to create an area with a new elevation and
a flat profile, cross-section profiles such as a dune system are more difficult to
create. In situations where complicated coastal features such as spits like Blakeney Point
on the north Norfolk coast (Fig. 6) need to be visualised by coastal managers this could
represent a significant challenge. Here not only would the changes in the spit morphology
have to be modelled, but techniques allowing the modification of the DEM underlying the
visualisations would need to be developed. Unfortunately whilst this could be addressed
by linking the visualisation software to coastal evolution models, such models are often
unavailable.
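The simple reprofiling case described above amounts to overwriting a window of the DEM grid with a new elevation. A toy sketch with NumPy follows; the array sizes and elevation values are illustrative only:

```python
# Hedged sketch of the simple DEM reprofiling case: lowering a rectangular
# area of a toy elevation grid to a single new elevation, as when a flat
# area with a new elevation is created. Values are illustrative.
import numpy as np

dem = np.full((5, 5), 4.0)       # toy 5x5 DEM, uniform 4 m surface
dem[1:4, 1:4] = 6.0              # a higher central block

def reprofile(dem, row_slice, col_slice, new_elevation):
    """Flatten the selected window to a single elevation (metres)."""
    out = dem.copy()
    out[row_slice, col_slice] = new_elevation
    return out

lowered = reprofile(dem, slice(1, 4), slice(1, 4), 1.5)
print(lowered.max(), lowered[2, 2])  # 4.0 1.5
```

Creating a shaped cross-section profile, such as a dune ridge, would instead require writing a different elevation into each row of the window, which is why it is the harder case.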
Fig. 6: An example of the complex terrain and landcover found in the coastal zone.
Fig. 7: An example of the spurious features found when representing tide heights in
ArcScene.
The representation of tidal states is particularly important when creating visualisations for
low-lying sections of coastline such as the north Norfolk coast that are vulnerable to
flooding. However, representing tidal states in the visualisations was found to be
problematic because the poor vertical and horizontal resolution of the DEMs produced
spurious features when tide heights were represented. These spurious features were
particularly evident when attempting to represent tides in the ArcScene visualisations (Fig.
7). Similarly, tide heights and waves were difficult to represent in the WCS visualisations
because models that were suitable for representing offshore waves created very high waves
in tidal inlet areas. In these situations separate sea and tidal inlet polygons had to be used
to which different wave models were assigned.
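The basic operation of representing a tidal state on a DEM is a 'bathtub' threshold: cells at or below the tide level are treated as inundated. On a coarse DEM this is exactly where spurious features arise, because small elevation errors flip cells across the threshold. A minimal sketch with invented elevations:

```python
# A simple 'bathtub' flooding sketch: cells at or below a given tide level
# are flagged as inundated. The elevations below are toy values.
import numpy as np

dem = np.array([[0.2, 0.8, 1.5],
                [0.4, 1.1, 2.0],
                [0.9, 1.6, 2.4]])   # elevations in metres (illustrative)

def inundated(dem, tide_level):
    """Boolean mask of cells at or below the tide level."""
    return dem <= tide_level

mask = inundated(dem, 1.0)
print(int(mask.sum()))  # 4 cells flooded at a 1.0 m tide
```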
The coastal zone comprises complicated vegetation and sedimentary
surfaces (Fig. 6) that were difficult to represent in the visualisations: whilst coastal
features have poorly defined boundaries, GIS data is defined using distinct areas.
As a result, the distinct feature boundaries associated with GIS data caused problems when
trying to represent transition zones between features, such as those associated with different
types of saltmarsh vegetation. Furthermore, this problem was compounded by WCS failing
to provide adequate facilities for blending the edges of polygons to represent changes in
surface coverage. Whilst this was partly a drawback of the software used,
it was primarily a limitation of GIS data structures that becomes more significant
when working in the coastal zone.
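The edge blending that the software lacked can be sketched as mixing two surface classes across a transition zone, with the mix weight a function of distance from the shared polygon boundary. This is an illustrative stand-in for fuzzy landcover boundaries, not a feature of WCS:

```python
# Illustrative sketch of polygon-edge blending: the weight of the second
# surface class ramps linearly from 0 to 1 across a transition zone.
# The transition width and class names are assumptions.

def blend_weight(distance_m, transition_width_m):
    """Weight of class B, ramping 0 -> 1 across the transition zone."""
    t = distance_m / transition_width_m
    return min(max(t, 0.0), 1.0)

# e.g. a hypothetical 10 m transition between pioneer and mature saltmarsh
for d in (0.0, 5.0, 10.0, 15.0):
    print(d, blend_weight(d, 10.0))
```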
7 Potential Coastal Management Applications and Future
Research Needs
Whilst visualisation techniques have been investigated for the representation and
understanding of coastal processes and geomorphology (e.g. RAPER, 2000), there has been
little research investigating their use in participatory coastal zone management. To
investigate the potential management applications for the visualisations, interviews were
conducted with representatives from coastal management organisations. These suggest that
the visualisations have a range of potential roles, from use in management meetings
to develop policies for sites, to publication in management documents, presentation at
public meetings and exhibitions, and dissemination on the Web. A number of more advanced
applications were also proposed including the visualisation of whole sections of coastline,
the representation of long-term temporal changes, and the visualisation of historical
coastlines to aid in the presentation of why particular management interventions are
necessary.
Amongst the functionality that the coastal managers are requesting is the ability to produce
real-time updateable and interactive visualisations for use in meetings to develop
management options for sites. These would include links to scientific models and the
ability for drag-and-drop editing of the visualisations. However, the ability to achieve this
is presently limited by computer processing constraints, which lead to long rendering times
and prevent the creation of visualisations that update in real time. The interviews also
highlighted the potential importance of interactive forms of visualisations that enable users
to explore proposed management interventions for themselves. These would allow
members of the public to answer any questions that they have about a proposed scheme and
to form their own opinions on the possible merits of it. To address this, recent research has
begun to investigate the use of real-time software packages such as TerraVista from Terrex
to produce more interactive forms of visualisation (Fig.8).
Fig. 8: An example of the interactive visualisations for the Brancaster site produced
using TerraVista.
Some of the coastal managers interviewed expressed an interest in the possibility of
visualising temporal changes in the coastline using animation to aid in the understanding of
geomorphological processes and patterns of coastline evolution. Whilst it is possible to use
WCS to create a series of frames that can be animated, it would be an extremely
time-consuming process because separate landcover and terrain coverages
would have to be created for each frame. However, the technique could be
used to represent coastal change over long periods, where each frame in the animation
illustrates a 10 year change at the site, as this would require far fewer landcover and
terrain coverages.
Many of the coastal managers who were interviewed are seeking the ability to produce
visualisations both of historical landscapes and of landscapes up to 100 years in the future,
for use in evaluating the long-term consequences of proposed management options. With the
inherent uncertainties associated with predicting even short-term coastal changes, this desire
to create visualisations over longer timescales highlights the need to develop techniques to
represent uncertainty. There are a number of possible ways in which this could be
achieved, including the creation of Monte Carlo simulation-type animations that present the
range of uncertainty. Alternatively, visualisations representing the best- and worst-case
coastal change predictions, or map overlays highlighting zones of uncertainty, could be
used. This is an important area of future research: if members of the public are to be
involved in participatory decision-making processes, the uncertainties associated with
proposed management options need to be presented to them so that they can make
informed decisions.
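The Monte Carlo idea mentioned above can be sketched as sampling a plausible range of sea level rise scenarios and summarising the spread, which could then drive a set of visualisation frames. The uniform distribution and the 9-88 cm bounds are assumptions for illustration (the upper bound echoes the projection quoted in the introduction):

```python
# A hedged sketch of Monte Carlo sampling of sea level rise scenarios.
# The distribution (uniform) and its bounds are assumptions, not the
# project's method; a real study would sample from model output.
import random

random.seed(42)  # fixed seed so the run is reproducible

def sample_rise_cm(low=9.0, high=88.0):
    """Draw one sea level rise scenario (cm by 2100)."""
    return random.uniform(low, high)

samples = [sample_rise_cm() for _ in range(1000)]
lo, hi = min(samples), max(samples)
mean = sum(samples) / len(samples)
print(round(lo, 1), round(mean, 1), round(hi, 1))
```

Each sampled scenario would correspond to one rendered frame or overlay, so the animation as a whole conveys the spread rather than a single prediction.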
8 Conclusions
This research highlights the potential role of GIS and visualisation techniques as a tool to
assess and visualise future coastal landscapes. It also illustrates how these technologies
may be used in the future by coastal managers to present management information to
members of the general public. However, a number of challenges face the development of
virtual coastal environments if they are to meet the needs of coastal managers.
9 Acknowledgements
The authors would like to thank all of the organisations who provided input into the
research including the Environment Agency, English Nature, the Norfolk Coast Area of
Outstanding Natural Beauty Partnership and the Royal Society for the Protection of Birds.
Thanks are also due to the British Geological Survey, Centre for Ecology and Hydrology,
Environment Agency, Natural Environment Research Council Airborne Remote Sensing
Facility, the Ordnance Survey and Suffolk County Council for providing GIS data. The
research was initially funded by an ESRC/NERC interdisciplinary studentship awarded to
the lead author whilst ongoing research is funded by the Tyndall Centre for Climate
Change Research. All Ordnance Survey data is © Crown Copyright Ordnance Survey. An
EDINA/JISC supplied service.
10 References
Andrews, J.E., Funnell, B.M., Bailiff, I., Boomer, I., Bristow, C. and Chroston, N.P. (2000)
The last 10,000 years on the north Norfolk coast - a message for the future? p.76-85
in Dixon, R. (ed.) Geological Society of Norfolk 50th Anniversary Jubilee Volume.
Belfiore, S. (2000) Recent developments in coastal management in the European Union.
Ocean and Coastal Management. 43. 123-135.
Clayton, K.M. (1993) Adjustment to greenhouse gas induced sea level rise on the Norfolk
Coast – a case study. p.310-321. In Warrick, R.A., Barrow, E.M. and Wigley,
T.M.L. (eds.) Climate and Sea Level Change: Observations, Projections and
Implications. Cambridge University Press. 424pp.
Hulme, M., Raper, S.C.B. and Wigley, T.M.L. (1995) An integrated framework to address
climate change (ESCAPE) and further developments of the global and regional
climate modules (MAGICC). Energy Policy. 23. 347-355.
King, G. (1999) EC Demonstration Programme on ICZM. Participation in the ICZM
processes: mechanisms and procedures needed. Hyder Consulting. March 1999.
114pp.
Ministry of Agriculture, Fisheries and Food (2000) A review of Shoreline Management
Plans 1996-1999. Final Report March 2000. A report produced for the Ministry by a
consortium led by the Universities of Newcastle and Portsmouth. Flood and
Coastal Defence with Emergencies Division Ministry of Agriculture, Fisheries and
Food. 59pp.
Raper, J. (2000) Multidimensional Geographic Information Science. Taylor and Francis.
London. 300pp.
Shennan, I. (1989) Holocene crustal movements and sea level changes in Great Britain.
Journal of Quaternary Science. 4. 77-89.
Thumerer, T., Jones, A.P. and Brown, D. (2000) A GIS based coastal management system
for climate change associated flood risk assessment on the east coast of England.
International Journal of Geographical Information Science. 14. (3). 265-281.
Tyrrell, K. and Dixon, M. (2000) Brancaster West Marsh Engineers Report. Final Draft –
May 2000. Environment Agency. Ipswich.