D2.1.7
Final EXPERIMEDIA Methodology
2014-03-17
Magnus Eriksson (Interactive)
Peter Ljungstrand (Interactive)
Aleksandra Kuczerawy (Leuven)
www.experimedia.eu
The Final EXPERIMEDIA Methodology aims to summarize the EXPERIMEDIA
methodology as it has been used and evolved during the time of the experiments. The
deliverable outlines the lessons learned from the first open call and the driving experiments
and then builds on these in presenting the Final Methodology Structure, which
explains the current and final approach to running experiments within EXPERIMEDIA. It
also includes the updated Privacy Impact Assessment (PIA) and Value Impact Assessment (VIA).
EXPERIMEDIA Dissemination level: PU
© Copyright The Interactive Institute and other members of the EXPERIMEDIA consortium 2014
Project acronym EXPERIMEDIA
Full title Experiments in live social and networked media experiences
Grant agreement number 287966
Funding scheme Large-scale Integrating Project (IP)
Work programme topic Objective ICT-2011.1.6 Future Internet Research and Experimentation (FIRE)
Project start date 2011-10-01
Project duration 36 months
Activity 2 Construction
Workpackage 2.1 Architecture Blueprint
Deliverable lead organisation Interactive
Authors Magnus Eriksson (Interactive)
Peter Ljungstrand (Interactive)
Aleksandra Kuczerawy (Leuven)
Reviewers Stephen C. Phillips (ITInnov)
Athanasios Voulodimos (ICCS/NTUA)
Version 1.0
Status Final
Dissemination level PU: Public
Due date PM28 (2014-01-31)
Delivery date 2014-03-17
Table of Contents
1. Executive Summary ............................................................................................................................ 4
2. Introduction ......................................................................................................................................... 5
3. Lessons learned from 1st Open Call ................................................................................................ 6
3.1. Methodology use .......................................................................................................................... 6
3.2. User Recruitment ......................................................................................................................... 6
3.3. PIA lessons learned (Leuven)..................................................................................................... 6
4. Final Methodology Structure ............................................................................................................. 8
4.1. Defining Learning Objectives .................................................................................................... 8
4.2. Defining Structural System Model ............................................................................................ 8
4.3. Defining Human Interaction Model ......................................................................................... 9
4.4. Define QoS Model....................................................................................................................... 9
4.5. Define QoE Model ...................................................................................................................... 9
4.6. Define QoC Model (optional) ................................................................................................. 10
4.7. Define User Activation Strategy .............................................................................................. 10
4.8. Conduct Privacy Assessment ................................................................................................... 11
5. VIA 2.0 ............................................................................................................................................... 12
6. PIA 2.0 ................................................................................................................................................ 14
7. Methodology in 2nd open call ......................................................................................................... 15
7.1. Methodology in Planning .......................................................................................................... 15
7.1.1. Defining metrics .................................................................................................................. 15
7.1.2. Relating metrics ................................................................................................................... 15
7.1.3. Gathering QoE data ........................................................................................................... 16
7.1.4. Conducting a VIA ............................................................................................................... 16
7.1.5. Game Design and QoE evaluation .................................................................................. 17
7.1.6. User Activation in Schladming Experiments .................................................................. 18
8. Conclusion.......................................................................................................................................... 19
9. Appendix A: Gameplay Design Patterns for Public Games (External Document) ................ 20
10. Appendix B: Using Mobile Computing to Enhance Skiing (External Document) ................ 21
11. Appendix C: Experimenters Guide to Schladming (External Document) .............................. 22
1. Executive Summary
The Final EXPERIMEDIA Methodology aims to summarize the EXPERIMEDIA
methodology as it has been used and evolved during the time of the experiments. Because
experiments from the 2nd open call are still running at the time of writing, this deliverable
will be updated at a later stage to incorporate the experiences of running the second open call
experiments.
This deliverable begins with a section outlining the lessons learned from the first open call and
the driving experiments, and then builds on these in presenting the Final
Methodology Structure, which explains the current and final approach to running experiments
within EXPERIMEDIA. It then outlines the Value Impact Assessment (VIA) and Privacy
Impact Assessment (PIA) and how they are structured within the second open call. The section
following this describes how the methodology has been used in planning for the 2nd open call.
Three reports produced within the methodology package are included as appendices. These
are the game design patterns to be used by experiments that involve gaming, the report on using
mobile computing to enhance skiing, and the field study conducted to aid experiments operating
at the Schladming venue.
2. Introduction
This final methodology deliverable will come in two versions: one delivered on the due date
specified in the DOW, covering the work done and lessons learned from the first round of open
call experiments and the methodological structure used with the 2nd call experiments, and one
updated at a later date that includes a full evaluation of the methodological considerations of the
EXPERIMEDIA project.
The main section of this first version outlines the final methodological structure that is
used in the planning and implementation of the 2nd open call experiments. This is a revised
structure based on the lessons learned from the first round of open call experiments and the
driving experiments conducted by the core partners of EXPERIMEDIA. It also includes the
updated VIA 2.0. A subsequent section describes how the new methodology has been used in the
planning of the 2nd open call experiments. Finally, the deliverable includes appendices of new
methodology work: "The Game Design Patterns", the report on "Using Mobile Computing to
Enhance Skiing", and the field study "Experimenters Guide to Schladming".
3. Lessons learned from 1st Open Call
3.1. Methodology use
The updated 2nd methodology was used by the 1st open call partners and, to a lesser extent,
by the driving experiments. The use of the methodology differed from experiment to experiment
depending on the different prerequisites of the experiment setups and venues. The Value Impact
Assessment (VIA) turned out to be complicated for experimenters to use and was mostly applied
when methodology researchers could personally guide the experimenters. While the VIA was
considered to provide good insights, it was also considered to stray too far from the ECC-focused
experiment process. There was a missing translation between the higher-level methodological
issues of the VIA and the day-to-day experiment process.
3.2. User Recruitment
Recruiting participants for experiments turned out to be more of a concern than
expected for the Schladming experiments. Efforts to recruit more participants on the streets of
Schladming, in the later stages of the 1st open call experiments and for the Schladming driving
experiments during the MID Europe music festival in July 2013, proved difficult. The
experiments had trouble locating the right user profiles (people with mobile phones who were
not from the Schladming region and therefore in need of navigational assistance to find points
of interest), and most people approached declined to take part on the grounds of not properly
understanding what was expected of them. The fact that many people are used to salespeople
and commercial promoters handing out flyers at festivals is also assumed to have contributed to
rejections. Some of the experiments also commented that they had difficulties getting an
overview of the stakeholders present in Schladming who could have helped them enlist
participants. This insight led to the Schladming field study attached as an appendix to this
deliverable.
3.3. PIA lessons learned (Leuven)
The PIA method in the EXPERIMEDIA 1st Open Call was generally considered to be adequate.
Cooperation between the technical core partners, the 1st Open Call experimenters and the legal
partner was considered highly efficient. Moreover, the assistance of the Ethical Advisory Board
as well as the Data Protection Board in the review process proved to be very helpful.
Nevertheless, some improvements were recommended to further facilitate the review process
and ensure that it can anticipate any legal and ethical issues within each experiment.
Specifically, the recommended changes focused mainly on improving the descriptions
of the experiments to focus less on the technical specifications and more on the legal and ethical
aspects. As a result, for the second Open Call the Consortium planned two iterations of the ethical
checklist. First, the participants of the open call would fill in the checklist together with their
application. The information they provided, however, would not be taken into account in the
selection of the experiments. The reason for this step was to make the experimenters aware, at
a very early stage, of the aspects that could be legally or ethically problematic. The
second iteration of the checklist would be delivered to the legal partner at a very mature stage
of the development of the experiments, to make sure that the reviewed version of the
experiment is the final one. With this change the review process of the Second Open Call
Experiments would be faster and more efficient. It would also allow all the involved parties to
see the changes and progress in the development of the experiments with regard to their legal
and ethical aspects.
Another lesson learnt from the 1st Open Call is that more attention should be paid to raising
the experimenters' awareness of legal and ethical issues in the experiments. In
EXPERIMEDIA, assistance is offered to the partners by the legal partner (ICRI - KU
Leuven), the EAB and the DPB at every stage of the project. However, we discovered that an
understanding of the legal and ethical aspects of an experiment by those who actually design and
implement it is crucial for its compliance with the applicable law and ethical principles.
These findings were taken into account in the 2nd Open Call and allowed for a more efficient
approach to the legal and ethical review process.
4. Final Methodology Structure
The methodology used in the final year for the 2nd open call experiments is more directly
experiment-oriented than the 2nd methodology. The final methodology is more
integrated into the experimenters' work with the ECC model and is a multi-stakeholder effort
guiding the experimenters through the process of defining and executing their experiment. The
methodology is structured, yet flexible enough to be adapted to the unique
requirements of each experiment, which in EXPERIMEDIA differ widely in terms of
objectives, scope and scale. In each section of the methodology structure there are various
methodological tools available to the experimenters, should they choose to use them. The
following steps are not necessarily performed in sequence and can form an iterative process that
is cycled through several times, although each step often depends on the ones that come
before.
4.1. Defining Learning Objectives
The experimenter first needs to define what they are trying to learn and the expected
relationships between system properties. This is based on the local conditions of the venue,
whose unique conditions and needs determine the direction of the experiment. The experimenter
defines what they want to learn from the experiment and how this relates to system properties.
This is thus a process that goes from the abstract -- dealing with research questions and the
needs of the venue operation -- to the concrete practice of defining the system components that
need to be involved in the experiment, how they are expected to behave and what they will
measure. This is then related to the QoS and QoE models that the experimenters will also define.
Experimenters come into the project with various levels of defined learning objectives. Some
experimenters, such as Evolaris, have opted for a co-creation approach where the learning
objectives are revised together with stakeholders and users.
There are several tools from the methodology that an experimenter can choose to use at this
stage. These include: design workshops, VIA assessment, field studies, co-creation methods, and
venue partner knowledge.
4.2. Defining Structural System Model
Once the experimenter knows their learning objectives, they can define a structural system
model that describes the entities in the system under study and their relationships.
Here the experimenters need to define which system components are going to be used and how
they will interact with each other. Contrary to the learning objectives, this is a detailed technical
description that is meant to guide the technical partners involved in implementing
the experiment. A key part of this stage is also to describe the users and their possible
interactions with the system; this relates to defining the Human Interaction Model below.
The system entities described in this section also form the basis of what is monitored in the
ECC.
4.3. Defining Human Interaction Model
This is a user-centered part of the methodology structure where the experimenter has to define
how users are expected to interact with the system entities and with each other, online and
offline. This includes both structured behavior, where the users interact with the system entities,
and the more unstructured behavior of users interacting with each other offline or by online
means that are not included in the EXPERIMEDIA system. This stage serves several functions.
First, it defines the interaction with the system so that the proper
interfaces can be constructed and the connectivity that the user needs for a certain task
can be provided. This includes the constraints put on the system by scaling up the number of users
involved in the experiment. Second, this stage serves to devise an interaction model that is
meaningful for the participating user, so that the experience of
taking part in the experiment is enjoyable, seamless and smooth. Third, it defines the online and
offline interactions that the user can perform that cannot be captured by the EXPERIMEDIA
system but could still be relevant for the experiment evaluation. These can then be captured by
other means, such as qualitative methods.
Defining the Human Interaction Model requires knowledge of the local context as well as of
target group needs and behaviors. It also requires the use of sound interaction design principles
in order to make the interaction with the system meaningful to the user.
Tools from methodology: Game design patterns, Field studies, and Interaction design expertise.
4.4. Define QoS Model
Here the experimenter defines a Quality of Service model covering all entities that will be used
in the experiment. This is especially important for any additional FMI technologies beyond the
baseline components. The model should be defined in terms of quantitative metrics measured
by system entities and reported to the ECC. Which metrics should be used depends on the defined
learning objectives and the entities defined in the structural system model. It is also an advantage
if the experimenter synchronizes the QoS measurements with the QoE measurements, so that
changes in the user's experienced quality can be related to changes in the system properties and
vice versa.
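As an illustration only (this is not part of the EXPERIMEDIA or ECC tooling; the entity name, metric name and function below are hypothetical), a timestamped QoS metric of the kind described above could be sketched as:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class QoSMetric:
    """One quantitative QoS sample, of the kind a system entity would report."""
    entity: str          # system entity from the structural system model
    name: str            # metric name, e.g. "video_startup_delay"
    unit: str            # unit of measurement, e.g. "ms"
    value: float
    timestamp: datetime  # timestamping enables later QoS/QoE correlation

def sample_startup_delay(entity: str, delay_ms: float) -> QoSMetric:
    # Record the sample with a UTC timestamp so it can be lined up
    # against QoE measurements taken during the same session.
    return QoSMetric(entity, "video_startup_delay", "ms", delay_ms,
                     datetime.now(timezone.utc))
```

The point of the sketch is the timestamp field: carrying a common clock on every sample is what makes the synchronization of QoS and QoE measurements possible.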
4.5. Define QoE Model
Related to the definition of a QoS model is defining the QoE model for the user experience.
This can consist of either qualitative or quantitative metrics reported to the ECC. EXPERIMEDIA
provides active tools for evaluating user experience, such as the Babylon tools for integrated
measurements and questionnaires. The experimenter can also choose to use a passive system
property as a measure of QoE, such as how many times a user uses a given function or
part of the system, or how long they view content. The experimenter can also use quantitative
and qualitative methods of evaluation that are not part of the ECC. An example of a quantitative
evaluation of that sort would be a multiple-choice questionnaire handed out before and/or after
an experiment session. Examples of qualitative evaluations are questionnaires with free-text
fields, interviews, participant observation, focus groups or ethnographic studies. In these
evaluations the experimenter could benefit from having access to the QoS measurements in
order to question the user about what happened, or what they experienced, at the times when the
performance of the system changed (whether through their actions or not).
Defining the QoE model also includes choosing how, when and where to gather
metrics so as not to disrupt the human interaction model. QoE measurements can be disrupted
either if they come at an inconvenient time or if they come at a time when the user is unable to
recall their experience properly. The form of the QoE measurements and the content of their
questions must also be designed in such a way that they support the learning objectives already
defined.
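One way to realize the synchronization of QoS and QoE mentioned above is to select, for each QoE measurement, the QoS samples recorded around the same moment. The sketch below assumes both kinds of samples carry timestamps; the function name and window size are hypothetical:

```python
from datetime import datetime, timedelta

def qos_near(qoe_time: datetime, qos_samples, window_s: int = 30):
    """Return the (timestamp, value) QoS samples recorded within
    +/- window_s seconds of a QoE measurement, so that a drop in
    reported experience can be checked against system behaviour
    at that moment."""
    window = timedelta(seconds=window_s)
    return [(t, v) for (t, v) in qos_samples if abs(t - qoe_time) <= window]
```

In an interview setting, the same lookup lets the experimenter ask the user about the specific moments when system performance changed.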
4.6. Define QoC Model (optional)
The experimenter can also include a Quality of Community model if they are interested in
measuring the interactions of users in various social networks. This needs to be properly related
to the Structural System Model and the QoS measurements, so that the experimenter will be able
to capture the interactions determined to indicate Quality of Community. This could also include
an evaluation of the QoC outside of the EXPERIMEDIA system, either online or offline, in
which case the experimenter would have to probe for this in questionnaires and/or
interviews with the users.
4.7. Define User Activation Strategy
An important but easily overlooked part of the experiment is to define the User Activation
Strategy. The object of this stage is for the experimenter to first define the target user profile and
then the user activation strategy.
Defining the target user profile includes defining what kind of users the experimenter needs for
the experiment and what effort is expected from them before, during and after the experiment
in order for the learning objectives to be met and the necessary QoS and QoE measurements to
be captured. The experimenter also needs to define how many users are needed for the
evaluation to be satisfactory. Defining the users' effort also means being clear about which
venue actors need to be involved and what their expected effort is. Together with defining the
target profiles, the experimenter can also define the incentives available to users, both intrinsic
and extrinsic.
When the target user profile is defined, the experimenter must devise a user activation strategy to
make sure the right kind and the right number of users can be involved in the experiment. How
this process takes place depends largely on the characteristics of the experiment and the venue
where it is performed. At CAR this often involves finding the right coach and athletes who are
available and motivated to take part, and who compete in a sport that can make use of the
experimenter's technology. Also important at CAR is finding the right technical staff to support
the experiment and the right part of the venue to conduct it. At FHW this process includes
finding the right visitor groups, venue section and staff to support the experiment. It can also
include involving external experts in archeology, history or education to assist the experiment.
Schladming is somewhat different from the other two venues, since the visitors to Schladming
who in most cases make up the users are much less connected to the daily operations of the
EXPERIMEDIA venue partner than at CAR and FHW. The potential participants in
experiments at Schladming have often come to Schladming to perform activities that are
not related to the experiment, and must therefore be convinced to take part either on the
spot or through contact with some organization. The first option can be difficult if the
requested effort is unclear or too high, or if the immediate incentives are unclear or
not beneficial enough. The modern citizen constantly receives offers and requests for
their time from commercial actors, advertisers or people conducting surveys, and can
therefore be reluctant to participate, or may even reject the offer before it has been made. If such
"cold calls" are to be used, it is recommended that the experiment is integrated as much as
possible into the activity the participants are already performing or planning to perform, that the
incentives are very clear and direct, and that the extra effort or disturbance of their original
activity is kept to a minimum. For longer involvements and extra efforts, it is recommended that
the second option is used and that the users are approached through trusted actors already
involved in the Schladming ecosystem, such as associations, hotel chains or the tourist agency.
The good news is that there are many such actors in Schladming and that the connections
between people and organizations there are very good. Another option is to use locals for
experiments that require more effort, since their time in Schladming is not as scarce as that of
someone who is only in town for a few days. The experimenter could also choose to involve
users with minimal effort in a first run and then recruit participants from that group for a second
experiment run that requires extra effort.
Tools from methodology: Venue operator’s local knowledge. User engagement experience.
4.8. Conduct Privacy Assessment
The experimenter also needs to conduct a privacy assessment related to all of the steps
above that deal with personal data or privacy issues. This assessment has consequences for how
the QoS and QoE measurements are made, as well as for the Human Interaction Model and the
User Activation Strategy. It is also important that, as part of the User Activation Strategy and the
Human Interaction Model, the privacy issues are made clear to the user so as not to discourage
them from participating.
5. VIA 2.0
The VIA formed the centerpiece of the previous methodologies. It has now been replaced by a
methodology structure that is focused on the experiment process and the integration of the
experiment into the ECC. However, it remains an important tool for the experiments to
understand how best to frame and implement their experiment, how it fits within
the context of the venue (and beyond), and what value it could bring to the venue ecosystem
actors. The VIA 2.0 presented here in the final methodology is a trimmed-down version of the
VIA with a clearer experiment focus that integrates better into the process
the experimenters go through. It can be used by the experimenters both as a tool to
initially frame their experiment, for themselves and for the venue stakeholders, and as a tool
to understand the future value that their innovation could have in the venue
ecosystem, and therefore how it could be further developed and exploited beyond
EXPERIMEDIA.
The first aspect of the VIA is to gather with the venue stakeholder and explore what value the
experimenter's technology can have within the venue ecosystem and what the scope of the
experiment should be. From this, the necessary stakeholders, within the venue ecosystem and
externally, who need to be involved in the experiment or can see a value in its outcomes are
identified. The experimenter must then determine what value the experiment and the experiment
technology can have for each of these stakeholders, both during and after the experiment. This
includes defining the value that the experiment technology can have for the end user at the
venue, which can be many different kinds of values for the different functions of the technology.
The experimenter must understand how the practices of the stakeholders change
between the original state and the changed state brought about by the introduction of the
technology. A key aspect is also to be aware of indirect changes to the stakeholders' everyday
practices, such as how the technology changes their relations to other stakeholders or how they
allocate their time throughout the day.
The experimenter can go through this process by sitting down with the main venue stakeholder,
by going around to all relevant stakeholders, or -- if needed -- by structuring the process as a
design workshop where they elaborate on different aspects of the stakeholders and the scenarios
that could occur with the technology, and then brainstorm to find new aspects of
how the experimental technology can influence stakeholders.
Understanding what value the experiment technology could potentially have for the end user
allows the experimenter to better structure the evaluation of the user experience of the
experiment and to better understand what impact to aim for.
A VIA is normally done in the planning phase leading up to the first experiment run, but a
second VIA can also be done after the first experiment run has gathered some experiences and
metrics, if the experimenter wishes to make a deeper analysis before the second run and to
structure that second run differently in order to reach another kind of evaluation. This is a
recommended approach, since the first experiment run will be very much of
a learning process of using a technology in a new environment and the second run leaves more
space to explore the value impact more profoundly.
6. PIA 2.0
In the 2nd Open Call, the recommendations made after the 1st Open Call were taken into account.
The improved steps were, however, further adjusted to fit the circumstances of the 2nd call.
First of all, greater attention was given to raising the awareness of the experimenters in the 2nd
Open Call at an early stage of their participation in the project. To achieve this, an extensive
legal tutorial was organized during the Technical Training meeting in Madrid. During this
meeting the new partners were educated on the legal and ethical aspects related to the processing
of personal data in scientific research (and in general).
Moreover, the experimenters were asked to fill in the ethical checklist twice. The first time was
when they applied to the call; the intention of the core partners was to emphasize the
importance of the privacy issues in EXPERIMEDIA. The second iteration of the checklist was
submitted at the beginning of Year 3. In this instance the experimenters were assisted by the
legal partner in order to fully understand the consequences of their choices in the experiment
design. The checklists, with the detailed descriptions of the experiments, were later discussed at
a meeting with the Ethical Advisory Board and Data Protection Board (December 2013), where
specific recommendations were formulated for each experiment. The result of the
discussion can be found in the 2nd Open Call Ethics Review Report (D5.1.6). The experimenters'
progress in accommodating these recommendations is continuously monitored by the
legal partner, ICRI KU Leuven, which will assist the experimenters until the end of the
project and report to the rest of the Consortium. A final Ethics Review Report summarizing the
level of compliance of the EXPERIMEDIA project with the applicable legislation and ethical
principles will be submitted at the end of the project (D5.1.8).
7. Methodology in 2nd open call
The methodology is used by the experimenters both when planning and defining their
experiments and when implementing them. This section describes and evaluates how the
methodology was used in both these phases of the 2nd open call experiments.
7.1. Methodology in Planning
The experimenters' use of the methodology in planning their experiments is most visible in the
deliverables called "Experiment problem statement and requirement". This is the first deliverable
the open call experimenters present, and it is their way of coming to understand the
EXPERIMEDIA project and how their experiment fits into its systems. It is not a methodology
deliverable per se and has its own structure, so the methodology structure outlined above can
only serve as a guide and checklist for the experimenters, one that in the end has to be interpreted
according to the requirements of this specific deliverable.
Experiments differed in how the methodology was implemented. This is natural, since some
experiments have to place more emphasis on the systems architecture, while for others the user
experience is central to their methodology.
All experimenters have clearly defined learning objectives; however, some have defined them as
research questions in need of operationalization and others as technical learning objectives
concerning system behaviors.
7.1.1. Defining metrics
The level of detail at which metrics have been defined varies across experiments. This is
especially true of user experience-related metrics and has to do with the varying complexity
of those questions across the experiments. Some experiments are content with defining a general
level of satisfaction with or acceptance of the system among the users, while others are trying to
answer complex questions about user satisfaction and the value of the technology to the user and
therefore have to be more specific. No full list of questionnaires has been provided by any experiment.
When metrics could not be provided, either because they cannot be fully defined before the
experiment is set up or because initial experiments are needed to pin the needed metrics down,
the experimenter has been recommended to provide a roadmap for how the metrics will be
arrived at and what purpose they will serve in reaching the learning objectives.
7.1.2. Relating metrics
Carviren by Realtrack Systems, performed at CAR, is an example of an experiment that chose to
put emphasis on how metrics were related to each other. First they group the QoS measurements
they are going to use in relation to their learning objectives, in groups such as requirements and
usage that help them evaluate different objectives. Then they relate specific QoS measurements to
the corresponding QoE measurements. An example of this is the measurement of real-time
performance. A measured delay of 10 seconds in a video stream to a remote coach would not
technically be considered real-time, but a QoE questionnaire or interview with a coach taking part
in the experiment might find the interaction dynamics very satisfactory and have no problem
with the delay. By relating these two metrics to each other, the experiment can find out at what
QoS level the interaction dynamics that shape the user experience become disturbed, and
therefore find the acceptable level of QoS for this particular QoE in this particular situation.
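The pairing of QoS and QoE measurements described above can be sketched in code. The following is a minimal, illustrative example only: the data, the satisfaction scale and the acceptability cutoff are assumptions, not values from the Carviren experiment.

```python
# Hypothetical sketch: relate a QoS metric (stream delay, seconds) to a QoE
# metric (satisfaction score on a 1-5 questionnaire scale) to estimate the
# largest delay that still yields an acceptable user experience.

def acceptable_delay_threshold(samples, min_satisfaction=4.0):
    """Return the largest delay (seconds) at which the mean satisfaction
    score still meets the minimum acceptable QoE level, or None."""
    by_delay = {}
    for delay, satisfaction in samples:
        by_delay.setdefault(delay, []).append(satisfaction)
    acceptable = [d for d, scores in by_delay.items()
                  if sum(scores) / len(scores) >= min_satisfaction]
    return max(acceptable) if acceptable else None

# Paired observations: (delay in seconds, questionnaire satisfaction score).
samples = [(2, 5), (2, 5), (5, 4), (5, 5), (10, 4), (10, 4), (20, 3), (20, 2)]
print(acceptable_delay_threshold(samples))  # 10
```

In this illustrative data set, satisfaction only drops below the threshold at a 20-second delay, so 10 seconds would be reported as the acceptable QoS level for this particular QoE.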
7.1.3. Gathering QoE data
Special emphasis was placed in the 2nd open call experiments on the practicalities of gathering
QoE data, to ensure that the process could capture all moments of instant and post-experiment
feedback from the participants while remaining as unobtrusive as possible to the experience of
participating in the experiment.
QoE data gathering that requires active input from the participants was divided into two
categories: instant feedback, taking place while the participant is taking part in the experiment, and
post-experiment evaluation, which happens after the participant is done with the experiment. The
instant feedback evaluations are primarily done through the Babylon component and are
integrated into the experience of participating as much as possible. The task in gathering instant
feedback is to determine at what moment in the experiment the evaluation should take
place, whether it is triggered actively by the participant or emerges passively through monitoring
the state of the participation, and how it can be conducted simply enough
to require little interruption of the user experience while still providing the experimenters with
enough data. The most minimal approach is taken by SmartSkiGoggles, which is experimenting
with letting the participant "ping" the experimenters at a time and place of their choice,
whenever they want to comment on their experience; the experimenters can then
synchronize these time-stamped pings with quantitative data from the ECC and ask the
participants follow-up questions about what prompted the ping. Other experiments are
implementing the Babylon client to trigger an instant feedback evaluation at specific moments in
the user experience, such as right after a certain task has been completed.
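The synchronization of time-stamped pings with quantitative data could look like the following sketch. The data shapes, the one-sample-per-second series and the matching window are illustrative assumptions, not the actual ECC or SmartSkiGoggles interfaces.

```python
# Hypothetical sketch of the "ping" approach: match a time-stamped participant
# ping against quantitative metric samples (as might be pulled from the ECC),
# so experimenters can see the context and ask informed follow-up questions.

from bisect import bisect_left

def context_for_ping(ping_ts, metric_series, window=5):
    """Return metric samples within +/- window seconds of a ping timestamp.
    metric_series is a list of (timestamp, value) sorted by timestamp."""
    timestamps = [ts for ts, _ in metric_series]
    lo = bisect_left(timestamps, ping_ts - window)
    hi = bisect_left(timestamps, ping_ts + window + 1)
    return metric_series[lo:hi]

# Example: illustrative sensor samples once per second, and a ping at t=103.
series = [(t, 60 + t % 7) for t in range(100, 110)]
print(context_for_ping(103, series, window=2))
```

The experimenter would then review the returned slice of sensor data alongside the ping before following up with the participant.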
The post-experiment evaluation will be conducted with digital evaluation tools for most
experiments in the 2nd open call. The exception is PlayHist, which will have too many participants
evaluating at the same time to provide digital devices to all of them. The benefits of using
digital evaluation tools are that the results of the evaluation can be summarized immediately and
the experimenter can choose to ask the participants follow-up questions on any unexpected
results they see. For the experiments that need it, EXPERIMEDIA will
provide a self-hosted solution for creating, conducting and evaluating digital questionnaires.
7.1.4. Conducting a VIA
One partner decided to conduct a VIA: the PlayHist experiment by Tecnalia at FHW. This VIA
implementation is based on the original VIA from the 2nd methodology, but
since it is a small experiment it only covers the first stage of the VIA, which is about
outlining opportunities, risks and values. This is similar to the emphasis of VIA 2.0. Tecnalia
examines the impact of the experiment for visitors, venue management, museum staff and
content providers. Tecnalia also uses the VIA to frame its EXPERIMEDIA experiment in a
long-term perspective, framing it as a "Value Opportunity Phase" that will be followed by
further larger-scale testing at FHW after EXPERIMEDIA to continue the VIA.
There is an opportunity for experimenters to use the first run of their experiments to gather
initial user experience data while focusing mainly on the technical performance of their experiment
technologies. They can then use the 2nd run to go more deeply into quality of experience
issues. The methodology recommends that they conduct a VIA 2.0 before this second
run, in order to make their assumptions about the needs and matters of concern of users and other
stakeholders more explicit, and to better articulate how their experiment and their
technology have the potential to make an impact. The first experiment run should provide the
experimenters with enough experience and know-how of the venue ecosystem to
conduct a reliable VIA before the second run.
7.1.5. Game Design and QoE evaluation
The PlayHist experiment is the 2nd open call experiment that will make extensive use of game
design. Gaming is both the method by which the experiment will be conducted and what will be
evaluated from a QoS, QoE and QoL (Quality of Learning) perspective. This requires some
special considerations in the methodology of the evaluation. The goal of the experiment is to
evaluate the use of gaming for learning in an environment such as FHW. A risk in using games
in research projects is that the participants will evaluate the game itself and not the learning
situation. The state-of-the-art games that the participants are used to playing take years to
develop, sometimes by hundreds of employees with large budgets, so a small game developed for
research purposes cannot match these in terms of graphics and immersion. Instead, the game has
to be designed in a way that makes it clear that the learning is the key component and the game
is only there to enhance it.
PlayHist has the opportunity of using the game both in an evaluation phase and as a tool for
learning. In other words, the game can be structured to test acquired knowledge, for example
through a puzzle element that requires the user to draw on knowledge acquired throughout the
game, but it can also be used to strengthen the focus on learning, for example by structuring
learning elements as missions to be accomplished, with knowledge components spread out
through each mission. The game is then used in the process of acquiring knowledge, based on the
assumption that the immersion, involvement and enjoyment the game brings will improve the
Quality of Learning.
In the end, it will be the learning objectives of the experts engaged in developing the
experiment that determine the most suitable approach. To support this, PlayHist has the game
design report attached to this deliverable and the game design expertise of the Interactive Institute.
The game mechanics can also be used as experiment variables that are changed throughout the
experiment, with the different modes then evaluated against each other. For example, the
experimenters can use the game mechanics to test cooperative versus competitive
learning models, individual versus team-based learning, as well as different levels of immersion
and intensity of the gaming experience. If these shifts happen within a single gaming experience,
the experimenters can also choose to use integrated instant feedback evaluation to capture the
moods of the participants at different stages of the gaming experience. However, there is a
balance to be struck between capturing this evaluation and disrupting the immersive gaming
experience with interruptions for evaluation.
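The use of game mechanics as experiment variables can be sketched as a simple grid of conditions that successive runs cycle through. The variable names and values below are illustrative assumptions, not PlayHist's actual design.

```python
# Illustrative sketch: a factorial grid of experiment conditions built from
# two game-mechanic variables (cooperation mode x team structure), with runs
# rotating through the grid so each setup gets evaluated.

from itertools import product

COOPERATION = ["cooperative", "competitive"]
TEAMING = ["individual", "team"]

# Every combination of the two variables becomes one experiment condition.
conditions = [{"cooperation": c, "teaming": t}
              for c, t in product(COOPERATION, TEAMING)]

def condition_for_run(run_index):
    """Rotate through the conditions so each run evaluates a different setup."""
    return conditions[run_index % len(conditions)]

print(len(conditions))       # 4
print(condition_for_run(0))  # {'cooperation': 'cooperative', 'teaming': 'individual'}
```

Tagging each QoE evaluation with the active condition would then let the experimenters compare, say, cooperative against competitive play across runs.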
7.1.6. User Activation in Schladming Experiments
From the experiences of the driving experiments and the 1st open call experiments, the
consortium has concluded that activating user participants at the Schladming venue requires a
more structured approach, due to its character as an open environment and due to the
constraints that visitors to Schladming often face in deciding how to spend their limited time
there.
The approaches taken during the 2nd open call experiments are, on the one hand, using
participants from already established communities and, on the other hand, using a structured
recruiting process when individual users are required.
The first approach can be seen in the iCaCoT experiment, which will collaborate with skiing
coaches and their students in the first and second experiment runs. The advantage is that this is a
group that is already established in Schladming and has structured activities that can be
integrated with the experiment. For example, the ski students will perform their classes as usual
and only the coaches will be actively involved in the experiment.
The second approach is used when larger numbers of individual users are needed. Here,
recruitment will primarily happen through the already established living lab that Evolaris has
used for previous user tests. This consists of a database of suitable potential participants
who have volunteered to be called up for experiments from time to time.
It enables the experiments to approach participants who are willing to take part in
experiments, who fit the user profile of the experiment, and who can plan their
time in advance in order to take part.
8. Conclusion
Building on the lessons learned from the 1st open call experiments and the driving experiments,
the EXPERIMEDIA methodology was modified in the final year to take a more
structured and down-to-earth approach, one that integrates the work the experimenters
have to do to connect their experiment to the ECC and other EXPERIMEDIA components
with the methodological work of defining learning objectives, defining the
experimental metrics needed, and deciding on an approach to user activation. This approach has
made the work of the open call experimenters a smoother and more integrated process, which is
needed when they face the challenge of understanding the EXPERIMEDIA components and
the EXPERIMEDIA approach in a short time frame while at the same time facing both the
technical and methodological problems of setting up and conducting the experiment.
By building on the iterative trial and error of the EXPERIMEDIA experimental approach, the
project has managed to develop a methodology for conducting experiments in Future Internet
use that involves live venues and real users in open environments. The methodology is able to
guide an experimenter through a delicate process and make them aware of concerns they
might not be used to if they do not come from a research background or are used to doing
technical research in controlled lab environments. This methodology can serve as a model for
further research into the Future Media Internet that investigates both technical requirements
and the quality of user experience in the real lives of participants and venue stakeholders.
9. Appendix A: Gameplay Design Patterns for Public Games
(External Document)
10. Appendix B: Using Mobile Computing to Enhance Skiing
(External Document)
11. Appendix C: Experimenters Guide to Schladming
(External Document)