7/31/2019 D3.2.1 EXPERIMEDIA Support Procedures v1.0
This document defines the support procedures for users of the EXPERIMEDIA facility. It describes a general process and framework that integrates with each step of the experiment lifecycle (experiment design, usage of the EXPERIMEDIA facility, deployment of the experiment, and reporting) while taking the specificity of each domain into account.
D3.2.1
EXPERIMEDIA Support Procedures
2012-10-15
Rémi Francard, Marie-Hélène Gabrièle (FDF)
www.experimedia.eu
EXPERIMEDIA Dissemination level: PU
Copyright FDF and other members of the EXPERIMEDIA consortium 2012
Project acronym EXPERIMEDIA
Full title Experiments in live social and networked media experiences
Grant agreement number 287966
Funding scheme Large-scale Integrating Project (IP)
Work programme topic Objective ICT-2011.1.6 Future Internet Research and Experimentation (FIRE)
Project start date 2011-10-01
Project duration 36 months
Activity 3 Operations
Workpackage 3.2 Experiment Support
Deliverable lead organisation FDF
Authors Rémi Francard, Marie-Hélène Gabrièle (FDF)
Reviewers Diego Esteban (ATOS), Stefan (INFONOVA)
Version 1.0
Status Final
Dissemination level PU: Public
Due date PM9 (2012-06-30)
Delivery date 2012-10-15
Table of Contents
1. Executive summary
1.1. Scope
1.2. Audience
1.3. Summary
2. Introduction
2.1. Experimenter contributor of the EXPERIMEDIA project
2.1.1. An experiment is a project
2.1.2. Support domains and actors
2.1.3. Experiment critical actors
2.2. Experiment overview key assumptions
2.2.1. Responsibilities and roles of the facilities operation
2.2.2. Operational review
2.2.3. Troubleshooting and escalation
2.3. The critical success factors of the support procedure
2.3.1. During design phase VIA and PIA
2.3.2. Data collection
2.3.3. Responsibility assignment matrix
2.3.4. Continuous improvement
3. A typical EXPERIMEDIA experiment life cycle
3.1. Support domains at the design stage
3.1.1. Value impact domain
3.1.2. Privacy impact domain
3.1.3. Technical asset domain
3.2. Supporting the EXPERIMEDIA meta-components
3.2.1. The experiment component domain
3.3. Deployment at target venues
3.3.1. Provision services
3.3.2. Activate services and gating
3.3.3. Specific tool integration
3.4. Support publication of the results
3.4.1. Meta-component performance analysis
3.4.2. Real-time and historical report
4. Support procedures baseline
4.1. The core flow
4.1.1. Generation
4.1.2. Assignment
4.1.3. Prioritization / escalation
4.1.4. Allocation / execution
4.1.5. Updating / evaluating / gating
4.2. Severity impact and domain definition
4.3. Prioritization
4.4. Schedule and impact assessment
4.5. Main actors
4.5.1. Experimenters
4.5.2. Experiment's coordinator
4.5.3. Agent technical domain experts, level 2
4.5.4. Agent technical domain expert, level 1
4.5.5. End user communities
4.5.6. Support core team
4.6. Procedure key tasks
4.7. The statement of work
4.8. Support system assessment
4.9. The support service management
4.10. Troubleshooting and escalation
4.11. Role and responsibility matrix assignment
1. Executive summary
1.1. Scope
This document presents the support procedure as a framework whose objectives are twofold: to take a generic approach capable of integrating the novelties arising from the development of the EXPERIMEDIA project and its associated domains, and to provide effective support to the experimenters. The framework is founded on a core issue-tracking procedure, associated with role and responsibility definitions, to manage the overall support process. Its main objective is to simplify the use of the facility and to establish the principles of assistance for the different actors at each phase of the experiment.
Technical support follows a generic procedure: a core issue-management flow that is then specialised for each venue's assets and each experiment's meta-component usage. In a second step, the support procedure is installed for each experiment, starting at the experiment kick-off with training and a full review gathering the key stakeholders of the experiment.
The document does not cover the details of installing the support procedure or its associated documents; these will be detailed in the training deliverable. Overall, installation requires an assessment that must be conducted at an early stage of the experiment. This approach is necessary so that the support can be built at the same pace as the experimenter's evolving needs and the evolution of EXPERIMEDIA.
1.2. Audience
This document is intended primarily for the experimenters and for the EXPERIMEDIA and venue coordinators.
1.3. Summary
We review the different hypotheses, moving from a global point of view down to the detailed roles and responsibilities of each actor who takes part in the support, whether as a customer or as a provider of it. The strong hypotheses are listed and reviewed; their purpose is to isolate the patterns common to any experiment and to pinpoint the best practices to be collected and shared with current or future experimenters. The actors of the experiment and their associated responsibilities are then reviewed, and the specificities of the venues and their impact are discussed.
The document covers, as far as currently possible, the known support domains relevant to the planned experiments. To allow the process to improve itself, a document stating the expectations and following up the experimentation assesses the risk of failure or quality issues, together with their associated impact. The resolution history of each incident, across the whole experiment lifecycle from design to closing, is also collected and reviewed at the end of the experiment to improve the overall support procedure.
2. Introduction
2.1. Experimenter contributor of the EXPERIMEDIA project
What is experiment support, and which of its domains are critical to the experimentation? The definition and objectives of the experimentation are set out here, within the project context, to establish the scope of the technical support.
The objective is twofold:
1) to provide a collection of best practices to experimenters and the overall project,
2) to implement a practical flow to support the experiment objectives.
2.1.1. An experiment is a project
An experiment requires the coordination of different technologies and the orchestration of the different venues' assets, while ensuring that the goals and metrics set by the experimenter are reached. This challenging objective can only be accomplished through the coordination of the different contributors and actors at each step of the experiment's progress. The operations support has to be reactive and able to cover different technical domains. We therefore need to make sure that objectives are properly set and understood, so that operations can deliver at the expected level. In particular, at the early stage of the experiment a statement of work is written to collect the details of the following subjects:
• The objectives of the experiment, which must be set qualitatively and quantitatively, and reviewed.
• The assets provided by the venue, which must be reviewed so as to ensure:
  o the possibility of observation and interaction during a defined period of time,
  o the usability of the facility throughout the progress of the experiment,
  o the reliability of the services during the experiment, and
  o the scalability of the services and the means to set them up at the different phases of the deployment.
• The experiment lifecycle, which must follow EXPERIMEDIA project management at each step. This includes the planning and provisioning of the venue's infrastructure; the design of the experiment, which requires decomposing the services into capabilities while assessing the venue's constraints; the orchestration of the integration, a major critical step, up to its final deployment; and finally the analysis of the collected data and the final report of the experimentation.
• The specificity of the ecosystem, which requires dedicated approaches to engage local organisations from the venue in supporting the experimenter's specific requirements. For instance: what access might be necessary, and what permissions or security controls on the premises should be requested. The experimentation may have to adapt to the venue's procedures or rules regarding the usage of its infrastructure.
• As a consequence of the unknown specifications of the venues, a document will allow the risks to be assessed in detail at different levels and refined along the EXPERIMEDIA project's development, experiment after experiment. The overall project will benefit from gathering all this information to improve the experimentation step by step.
2.1.2. Support domains and actors
As presented before, there are multiple domains to consider for the support to be effective. Most of them require at least basic knowledge of the domain, while some others require sound expertise. Each experiment will extend the support from the partners to the local actors and experts within the facilities. A list of the actors as known today is key to understanding who should be involved, and how, depending on their motivations. We can define three kinds of actors:
• The end users must be supported to ensure the success of the experiment; the support must establish a baseline of the quality of experience (QoE) for the most common usage. A link between the experimenter and the end users must be developed when setting the business metrics according to the VIA. At that time a panel is identified and gathered from among the participants of the experiments. This is the opportunity to involve a subgroup of end users in the escalation process; they will be able to help diagnose and validate issues.
• The experimenters are driving the success of the project. They have to be able to monitor and adjust the experiment when necessary. In particular, they can close incidents that could compromise the experimentation and the established QoS/QoE. To do so, a specific support domain must be covered for the experimenter at each phase of the project:
  o During the experiment design: asset and meta-component integration and usage, through Q&A, referenced guidelines and a specialist in the concerned domain from an EXPERIMEDIA partner.
  o During the VIA analysis: in order to build the KPIs and associated econometrics, the experimenter may have to develop specific elements to access the necessary metrics.
  o During the PIA analysis: to address the public privacy domain, the experimenter must understand the potential constraints and be able to scope the experimentation accordingly.
  o During the field trials: the meta-components will vary in maturity and may require adjustments.
• The venue is hosting the experiment and as such must agree to the experiment's objectives. As the sponsor of the experimentation, the venue is involved in key decisions regarding the experimentation and must therefore be informed of the experimentation's issues. The venue provides visibility and the necessary technical coordination of the assets provisioned for the experiment. To operate the assets in coordination with the experimentation, the venue may require technical support. This support will help the venue's management team decide the course of action to help the experimenter close an issue.
After the first experimentations we can envision real operations; the support will then have to cover:
• a Network Operations Centre (NOC) assessing the integration of the different providers and their associated SLAs,
• business support systems from the main operator,
• engineering support and/or analysis of more specific domains, such as:
  o meta-component functionalities and newer developments,
  o adapted engineering capability to qualify the venue assets.
2.1.3. Experiment critical actors
Among all the actors participating in the experiment, it is key to focus on those whose resources are critical:
• The venue has several interfaces and functions to help the experimenter:
  o As described in 3.1.2, General facility operation, Section 3: "Every venue will set a person to be permanent in contact with experimenters." This person plays a key role in providing transparency about the experimentation's needs and in coordinating the escalation of problems within the organisation.
  o The local business ecosystem coordinator, whose role is to govern the experiment together with the local business actors, could be the representative of the local tourism office.
• The experimenter:
  o The experimenter must designate a representative of their organisation to coordinate and be the main interface in the support.
  o At this early stage, the main effort of the experimenter is focused on integrating their technology with EXPERIMEDIA's meta-components. A technical expert or architect of the experimenter's solution, for instance, should be considered to interface with the technical teams of the partners.
• The end user / community of users:
  o The end-user panel could also be involved as beta testers (with more restricted access to functionalities), in order to give feedback to the testers at the early stages of deployment.
• EXPERIMEDIA's coordinator of the experiment:
  o The experiment is a project and as such requires local coordination; a person from the EXPERIMEDIA consortium is chosen as the referee for the experiment.
  o This person coordinates and checks the satisfaction of the different actors.
2.2. Experiment overview key assumptions
In order to conduct the experiment according to the defined objectives, the support must fulfil its task at a pace matching the facility's capabilities. There will still be many unknown factors related to the experiment; despite that, we should define the support of an ideal experimentation.
2.2.1. Responsibilities and roles of the facilities operation
During operation, the facility is in charge of the SLA integration between the technologies' capabilities and the experimenter's objectives. With regard to those SLAs and experimenter objectives, QoE objectives shall be defined so as to measure end-user satisfaction. Based upon the SLA, QoE and QoS, the facility must install and deploy the meta-components that will be used in the experiment, following the experimentation's agenda.
A facility coordinator is therefore appointed, who will manage the contracted SLAs with the different asset providers and keep the goals and expectations of the experimenter clearly within the venue's constraints. For instance, as stated in 3.1.2, Section 4: negotiation of facility access must be done in agreement with the venues and the experiment, so that machines, servers, cameras or additional equipment and resources that are not present in the venue can be operated. The facility coordinator will make sure that the basic rules are followed and the objectives of operations are met. As a result, the coordinator must be involved in the escalation process every time an issue compromises the results and objectives of the facility operations.
2.2.2. Operational review
Daily or weekly reviews will be scheduled according to the experiment's length. The reviews will have to be arranged with the coordinator, depending on the local venue's availability. The objective is to report on troubleshooting during the different phases and to make sure that issues are followed up until completion and correction.
A dashboard is set up with the KPI list and reviewed during the operational support review. This is obviously only possible when the data can effectively be assessed over the right time period and established as leading indicators of the experiment's objectives. These elements will be defined in the statement of work document and finalised during the VIA assessment.
2.2.3. Troubleshooting and escalation
The experimentation requires a complex assembly of sub-systems and software, and by nature it may reveal weaknesses of first implementations. The troubleshooting process is proposed to resolve technical problems. A core team drives the process and ensures the follow-up of the resolution. The core team will be able to direct an issue to the proper technical specialist and to analyse, solve or escalate it.
The core team shall track the administration of the resolution of each issue, from its declaration to its diagnosis and resolution. The intent is to measure the response time and cross-check the gap with the SLA objectives. This analysis can also be reviewed at the closing of the experiment, to build recommendations for the next experimentation and improve the overall support process.
The specificity of the different tools used for the experiment (mostly open source) carries the risk of preventing any escalation. It is important that this kind of issue be cornered by testing and qualified by a failure-mode analysis at an early step, to avoid a blocking issue during the deployment phases.
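To make the tracking concrete, the declaration-to-resolution flow described above could be recorded in a minimal ticket structure. The sketch below is illustrative only: the severity names, SLA targets, roles and dates are invented placeholders, not values defined by this deliverable.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical severity-to-target mapping; real targets would come from
# the statement of work agreed between the venue and the experimenter.
SLA_TARGETS = {
    "critical": timedelta(hours=4),
    "major": timedelta(hours=24),
    "minor": timedelta(days=5),
}

@dataclass
class Ticket:
    """Minimal record tracking an issue from declaration to resolution."""
    summary: str
    severity: str
    declared_at: datetime
    resolved_at: Optional[datetime] = None
    escalated_to: Optional[str] = None  # e.g. a level-2 domain expert

    def escalate(self, expert: str) -> None:
        # The core team redirects the issue to the proper technical specialist.
        self.escalated_to = expert

    def resolve(self, when: datetime) -> None:
        self.resolved_at = when

    def met_sla(self) -> Optional[bool]:
        # Cross-check the measured response time against the SLA objective.
        if self.resolved_at is None:
            return None
        return self.resolved_at - self.declared_at <= SLA_TARGETS[self.severity]

t = Ticket("Video stream drops at venue", "critical", datetime(2012, 6, 1, 9, 0))
t.escalate("streaming domain expert (level 2)")
t.resolve(datetime(2012, 6, 1, 12, 30))
print(t.met_sla())  # resolved within the 4-hour critical target
```

Reviewing such records at the closing of the experiment would directly support the recommendation-building step mentioned above.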
2.3. The critical success factors of the support procedure
2.3.1. During design phase VIA and PIA
The PIA implementation, as a first stepping stone of the methodology, should be the opportunity to corner the needs for specific ethical support. The ethical boundaries and responsibilities must be clearly set (at the boundaries between the venue, the experimenter and the end-user communities) so that there is no further need for any legal support during the experiment.
The VIA implementation will help determine the econometrics and key indicators that should be monitored to prove the sustainability of the business model. The overall experiment must set its priorities in terms of the key econometrics that will allow conclusive observations regarding sustainability. The support system must ensure that these priorities are met during the experimentation.
2.3.2. Data collection
Collecting data is the keystone of the functions necessary to manage the experiment. This service is central to the whole experiment. The support system should therefore consider data collection as one of the core services of the experiment.
2.3.3. Responsibility assignment matrix
The support procedure defines responsibilities and proposes their assignment to the different actors. This matrix is to be fine-tuned during the assessment of the venue facility, at an early step of the experiment's design.
Once an incident requires support, a number of tasks, with their associated roles and responsibilities, should be assigned. The main objective is to define clearly who should do what in the team, in order to respond in the most efficient and timely way.
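Such an assignment matrix could take a RACI-style form. The sketch below is a hypothetical illustration: the tasks and role names are placeholders, not the matrix this deliverable defines, which is fine-tuned per venue during the facility assessment.

```python
# Illustrative RACI matrix: for each task, who is Responsible (R),
# Accountable (A), Consulted (C) and Informed (I). All entries are
# invented examples, not assignments defined by the deliverable.
RACI = {
    "diagnose incident": {"R": "support core team", "A": "facility coordinator",
                          "C": "domain expert", "I": "experimenter"},
    "approve workaround": {"R": "facility coordinator", "A": "experiment coordinator",
                           "C": "venue contact", "I": "end-user panel"},
    "close incident": {"R": "experimenter", "A": "experiment coordinator",
                       "C": "support core team", "I": "venue contact"},
}

def who_does(task: str, letter: str) -> str:
    """Look up which actor holds a given RACI role for a task."""
    return RACI[task][letter]

print(who_does("diagnose incident", "R"))  # support core team
```

Encoding the matrix as data rather than prose makes it easy to review and fine-tune at the venue assessment step.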
2.3.4. Continuous improvement
Overall, EXPERIMEDIA is an evolutionary project in which each experiment validates and improves the methodology and collects best practices.
The main idea is to make solving common problems quick and simple. The intended process is that, as a technician enters a new ticket for an incident, the application presents them with previous tickets and solutions that may be related, and can thus offer a quick answer. This way, knowledge is not lost and can easily be leveraged even with new experimenters and/or new partners.
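One simple way such an application could surface related past tickets is keyword overlap between summaries. The sketch below is an assumption about how this might work, not a description of any tool chosen by the project; the past tickets and threshold are invented.

```python
def tokens(text: str) -> set:
    """Crude tokenisation: lowercase words of a ticket summary."""
    return set(text.lower().split())

def similar_tickets(new_summary, past, threshold=0.25):
    """Rank past (summary, solution) pairs by Jaccard overlap of summaries."""
    new = tokens(new_summary)
    scored = []
    for summary, solution in past:
        old = tokens(summary)
        score = len(new & old) / len(new | old)
        if score >= threshold:
            scored.append((score, summary, solution))
    return sorted(scored, reverse=True)

# Invented example history for illustration.
past = [
    ("camera stream frozen at venue", "restart the capture service"),
    ("login page unreachable", "renew the TLS certificate"),
]
hits = similar_tickets("stream frozen on main camera", past)
print(hits[0][2])  # suggests: restart the capture service
```

A real ticketing system would use richer matching, but even this level of lookup keeps prior solutions visible to new experimenters and partners.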
At an early step we shall define a repository as a baseline for each experiment, to collect its best practices, including the operational support procedures, so as to enable future operational excellence.
3. A typical EXPERIMEDIA experiment life cycle
Each experiment follows the same process and completes a risk assessment at the early stage, before the implementation of the experiment. The experimenter is assisted by a core support team responsible for fulfilling the objectives of the SLA and QoE. Each experiment will have a core team that has defined the SLA and QoE; based upon these, we should define the support procedure needed to achieve the objectives.
3.1. Support domains at the design stage
3.1.1. Value impact domain
The early phases of the experiment are the opportunity to corner the needs of the experimenter and frame the expectations of the venue. At that stage the assessment and design of the support system can start with a risk assessment. Assessing the impact of an incident during the experiment is a sound exercise to conduct with the experimenter and a representative or key stakeholder of the venue. The experiment scenario must then be reviewed from an incident perspective: a "what if?" exercise is to be conducted in an open session with the actors, to understand and negotiate the achievable service level. The recommendation, which depends on the targeted audience of the service, will be set with the experimenter.
3.1.2. Privacy impact domain
The Privacy Impact Assessment is one of the key concerns for the end users, as personal information may be collected during the experiment. The privacy assessment must include an item concerning the support domain's responsibilities, so as to validate, where needed, the constraints that apply to the support during incident resolution.
3.1.3. Technical asset domain
The components in the experiment come from different sources (open source, products from partners, products from labs and universities) and have different levels of maturity (as explained in the test and validation document, each is associated with a certain level of maturity). Some parts come from the venue's infrastructure, others from the partners. As a result, the range of domains to be covered by the support can be vast and, to some extent, unreachable for a single person, all the more so when components are of open-source origin. This last case might make it impossible to resolve an issue, through a lack of expertise and/or the limits of the available resolution paths.
During the design phase and before deployment, it is necessary to assess each technical domain needed to support the experimentation. A list of the different components, each associated with its respective domain, should be provided at the end of the design phase, so as to prepare and verify the achievable level of support, with appropriate planning of expert contacts and availability. The technical assessment should deliver a detailed technical asset-domain checklist covering:
• each tool chosen to operate the EXPERIMEDIA facility;
• the integration of quality content production management support, mostly the domains concerning transmission and reproduction of the content;
• how the risk assessment of below-SLA service levels must be conducted;
• the test and qualification procedures of the facilities, according to the different indicators:
  o Quality of Service (QoS),
  o Quality of Community (QoC),
  o Quality of Experience (QoE).
The overall EXPERIMEDIA project covers a diverse and complex assembly of technical developments. Each experiment is a system of systems designed from different technical assets. The inventory of the technical domains at the early stage of the experimentation is critical to ensure that the support procedure is effective. Missing knowledge in one of the technical domains increases the risk of a failure of the facility operation, putting the experimentation objectives at risk. The facility coordinator will have to make sure that this kind of risk is clearly identified and, if possible, mitigated at the beginning of the experimentation design.
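A minimal form of this inventory check can be automated: map each component to its technical domain, then flag the domains for which no expert is planned. All component names, domains and partners in the sketch below are hypothetical placeholders; the real inventory comes from the design-phase assessment.

```python
# Invented component-to-domain inventory for illustration only.
components = {
    "AR overlay app": "augmented reality",
    "UGC portal": "content management",
    "venue camera rig": "live video streaming",
}

# Invented expert coverage: which partner covers which domain.
experts = {
    "augmented reality": "partner A",
    "live video streaming": "partner B",
}

def uncovered_domains(components, experts):
    """Return the domains required by the experiment that no expert covers."""
    required = set(components.values())
    return sorted(required - set(experts))

print(uncovered_domains(components, experts))  # ['content management']
```

Running such a check at the end of the design phase makes the "missing knowledge" risk visible to the facility coordinator before deployment.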
3.2. Supporting the EXPERIMEDIA meta-components
The meta-components are the pillars of the experimentation. Once integrated and tested, they should not be at the root of every incident. They are also the source of all the technical domains that must be covered to support the experimenter.
3.2.1. The experiment component domain
At an early step of the facility, a document should state the impact of the failure of each meta-component. This failure-mode assessment helps anticipate the experiment's support needs. It serves as a baseline clarifying the maturity of each component and stating the impact on the experiment, in terms of QoE and QoS, for the end user and the venue. This document allows the support to assess what is achievable and to determine clearly which domains can be covered. For instance:
• the 3D internet tools and services,
• the augmented reality tools and services,
• the user-generated content (UGC) technologies,
• the streaming and social networking APIs,
• the tracking of the location of people and things in real time,
• the custom-developed sensor-based user interfaces,
• the sound-based navigation in a real-world environment,
• the live video streaming.
A checklist of all the component technologies and their attached domains is reviewed at the experiment design stage. This checklist allows the support to identify exhaustively all the necessary support domains. As stated before, a missing item on this list would result in a loss of support capacity and would consequently impact the SLA, QoS, QoE and QoC.
3.3. Deployment at target venues
3.3.1. Provision services
At this stage the support team will ensure that the infrastructure and the necessary technology
support are available at the date of the experiment. A mission statement scoping the experiment,
together with the support assessment discussed earlier, will be the reference used to book and
review the required expertise domains. The sequence of deployment (partial sub-system or
component test, integration test, test in the field, go live) will allow a smooth ramp-up of the
requested support, or a request to postpone if necessary.
3.3.2. Activate services and gating
External conditions will, in certain cases, impact the need for support and the capability of
operations. For instance, weather may require postponing the launch of the services, in particular
in Schladming.
In the same way, an identified risk may involve a specific domain expertise or the availability of
an asset. In this case the decision to activate the services will be taken with the consent of the
core team. Many decisions taken during the experimentation will impact the resource usage of
the support.
3.3.3. Specific tool integration
For instance, Infonova's solution, accessed through a web service interface, will involve specific
domains of knowledge. We expect that once the meta-components to be used have been
identified during the assessment, each specific domain of support will be associated with a
person.
We propose to fill in a skill matrix that will be used during each experiment so that the support
can effectively re-route expert requests to a designated person.
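As a minimal sketch, such a skill matrix could be encoded as a mapping from technical domain to designated experts, with a fallback to the core team when no expert is registered. All domain and expert names below are illustrative placeholders, not project data.

```python
# Hypothetical skill matrix: technical domain -> ordered list of designated
# support persons. Names and domains are illustrative assumptions only.
skill_matrix = {
    "web-service-integration": ["level2-agent-A"],
    "content-management": ["level2-agent-B", "level2-agent-A"],
    "live-video-streaming": ["level2-agent-C"],
}

def route_incident(domain: str) -> str:
    """Return the designated expert for a domain, or defer to the core team."""
    experts = skill_matrix.get(domain)
    if not experts:
        # No registered expert for this domain: the core team must decide.
        return "core-team"
    return experts[0]
```

A real deployment would also record availability and backup contacts per expert, but the lookup principle stays the same.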
3.4. Support publication of the results
3.4.1. Meta-component performance analysis
Although collecting data might raise privacy issues, information must be collected to
understand how the experiment is performing and to give a baseline for a business model
assessment. This limitation must be clearly understood and managed at an early stage of the
project thanks to the PIA. The information that matters most is that which serves the objective
of reporting on and assessing the business potential of the experiment. As a result, to support
this objective:
- The SLAs of the services will allow QoS agreements to be understood between experimenters and test bed providers. They will provide accountability and efficiency of test bed operations and validate the feasibility of the business model.
- Analyses will be reported to a best-practices knowledge database; at the end of the experiment the core team will report the actual results on particular metrics versus the expected ones, taking into account the performance and limitations of the meta-components.
3.4.2. Real-time and historical report
The experimented services must be capable of providing measurement analysis in accordance
with their VIA and PIA. An operational review will diagnose any drift or compliance issue
during the experiment deployment and activity. In case of drift, a ticket can be assigned so that
support can react effectively, either as a preventive action or as a maintenance task, before an
incident occurs.
In the same way, at the end of the experiment the history of incidents and related support is key
to maintaining a consistent view of the experiment for all the actors. Finally, EXPERIMEDIA or
the venue may have certain legal obligations. In this case the log of the different incidents can
provide supporting data to resolve potential conflicts between the parties.
The closure of an incident must be communicated at least to the originator and, depending on
its severity, should indicate how the resolution was implemented. This report can be used later
on to improve the service level and to ease the reconciliation of the metrics with the different
events which occurred during the experiment.
Finally, the severity level of the incident also determines who should be made aware of its
closure in priority. This will be detailed during the experiment support assessment. Any person
involved in the experiment should be able to receive the closure messages and have access to the
associated report. However, for operational purposes and to avoid unnecessary communications,
a list of all incidents is maintained by the support core team, which should publish a global
report on a regular basis. This report could be available on simple consultation to anyone who is
interested.
The final report will recap the different resolutions, and incidents should be gathered in a
historical database which would help to handle further issues and improve the facility operations.
4. Support procedures baseline
This chapter proposes to apply the generic model explained in the previous section to the
different planned venues. It relates the specificities of each venue that could modify the core
model of the support activities.
4.1. The core flow
The core flow defines the states that are implemented in the support procedure. The states
should be managed either by a specific tool fitting the experiment or through the support
process itself. As there is no one-size-fits-all solution, the decision will be taken with the core
team so as to fit the experiment objectives. The flow is outlined here as a reference and a
baseline for the basic communication exchanges of our own support operations.
There is no specific implementation proposed at this stage. The use of a specific or generic tool
could satisfy the basic requirements given below. As the main principle is to follow an incident
from its generation to its closure, a state is defined for each step.
4.1.1. Generation
Create the ticket with a description of the issue; at that point the stated domain should be seen
as a first tentative classification.
4.1.2. Assignment
Assign the responsibility to the person who will follow the incident. This would usually be the
originator reporting the incident, but for practical reasons related to the domain the assignee
will most often be someone from the support core team.
4.1.3. Prioritization / escalation
This attribute will condition the resources and schedule of the resolution. We will define
different levels of priority depending on the seriousness of the incident and/or how it affects the
objectives agreed with experimenters and venues. Escalation will also be considered according to
the needs (see below).
4.1.4. Allocation / execution
To ensure that the incident will be treated with technical assistance, someone at level 1 or
level 2 with the appropriate domain expertise is effectively allocated. The allocation is also
necessary to take care of the availability of the specialist, so as to manage the planning of each
domain expert. Even when a permanent person answers for a specific domain, their allocation
should be measured in order to understand the trends and to forecast more resources or extra
domain experts.
The execution of the task is followed up by updating the ticket. If the execution cannot be done
in time, the incident will have to go through a further updating step.
4.1.5. Updating / evaluating / gating
Depending on the evolution, the update will have an immediate impact on the current course of
action. It may imply a postponed resolution of the problem because a particular diagnosis is
needed, and require a further assignment. In both cases the time to resolve must be monitored.
Once level 1 support is engaged in the escalation process, each task is evaluated through a gating
decision: close the ticket, or pursue the core flow. This update of the incident ticket is done by
the actor in charge of its resolution.
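The core-flow states above (generation, assignment, prioritization, allocation/execution, updating/gating) can be sketched as a small state machine. This is an illustrative model only: the state names follow the section headings, while the exact transition rules, including the gating loop back from updating, are our assumptions rather than a prescribed implementation.

```python
# Illustrative ticket state machine for the core flow. State names follow
# the 4.1.x headings; the allowed transitions are assumptions.
TRANSITIONS = {
    "GENERATION": {"ASSIGNMENT"},
    "ASSIGNMENT": {"PRIORITIZATION"},
    "PRIORITIZATION": {"ALLOCATION"},
    "ALLOCATION": {"EXECUTION"},
    "EXECUTION": {"UPDATING"},
    # Gating: pursue the flow (re-execute or re-assign) or close the ticket.
    "UPDATING": {"EXECUTION", "ASSIGNMENT", "CLOSED"},
    "CLOSED": set(),
}

class Ticket:
    def __init__(self, description: str):
        self.description = description
        self.state = "GENERATION"
        self.history = ["GENERATION"]  # audit trail for the historical report

    def advance(self, new_state: str) -> None:
        """Move the ticket to a new state, rejecting illegal transitions."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)
```

Keeping the full transition history per ticket is what later feeds the historical database mentioned in section 3.4.2.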
4.2. Severity impact and domain definition
We assign each incident a severity level according to its impact on the experiment. The severity
level will allow EXPERIMEDIA and the venue to determine directions in which to improve the
facilities.
A severity level one incident impacts the operations of the experiment through critical system
operations, the network or a key application, or is an imminent problem which will result in the
incapacity to continue the experiment. A total loss of service on the following key systems or
applications falls within the scope of a severity one incident: a major part of the venue assets;
the content management system; domains defined at an early stage of the experiment. These
domains will be identified as clearly impacting the objectives of the experimenter.
At the next level, a severity two incident may result in a partial impact on the end user. We
consider, for instance, that a simple degradation of some part of the service on key system
components, applications or network connectivity should be treated as a severity two incident.
Finally, severity level three concerns the minor functions of the system: functions that are not
considered main objectives of the experiment are unavailable or unusable, or the system
performance is below minimum load requirements. Raising a severity three incident should be
seen as a simple notification.
For instance, minor deficiencies with minimal impact on end users, such as cosmetic problems
that do not interfere with job performance, should be considered level three. In this case
workarounds exist and should be acceptable; the service recovers on its own.
4.3. Prioritization
Not all incidents require the same course of action for resolution. Moreover, we may have
access to a limited number of resources. It is therefore necessary to prioritize, when possible, the
resolution of incidents.
Obviously, the severity level of the incident should be the main driver when setting the priority
of resolution, in particular for severity level one. However, the most common incidents will
most likely be of lower level and can quickly populate a long list of unresolved issues. The
priority should be set so as to make the best use of the resources at the different levels of
expertise.
For instance, in response to a minor incident, a first line of support should be involved to
clarify the impact and set the priority accordingly. In case the domain and/or a specific deep
technical expertise must be considered, the second line of support should then be involved. The
priority should be carried with the incident description, together with the diagnosis of the first
line. The second line of support can change the priority and modify the associated severity level.
Regarding priorities, the first and second levels should work as a team following practical rules
such as:
- highest severity level: critical operations failure (highest priority)
- second severity level incident (standard priority)
- third severity level or preventive request (lowest priority)
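The default rule above, severity drives priority, can be sketched as a simple mapping. The numeric severity scale and priority labels follow the list above; the function itself is an illustration, not a prescribed tool.

```python
# Default priority assignment derived from the severity rules above.
# Second-line support may still override this per the text.
PRIORITY_BY_SEVERITY = {
    1: "highest",   # critical operations failure
    2: "standard",  # partial impact on the end user
    3: "lowest",    # minor functions, cosmetic problems, preventive requests
}

def priority_for(severity: int) -> str:
    """Map a severity level (1-3) to a default resolution priority."""
    if severity not in PRIORITY_BY_SEVERITY:
        raise ValueError(f"unknown severity level: {severity}")
    return PRIORITY_BY_SEVERITY[severity]
```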
4.4. Schedule and impact assessment
As we have seen, the priority level is one way to manage the assignment of resources, as
escalation is from a more general standpoint. When immediate resolution is possible, either by
the level one or level two experts, there should be no need for any further mechanism. However,
an experiment could face an outage of key services for too long a duration, or require an
expertise not immediately available. In this case the whole experiment needs to be re-assessed
and possibly re-scheduled. Even if we may not fall into this kind of extreme case, it is necessary
to make sure that it can be handled. When the management of the incident requires a long
diagnosis (depending on the experiment, more than two consecutive days) or the intervention of
a specific expert, it is necessary to propose a schedule for the resolution and estimate the impact
on the experiment. In the extreme, it may result in closing the whole experiment or considerably
changing its planning. The core support team must provide a detailed report which will be
communicated to venues and experimenters and cascaded to the end user communities.
4.5. Main actors
We define an actor as a person who will play a specific role during the experiment. Although
their motivations may differ, the EXPERIMEDIA consortium must create, through the
processes of installing support, a global sharing and, over time, a certain alignment of objectives.
4.5.1. Experimenters
The experimenters are all the people who will be using the test bed facility for their experiment.
Usually designers or integrators of their technology, they will be represented by one person to
ease the communication between the different parties. Should the experimenters be different
entities, we consider that they ought to designate one person.
They are the first to look for effective and efficient support. In order to ensure that our support
meets their targeted objectives, the assessment will clarify and list these objectives; an operating
review report will be sent to them on a regular basis with their agreement, and they will be
invited to the review for the treatment of specific technical issues.
4.5.2. Experiment coordinator
The experiment will usually be founded on a consortium led by the experiment coordinator. The
coordinator will be kept in the loop and informed of the current performance of the support and
the level of incidents. The coordinator is part of the escalation procedure.
4.5.3. Agent technical domain expert level 2
After the support assessment is done, the different technical domains of support are clearly
listed. Operations will have to define the different domains of expertise, listed as an inventory in
the first step of the design.
For instance these could be: ethics / business panel / VIA issues / technical content
management / specific meta-component domains.
As discussed before, a skill matrix filled in by all the partners of EXPERIMEDIA will be
available. The skill matrix facilitates the Level 1 support actions.
The venue representative is part of the support core team on occasion: at first for the support
assessment review, to check impact and risk, validate provisioning and validate the SOW. The
venue representative must participate in the support operating review when an incident impacts
and/or reaches a severity which compromises the overall experiment.
4.5.4. Agent technical domain expert level 1
For all the domains we will identify a level 1 support person responsible for allocating the
person who will play the level 2 role. This person may relay to a specific expert when necessary.
Their role is to allocate a respondent to each incident and keep communication clear between
the experimenter and the coordinator of the experiment.
4.5.5. End user communities
End users are directly impacted by incidents. When conducting the support assessment at the
early stage of the design, a detailed report chapter on all potential impacts should be part of the
SOW.
4.5.6. Support core team
The support core team makes sure that decisions and prioritization are taken to satisfy the
experimenter's needs at the different stages of the experiment. The core team leader is the key
interface to the experimenter, to communicate and understand the seriousness of an incident.
The support core team reports are relayed to the experimenters through the experiment
coordinator. The core team is built with a representative of the venue, the person responsible
for the EXPERIMEDIA architecture, the level 1 support agent and, if necessary, the level 2
agent. When the situation requires it, the venue representative should also be accompanied by a
key actor of the venue ecosystem.
The support core team manages the escalation procedure and makes sure that incident
resolution is effectively managed at the right level of decision with the right assigned resources.
4.6. Procedure key tasks
The support system procedure encompasses: the writing of the support service statement of
work (risk assessment and objective agreement); the management of escalation, with the
objective of reducing response times for the experimenter as agreed in the statement of work;
the escalation procedure, in particular the core flow presented before, from incident ticket
management and monitoring of progress to the follow-up of correction assignment, allocation
and prioritization; and reporting and support service management.
4.7. The statement of work
The statement of work specifies:
- the objectives of the experimenter
- the technical risk assessment, in line with the domain checklist
- the provisioning of services and the QoS requested
- the names and contacts of the major actors of the experimenter support
- the severity level associated with an incident, according to the impact review
The document is used throughout the experiment as a reference, to ease the experimenters'
usage of the means of the experiment. It is also the document that contracts the level of service
that the experimenter can expect from the support operation system.
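The fields listed above could be held in a simple structured record so that the statement of work stays machine-checkable during the experiment. The field names below are our own rendering of the bullet list, not a project schema.

```python
# A sketch of the statement-of-work contents as a plain record. Field names
# paraphrase the bullet list in section 4.7 and are assumptions.
from dataclasses import dataclass, field

@dataclass
class StatementOfWork:
    experimenter_objectives: list            # objectives of the experimenter
    risk_assessment: dict                    # domain -> identified technical risk
    provisioned_services: dict               # service -> requested QoS
    contacts: dict                           # role -> name and contact
    severity_definitions: dict = field(default_factory=dict)  # level -> impact
```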
4.8. Support system assessment
When the experiment is decided (planning known, architecture validated, technical asset needs
known), a support system assessment review must be organized. The objective is to review the
impact of technical issues on the experimenter's objectives, so as to define the severity and the
support level required.
Figure 1. The support system assessment meeting check list
A statement of work is issued at the end of the meeting, clarifying the expectations, the
potential risks and the names of the people who will take on the different roles. The skill matrix
defining who the experts are is a prerequisite to conduct this meeting.
4.9. The support service management
On a regular basis defined in the statement of work, a core team will be named and will meet to
administer the technical support operations.
Figure 2. The operating support meeting check list
This meeting can also be requested in a situation where the experimenter is facing an issue
related to an unresolved incident. The experimenters or their coordinator must call the support
core team leader. Such a meeting may involve decisions that severely and directly impact the
experimenter's objectives because of the support response service level.
4.10. Troubleshooting and escalation
As described earlier, the core flow baseline is the keystone of the support system. It allows
proper management and proposes resolutions, whether technical or not.
There are two ways to end the process:
1) Either the incident can be technically resolved; in this case the experimenter or originator of
the incident is satisfied and informed that the incident is closed;
2) or the incident cannot be technically resolved; the EXPERIMEDIA coordinator, with the
core team leader, decides to review the next course of action and informs the experimenter. In
the latter case the incident is ended although there was no technical resolution. The core team
must report the decision in the statement of work document to track such a case. The
experimenter can review with the core team the impact on the experiment.
At the end of the experiment each gating decision will be reviewed and analysed to be properly
reported in the statement of work; this will complete the experiment report. The objective is to
review the overall process in order to improve the support process.
Figure 3. The support system procedure
G1: Gating 1: is the resolution technically completed? If not, let priority decide how to execute.
G2: Gating 2: is the resolution technically completed? If not, escalate with the consent of the
core team.
G3: Gating 3: can the team find a technical solution satisfying the experimenter? If yes, we
close; if not, we inform the EXPERIMEDIA coordinator that the incident cannot be closed.
CLOSE: The originator is informed that the technical incident is closed.
END: The venue and the EXPERIMEDIA coordinator decide how to negotiate the unresolved
incident and review the impact with the experimenter.
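The G1/G2/G3 gating sequence above can be expressed as a simple decision function. The boolean inputs stand in for the actual reviews at each gate; this is a sketch of the control flow, not an implementation of the meetings themselves.

```python
# Sketch of the G1/G2/G3 gating logic from Figure 3. Each boolean stands in
# for the outcome of the corresponding gate review.
def gate_outcome(g1_resolved: bool, g2_resolved: bool, g3_satisfying: bool) -> str:
    if g1_resolved:
        return "CLOSE"          # G1: resolution technically completed
    # G1 failed: priority decides how to execute, then re-check at G2.
    if g2_resolved:
        return "CLOSE"          # G2: resolution completed after prioritized work
    # G2 failed: escalate with the consent of the core team, then G3.
    if g3_satisfying:
        return "CLOSE"          # G3: satisfying technical solution found
    return "END"                # inform the EXPERIMEDIA coordinator
```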
4.11. Role and responsibility matrix assignment
The process described previously involves the different assigned actors from a global
perspective; however, it is also necessary to specify who is responsible, who is an actor
participating in the resolution, and finally who must be kept informed.
Table 1. RACI matrix.
R: Responsible (is leader of the task and will be responsible for its delivery)
A: Acting (is executing or helping to accomplish the task)
C: Consulted (must be part of the task as advisor, may not contribute directly)
I: Informed (must be informed of the task results or report should be communicated)
Roles (columns): End User/Experimenter team; Experiment Coordinator; EXPERIMEDIA
coordinator; Venue Representative; Core Team Leader; Core Team members; Level 1 Agent;
Level 2 Agent; Domain Expert Agent.
OPEN incident: A
GENERATION of the ticket: A, A/R
ASSIGNMENT of resolver
ALLOCATION: I, I, A/R
Diagnosis results: I, R
Resolution report: R, I, A
EXECUTION by expert: I, I, I, I, R, A
Final Resolution report: I, I, C, I, R, A
PRIORITIZATION decision: C, R, A
ESCALATION decision: I, C, I, R, A
G1 decision: I, I, R
G2 decision: I, I, I, I, R, C
G3 decision: I, I, I, R, I, I, I
CLOSE report: R, A, A, A, A
CLOSE decision: A, R
END report: R, C, A, A, A, A, A
END decision: I, R, C, A, I, I, I, I
STATEMENT OF WORK: A, A, C, R, A, A, A, A
ASSESSMENT MEETING: I, C, A, R, A, A, A, I*
OPERATIONAL MEETING: I*, I, I*, R, A, A
SKILL MATRIX: I, R, I*, A, A, I, I, I*
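A RACI matrix like Table 1 can also be kept as a small data structure so that, for any task, the responsible role can be looked up programmatically. The two sample rows below are hypothetical: the extracted table no longer shows which column each letter belongs to, so the role-to-letter assignments here are illustrative only.

```python
# Hypothetical encoding of RACI rows: task -> {role: letter}. Role names
# follow Table 1; the letter placements are illustrative assumptions.
from typing import Optional

RACI = {
    "CLOSE decision": {"Core Team Leader": "R", "Level 1 Agent": "A"},
    "END decision": {"EXPERIMEDIA coordinator": "R", "Experiment Coordinator": "I"},
}

def who_is_responsible(task: str) -> Optional[str]:
    """Return the role marked R (responsible) for a task, if any."""
    for role, letter in RACI.get(task, {}).items():
        if letter == "R":
            return role
    return None
```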
5. Conclusion
A comprehensive support process has been documented here for the benefit of the
experimenters and the rest of the EXPERIMEDIA project. The different actors and their
associated responsibilities have been defined, as well as the support procedures available at every
stage of the experiment lifecycle.