Usability Findings v2
Deliverable D3.5
MPAT File ID: MPATD3.5-UsabilityFindingsv2.docx
Version: 2.0
Deliverable number: D3.5
Authors: Andy Darby (ULANC), Matthew Broadbent (ULANC), Simona Tonoli (MEDIASET), Angela Brennecke (RBB), Simone Hollederer (RBB), Nico Patz (RBB)
Contributors:
Internal reviewers: Christian Fuhrhop (FRAUNHOFER)
Work Package: WP3
Task: T3.2
Nature: R – Report / O – Other
Dissemination: PU – Public / CO – Confidential
Status: Living / Final
Delivery date: 15.08.2017
Version of 15.08.2017 D3.5 – Usability Findings v2
page 2
Version and controls:
Version Date Reason for change Editor
0 19.05.2017 Initial structure Matthew Broadbent
1 22.05.2017 Consumer content added Andy Darby
2 30.05.2017 Integrated content from other partners Matthew Broadbent
3 31.05.2017 Content creator content added Andy Darby
4 31.05.2017 Prioritisation complete Matthew Broadbent
5 31.05.2017 Proofreading / Initial submission Christian Fuhrhop
6 08.08.2017 Updates after additional user tests Nico Patz, Angela Brennecke, Simone Hollederer
6 14.08.2017 Finalization of extended deliverable Christian Fuhrhop
Acknowledgement: The research leading to these results has received funding from the European Union's Horizon 2020 Programme (H2020-ICT-2015, call ICT-19-2015) under grant agreement n° 687921.
Disclaimer: This document does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of its content.
This document may contain material, which is the copyright of certain MPAT consortium parties, and may not be reproduced or copied without permission. All MPAT consortium parties have agreed to full publication of this document. The commercial use of any information contained in this document may require a license from the proprietor of that information.
Neither the MPAT consortium as a whole, nor a certain party of the MPAT consortium warrant that the information contained in this document is capable of use, nor that use of the information is free from risk, and does not accept any liability for loss or damage suffered by any person using this information.
Executive Summary
This document concludes the second phase of usability testing, part of T3.2. It contains the results of executing the test procedures outlined in D3.4, and forms part of the continual process of implementation, testing and feedback which is at the heart of MPAT.
D3.5 contains the findings of both consumer- and content creator-based testing, conducted at multiple partner sites with a varied set of representative participants. The results have been largely positive, showing that the tightly coupled process of testing and implementation has had a positive effect on the overall reception of MPAT from both of the perspectives considered key to the overall process.
Nonetheless, we have identified some important areas for improvement, which are summarised at the conclusion of this document and prioritised according to their anticipated impact on the user experience. These will provide the basis for further changes and improvements to MPAT moving forward.
Täter-Opfer-Polizei - The HbbTV App on Smart TV 44
Introduction 44
Task-Based Usability Test 44
General Questions on the TOP app (10 Minutes, based on Likert Scale) 46
MiniAttrakDiff General Evaluation of App 47
Glossary 48
Partner Short Names 48
Introduction
Usability forms an important part of the MPAT project. This is evidenced in the constant and ongoing process of planning, testing and evaluation. This is conducted in the context of the ever-evolving prototypes and technologies that are also a core contribution of the project.
In this, the second phase of testing, we move to incorporating the views and findings of the actual consumers; that is, the end-users of the applications built through the use of MPAT. Collecting these perspectives was a near-impossible prospect at the start of the project: there were no usable examples, and the project focus was primarily on the content creator interface. In this phase of testing, however, we were able to test with a number of interactive TV applications built using MPAT as part of other work within the project. This lends a level of realism and authenticity to the testing, and provides valuable feedback to other efforts in the project.
This is not to say that the content creators themselves are neglected in this process. They still play an important role in understanding usability. Through the collection and collation of this information, issues and problems can be resolved in a timely manner. The benefit of continuously testing, reporting on and fixing the tools developed as part of the MPAT project is that the tool will already have been through a number of iterations by the conclusion of the project. As with the user-facing testing, the technology developed in other parts of the project has matured to a level that means we can now expose users to a functioning version of the tool as part of this testing.
As mentioned previously, the usability testing in the MPAT project is an iterative process. This is illustrated by this deliverable building on the previous version (D3.2), delivered early in the project. Given the maturity of the project and the tools built within it, this testing is more comprehensive and realistic. This deliverable also implements the testing methodology outlined in D3.4, itself an iterative document spanning different phases of the project.
Consumer Testing
The first set of tests is concerned with consumers; that is, the group of users who will be exposed to the applications built through MPAT. As mentioned previously, the applications used for this testing are either those that have already been deployed in a real-world environment, or those that eventually will be. For more details of these applications, please see Task 6.1 in WP6 (Pilots).
The textual description that accompanies each subsection outlines which specific application(s) are used in the test. These brief introductions also describe the testing environment and the background of the participants in each test. The subsections are partitioned by project partner, each of which conducted its own consumer-facing usability testing on-site.
As the methodology for testing is outlined in the Test Plan v2 (D3.4), it will not be repeated in detail within this document. However, it is important to note both the commonalities and differences across each of these tests, which enables a level of tentative comparison. These aspects are outlined in each subsection.
Where multiple tests were conducted at the same site, this is noted in the introduction to the relevant subsection. In such cases, the results presented are collected over all test instances. In all cases, full transcripts of the tests can be provided on request, but are not included in this document for reasons of brevity and confidentiality.
ULANC
The testing completed at ULANC involved both the Fiat 124 Spider Application and the Band Camp Berlin Application. As both of these applications are presented in a non-native language (Italian and German, respectively), the participants were drawn from those able to speak these languages; no translation was made, so as not to disturb the intended meaning and content of the applications.
All ULANC consumer participants were students at Lancaster University: undergraduate students from the Department of European Languages and postgraduate students from the Lancaster Institute for the Contemporary Arts.
The tests took place in InfoLab21, home of Lancaster University’s School of Computing and Communications, in room A38, which is set up especially for testing and evaluation. The facilitator, a Research Associate on the MPAT project, conducted the test, and each participant received a very brief introduction to HbbTV technology. The test subject was seated directly in front of a Panasonic TX-40CX680B television; the TV was sited on a desk, with the Humax FVP-4000T box situated on the desk in front of the television. Only the remote control for the Humax was available to the participant. The test was conducted at a viewing distance of approximately 1.5m. The HbbTV application landing page was displayed on screen and the participant was invited to interact only with the app; the test was run without a broadcast signal being available. Participants were provided with no extra information about what was expected of them, nor how to interact with the television. Each participant was encouraged to navigate the app as they preferred, through an exploration stage and a task stage, and to express any positive or negative feedback on their experience. Clarifying questions were encouraged.
FIAT 124 Spider Application
The Fiat 124 Spider Application user test recruited postgraduate Design students from Lancaster University. This involved two participants, one male and one female, aged 32 and 35; one was an Italian native and the other had spent time living in Italy. Both had a strong interest in technology, but their interest in cars ranged from low to moderate, so they can be considered potential general viewers for the content proposed in the app developed through MPAT. The questions used to guide the interview can be found in the annex of this document. The findings from these questions are presented below, categorised according to a number of general topics:
Navigation
● The ‘Allestimenti’ page was deemed confusing, with a menu system not linking to any additional content.
● The participant expressed an expectation that navigation would be conducted only through arrows and OK buttons, and the majority of application conventions supported this view. This led to frustration when the video player’s navigation implementation did not follow the predominant application convention.
Interactivity
● The participant identified a lack of clarity with regard to the colour-change functionality.
● The autoplay and looping of the video content was considered both invasive and unnecessary.
● The participant found the hotspots unintuitive. They noted that the hotspots would need to be legible in the way they relate to specific image details; additionally, the distance between hotspot icons affects users’ expectations of how navigation controls will function. The arrow navigation for the hotspots’ vertical and horizontal navigation pathways was found to be inconsistent, and as a result was considered frustrating.
● The user noted that the coloured buttons on the home page did not indicate what they were intended for prior to the home button being activated by pressing OK. Waiting for the page to be activated before fully revealing what the feature is used for felt messy to the user. In addition, once set, the user wanted their colour selection to be stored.
Application Usability
● The display of a QR code was felt to be counterintuitive, as it indicated a requirement for another device while handling a remote control.
● The participant sought a greater degree of localisation, or locally relevant content, from the application, with particular regard to the promotional aspect.
● The user found the application reminiscent of a website and expected similar interactions as a result. They would have found it more intuitive if the arrow buttons activated changes, rather than having to activate changes through the OK button.
● The user did not enjoy the interaction experience, but noted that it remained usable.
Graphics and Design
● The lack of a cursor-style interaction was felt to necessitate a greater degree of visual feedback to the user. The arrow was considered useful but needed to be more obvious.
● The user found the application body text small, and thought it would impede legibility at the optimum viewing distance.
● The user found the speed of the image rotation distracting and frustrating, as it impeded their ability to read the featured images.
Acceptance and Perception
● Though the participant had reservations regarding several usability issues, they felt the application was straightforward to use.
● The application was considered to have insufficient information content.
● One user rated the application above average in terms of comfort of use (4/5) and graphics (4/5), and considered the application to be generally good. However, they viewed it as highly unlikely that they would use such an application privately (1/5).
● The other user rated the graphics as average (3/5), but considered the application to be good and gave it an above-average rating for comfort of use (4/5). However, they also viewed it as highly unlikely that they would use such an application privately (1/5).
Summary
● The requirement for simultaneous use of a second screen in addition to the remote control may prove problematic for users.
● HbbTV apps provide a ready opportunity for increased regionalisation and localisation of advertising and informational content.
● Users expect intuitive interactive elements as part of a consistent navigation experience.
● Users wanted a more fluid navigational interaction based on the use of arrows, more similar to website experiences.
● Despite both participants’ lack of familiarity with HbbTV apps on Smart TV, they had no difficulty with navigation and interaction. This is attributable both to their personal skills and to the similarity to using arrow keys to navigate on a PC.
Band Camp Berlin Application
The Band Camp Berlin Application user test recruited three undergraduate students from Lancaster University’s Department of Languages and Culture, all of whom were studying German. The group was made up of two male and one female participant, aged between 19 and 23. All of them had spent time living in Germany, but none were German residents. Their interest in youth music varied between participants. As such, this group can be considered as having both a general and a specific interest in the content within the application. The questions used to guide the interview can be found in the annex of this document. The findings from these questions are presented below, categorised according to a number of general topics:
Navigation
● The navigation is confusing on the landing page, with the intro video not being in focus.
● The participant was positive about the numeric shortcuts, but was slow to find them.
● When navigating away from some video pages, pressing the back button results in the video being presented at a smaller size on its own page; pressing back again returns the user to their expected destination.
Interactivity
● The participant automatically used exit to leave the video content; only after making this error did they consider using the back button.
● The participant disliked the autoplaying of video content and expressed a preference for thumbnails to access video content.
● The participant found that the fast forward feature lacked fluidity, as it operated in four-second intervals. As this was the case, they felt the video player should offer a progress bar, like other popular players, to allow users to find specific points in a video easily.
● The participant found the fast forward feature unintuitive and difficult to use.
Application Usability
● The participant found the application relatively straightforward to use, despite encountering some difficulties.
● The participant highlighted a noticeable lag when using the technology, which led them to double-press the back button and skip beyond their intended destination.
● The participant highlighted that help or guidance with the video player would be useful, and wanted specific help with the fast forward feature.
● The participant found broken numerical links frustrating.
● The participant found that, when using the arrow keys on the landing page, the way in which vertical and horizontal pathways had to be traversed was not intuitive.
● The participant found the button labelled ‘1’ to be used inconsistently across the site.
● The participant found it strange to be given navigation instructions on the Der Videodreh page, when most of the learning had to be completed without any other navigational instruction.
Graphics and Design
● The participant found the visual feedback provided too subtle and expressed a desire for greater colour contrast.
● While recognising that the design was aimed at the target market, the participant found the landing page ‘unprofessional’.
● The participant was very positive about the presentation of the application.
Acceptance and Perception
● The participant consistently thought they were in error when the application did not operate as expected.
● One user rated the application above average in terms of comfort of use (4/5) and average in terms of graphics (3/5), and considered the application to be generally good. They viewed it as moderately likely that they would use such an application privately (3/5).
● The user was positive about the hotspots implementation and had high expectations of their interaction capabilities being extended.
● Another user rated the application as moderate for comfort of use (3/5) and above average for graphics (4/5), and considered it to be generally good. They viewed it as moderately likely that they would use such an application privately (3/5).
● The participant was very positive about the entire application and was actively keen to engage with it further.
● A third user rated the application above average in terms of comfort of use (4/5) and highly in terms of graphics (5/5), and considered it to be generally very good. They viewed it as highly likely that they would use such an application privately (5/5).
Summary
● Communicate a clear method for users to exit video content.
● As an application is likely to host a significant amount of video content, the video player should provide fine-grained fast-forward and rewind functionality.
● High levels of colour and size differentiation should be used when providing visual feedback to the user; additionally, consideration might be given to visually impaired users when setting standard practices.
● Integration of the Slide Flow and website navigational models should be seamless for the user.
● Navigational instructions should be either offered as needed or stated upfront.
● The video player needs to indicate the range of its functionality and give clearer feedback to users.
● Despite all participants’ lack of familiarity with HbbTV apps on Smart TV, they had no significant difficulty with navigation and interaction.
MEDIASET – FIAT 124 Spider Application
Additional insights on the Fiat 124 Spider Application were provided by Mediaset. This testing involved a single male employee of Mediaset, aged 25, an Italian native speaker resident in Italy. He is moderately interested in technology and cars, and can therefore be considered a potential target viewer for the content proposed in the app developed through MPAT.
The test took place in the Mediaset Pilot Room, with two Mediaset observers present to run the test phase and give a very brief introduction to the new HbbTV technology.
The participant sat in front of the TV, tuned to the Boing Test Channel, and was invited only to interact with the app. He was provided with no extra information about what was expected to happen on the TV screen. The participant was encouraged to navigate the app as he preferred, and to express any positive or negative feedback on his experience. Clarifying questions were also encouraged.
Navigation
● The participant explored the app completely, exploiting all of its functionalities. The red circle call to action was not immediately recognised when it appeared on the screen (Boing Test Channel), which suggests that any interaction with the red button should be either more explicit or last for a longer time. Apart from this, the interaction was positive and fluent. The participant had no issues at all in fulfilling all the expected navigation tasks.
● The participant commented that interaction with some functionalities (accessing/returning to the main page) is uncomfortable with the remote control used. He also mentioned a desire for a circular menu (where the item following the last is the first).
Interactivity
● The participant recognised and understood the interactive elements, with the exception of a photo slideshow, which he mistook for an interactive element because of the presence of a grey arrow on the main scene of the app. He was comfortable with the change of command orientation, thanks to the corresponding change in the visual menu.
Application Usability
● The participant effectively controlled the app using the remote control, with no need for extra clarification. He was warned in advance about a known failure in some video playout, yet tried different approaches to it anyway, suggesting the app interface is rather intuitive. Overall use of the app suffered from a lag in image transitions and information refresh, which the participant remarked was “annoying”.
Graphics and Design
● The participant found the app’s graphics and design appealing, and would welcome some background sound in place of the channel audio. Arrows were misinterpreted on the ‘Dettagli’ page.
● Navigation through hotspots with the left and right arrows worked well, but the switch to the up/down arrows to navigate the main menu was considered graphically somewhat confusing.
Acceptance and Perception
● The participant was mostly satisfied with what he was able to experience on TV compared to other examples of TV interaction (MHP, Smart TV hubs), but noted that he would not really use it for deeper information research, as it has too few functionalities compared to a web experience. He would consider this interaction experience more useful for: 1. receiving “alerts/updates for content/services” rather than “full service/content consumption”; 2. situations where it could interact with his own phone.
Summary
● Despite the participant’s lack of familiarity with HbbTV apps on Smart TV, he had no difficulty with navigation and interaction. This is attributable both to his personal skills and to the similarity to using arrow keys to navigate on a PC.
● The evaluation of the user experience is closely related to the technical performance and fluidity of content delivery, to familiarity with PCs/smartphones, and to the possibility of following up with second-screen interaction.
● The general perception is that content proposed on the main screen, if judged interesting, should immediately enable exploration through the second screen.
● Various known errors when playing the Fiat video negatively impacted the perception of the proposed new technology.
● Any interaction proposed through a call to action (red dot) with the red button should be either more explicit (adding a text or image) and/or last for a longer time.
RBB – Band Camp Berlin Application
The participant was a 16-year-old school student, and thus represented the target group of the Band Camp Berlin app. He watches linear TV at least 2 hours per day, viewing TV films and series. He does not consume additional content via Smart TV or HbbTV.
The test took place within the premises of Rundfunk Berlin-Brandenburg (rbb). In addition to the participant, there were two RBB colleagues who moderated the proceedings and took notes.
To create a suitable atmosphere, the participant was reminded that it was not him who was being tested but the application, and that his criticism was just as important as any praise he might have. He was asked to express his thoughts whenever he felt like doing so.
Navigation
● The participant recognised and correctly used all interactive elements. The differences between classic navigation and Slide Flow’s variant did not cause him any difficulties at any point. Various tasks – navigation, finding and playing videos or interviews, returning to the start page – were easily accomplished by the participant.
● The participant commented that various help tips would be useful in the Slide Flow view. This was in the context of his wish to skip sub-pages in order to access information located on the last slides of the application. He found the video auto-start functionality disruptive, and in this context felt the app was too ‘static’.
Interactivity
● The participant recognised and understood most interactive elements (Episodes, Band, Game, Video Shoot), with the exception of the intro video, which he did not recognise as a video. A ‘play’ symbol should be introduced to support understanding of this interactive element.
Application Usability
● The participant effectively controlled the app using the remote control, without making further queries. He noted that a menu showing exactly where the user currently is would be useful, as would shortcuts to the Highlights. He found the app’s unpredictable crashing and reloading of sub-pages irritating, as he did the sound of the linear TV broadcast during his use of the app.
Graphics and Design
● The participant found the app’s graphics and design appealing.
Acceptance and Perception
● The participant awarded the grade of 2/Good, but noted that he would not use the app personally.
Summary
● Despite the participant’s lack of familiarity with HbbTV apps on Smart TV, he had no difficulty with navigation and interaction. This is attributable to his innate skills as a member of the target group, to the presentation/design of the app, and to the similarity to using the PC’s arrow keys to navigate.
● The evaluation of the app’s usability is clearly dependent upon its technical performance.
● Various updates have negatively impacted the stability of the app, resulting in faulty video playback.
● The unpredictable and repeated crashing of the app will compare unfavourably for this target group, who are users of mainstream video streaming websites and PC/smartphone apps.
RBB – Täter-Opfer-Polizei Application
In June 2017, Rundfunk Berlin-Brandenburg (RBB) conducted a lab test with seven testers. They evaluated the ‘Täter-Opfer-Polizei’ (TOP) app and were drawn from the app’s designated target audience.
In addition to the testers, there were two RBB colleagues who directed proceedings and took notes.
To create a suitable atmosphere, the testers were reminded that it was the app not the testers who were under evaluation, and that criticism was just as important as praise. Testers were asked to express their thoughts whenever they felt like doing so.
The testers were evaluated as ‘familiar with technology’. Of the 7 testers, 2 testers described themselves as being ‘very often’ asked for their technical knowledge; 5 of 7 were asked for their technical advice ‘from time to time’. Similarly, 5 of 7 ‘seldom’ asked others for technical advice, 2 of 7 would ask others for tech advice ‘from time to time’.
Of the 7 testers, 6 testers indicated that they commonly use mobile devices (Smart phones, tablets and laptops) at the same time as using TV. Most testers indicated that they use their devices to find additional information about current programmes, to search for answers to quiz questions or to chat with others. 2 of the 7 testers searched for alternative content.
The evaluation focused on the navigability and usability of the app, as well as whether testers would consume such accompanying material and whether they found it useful.
Navigation
● Testers had no difficulty with navigation, even though two testers attempted to use the menu button on the remote control to return to the menu overview page.
● Two testers expected that the right-hand side of the screen, containing the content overview, would be updated automatically; in fact, the user has to press the ‘OK’ button twice to achieve this. The need for repeated presses of the ‘OK’ button to find additional content or to begin video playback was seen by most testers as requiring excessive interaction; testers expected automatic updating and triggering of various functionalities. This was particularly the case for the subcategory ‘25 Jahre Kriminalreport’ (“25 Years of Crime Reports”), which displayed an info page with a request to press the ‘OK’ button. In this situation, all testers attempted to access the subcategories via the ‘OK’ button, not realising that they could not use it until they had first selected a particular subcategory’s text element by navigating right.
● Video player controls were successfully understood and used.
● The ‘Full Screen’ function caused difficulties. Four testers sought a full screen button on the remote control and attempted to activate full screen mode via the settings page of the TV itself.
● Two testers were irritated by the fast forward/rewind functionalities. It was unclear to them that the ff/rw buttons on the remote control should be pressed and held. It was also unclear to them why the rewind function did not work after the video had ended.
● It was not clear in exactly what time increments videos could be forwarded/rewound; all testers noted that a ‘skip’ functionality allowing a jump to the next sequence would be desirable. Testers also noticed that the player controls in ‘normal’ view responded differently from those in ‘full screen’ view. One tester felt that the control menu faded out too quickly.
● Almost all users tried to access the direct-access control buttons in the lower part of the app screen (numbers/colour buttons/back button) via the arrow keys, although these are intended to be used via the corresponding buttons on the remote control.
● One tester did not notice that the opt-out function in the ‘Datenschutz’ (privacy/data security) area had been updated following the pressing of the ‘5’ key. One tester requested that the ‘Turn Off Cookies’ text notice be made clear to every user.
Interactivity
● Testers’ interaction with the various interactive elements was successful and all basic categories were understood. Testers correctly understood which topics would appear under various headings (Sought, Solved, Prevention).
● Some expectations were inaccurate; e.g. under the heading ‘Cartoon’, two testers expected to find child-friendly material.
● In various subcategories, testers expected either more or different content. In the ‘broadcast’ category, four testers expected to find a selection of programmes, and two testers expected profiles of more than one suspect to be displayed in the ‘Sought’ category.
Application Usability
● Testers found the use of menus and general orientation unproblematic, although most testers were often unsure of exactly which level they were currently on and in which sub-category.
● The main menu closes when the user enters a sub-category, and the content itself does not indicate which category it belongs to. This prevents users from making a direct connection between content and category.
● In addition, some testers suggested that page numbers or a page range could be displayed, along with an indication of the amount of navigable content.
● Testers suggested that arrows only be shown when additional content exists and can be accessed via the arrow keys. The arrows on this screen are intended to indicate that the arrow keys should be used to select or scroll; users instead interpreted them as meaning that more content existed both above and below the visible items.
Graphics and Design
● In general, testers described the design as attractive. However, not all graphic elements successfully conveyed their function to users.
● One tester understood various preview images to be referring to videos that would play upon clicking, rather than their actual function of leading a user to extra pages or single still images.
● One tester failed to recognise the “image” icon and speculated as to the sort of content that might exist beneath this link.
Acceptance and Perception
The Likert Scale provides an evaluation of a user’s perception, acceptance and experience of the tested app. The scale evaluates six aspects, each rated on a scale of 1-6 (1 = not at all, 6 = exactly).
The use of the application was immediately clear to me. (Perceived learning curve): average 4.71, standard deviation 0.76
The use of the application was not problematic. (Perceived errors): average 4.71, standard deviation 0.95
I always knew where I was in the app. (Navigation): average 4.71, standard deviation 1.60
I immediately understood the menu descriptions: average 5.14, standard deviation 0.90
The graphics were clear and understandable: average 5.57, standard deviation 0.53
The app offers interesting additional content: average 5.14, standard deviation 0.90
Table: Likert scale evaluating Acceptance and Perception of ‘Täter Opfer Polizei’
● The majority of testers indicated that they found the app immediately clear to use, that it operated without problems and that they felt comfortable using it. Perceived learning effort, error rate and navigation were each evaluated at an average of 4.71. Individual perceptions of the error rate and assessments of navigation varied, however: standard deviations of 0.95 and 1.60 respectively indicate a lack of uniformity in the testers’ perceptions.
● Testers indicated that menus were immediately understandable and that the app presented interesting additional content; both statements averaged 5.14.
● Graphic design was described as very clear and understandable, and was particularly well received with an average value of 5.57.
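The averages and standard deviations above are straightforward to reproduce. As an illustrative sketch, the following assumes a hypothetical set of seven responses chosen to be consistent with the 4.71/0.76 figures reported for the first statement; the per-tester raw data is not included in this deliverable.

```python
from statistics import mean, stdev

# Hypothetical responses (scale 1-6) from seven testers for the statement
# "The use of the application was immediately clear to me"; chosen to be
# consistent with the reported figures, not taken from the actual raw data.
responses = [6, 5, 5, 5, 4, 4, 4]

average = mean(responses)      # arithmetic mean of the responses
deviation = stdev(responses)   # sample standard deviation (n - 1 denominator)

print(f"average: {average:.2f}, standard deviation: {deviation:.2f}")
```

Note that a sample standard deviation (dividing by n - 1) is assumed here; the deliverable does not state which variant was used.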
A second method of analysis is ‘AttrakDiff’, which provides an overview of a user’s subjective evaluation of the usability and appearance of a product.
Pragmatic Quality/Usability (PQ): the app’s usability was evaluated at an average of 5.9, with very little deviation between individual responses; this is an above-average result.
Attractiveness (AT): the general attractiveness of the app, and thus a general assessment, was also generally highly rated at 5.9. In this area however individual responses varied from 5 to 7.
Hedonic Quality (HQ): hedonic quality, reflecting the needs of the user, was assessed as an average of 5.5 and thus this area has room for improvement. HQ consists of the two elements HQS and HQI:
Hedonic Quality – Stimulation (HQS): HQS was rated above average with an average value of 5.2 and with some deviation between testers. There is room for improvement in this area of stimulation/motivation.
Hedonic Quality – Identity (HQI): HQI was rated at 5.7, above average and higher than the HQS score. There was some deviation in users’ evaluations. The app can demonstrably convey its identity to the user.
Summary
● All testers expressed a positive overall impression of the app, and particularly evaluated the graphics as appealing.
● The structure of the app, and its categories and subcategories, were immediately understood and all users could navigate without problems.
● As indicated by the above-average evaluation results, the HbbTV app ‘Täter Opfer Polizei’ was seen as an interesting and engaging means of delivering additional material to accompany broadcast programmes.
Content Creator Testing
This builds upon the initial findings document (D3.2), incorporating the changes and suggestions described within. As described in detail in the Test Plan v2 (D3.4), this phase of content creator testing uses a functioning version of MPAT, and focuses on capturing feedback on the use of the tool to create and build interactive television applications.
As with the consumer testing, this section is partitioned according to each participating partner. Each subsection contains one or more distinct tests, each described in terms of its specific setting and procedure, together with a brief overview of its participants.
Following this, a set of issues and comments is outlined, grouped by broad topic. These are derived from the full transcript, which is not included for reasons of both confidentiality and brevity, but can be supplied on request. These aspects are then summarised in a final section, which draws them together into a concise list to be used in a later prioritisation process.
ULANC
The MPAT Creators user-test recruited participants from two SMEs based in Lancaster, North West UK, working across design and video marketing. The group was made up of one male and one female participant, aged 35 and 36, who worked in director or senior management positions within their organisations. The names of the organisations have been anonymised.
Organisation One
The test took place at the company’s offices. The facilitator, a Lancaster University Research Associate
on the MPAT project, conducted the test and a brief introduction to HbbTV technology and the MPAT
project was given by a member of the MPAT team. A series of short pre-interview questions were asked
to gauge experience, knowledge, etc. These are included in the appendix. A developer was on hand to
troubleshoot any problems that may occur. The subject used an Acer touchscreen laptop for the test
running Windows 10, with Firefox (with HbbTV plugin), Chrome, Word and Icecream screen recorder all
open and ready for use. The Chrome browser had an instance of MPAT open on the dashboard.
The user-tests were audio recorded and screen recorder software was used to capture participant
actions. The series of tasks, as given to participants, are included in the appendix. Throughout the test
clarifying questions were encouraged. Each session was then transcribed by the facilitator at a later
date.
Concept and Workflow
● The participant found the relationships between the component manager, pages, page layouts and the navigation model extremely confusing.
● The participant found it instinctive to attempt to move images from within the page itself, rather
than on the page layout.
● The participant attempted to insert an image from within the WYSIWYG editor.
● The participant required extensive instruction to understand the relationship between the page
and a page layout, before they could move an image within a page.
● The participant was left uncertain as to whether their actions had been saved after pressing the
save button, and subsequently lost work as a result.
Styling
● The participant was unable to select or apply a font to the application, or parts of the
application, from the WYSIWYG editor.
● The participant was unaware of hex values, or RGB, and therefore had no strategy for adding a
background colour.
● The participant was unable to self-select a font size as a unique instance or to apply font sizes
to different header levels.
● The participant was unable to select ‘Normal’ text after selecting a header level from within the WYSIWYG editor.
● The participant was able to select text styles, but was offered a number of features within the
same drop-down menu that they did not understand.
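One finding above was that the participant was unaware of hex values or RGB, and so had no strategy for setting a background colour. For clarity: a hex colour code such as #FF8800 is simply the red, green and blue channel values (0-255) written in hexadecimal. The sketch below illustrates the conversion; it is illustrative only, and not a helper that MPAT itself exposes.

```python
def hex_to_rgb(code: str) -> tuple:
    """Convert a hex colour such as '#FF8800' to an (R, G, B) tuple."""
    code = code.lstrip('#')
    # Each colour channel is two hex digits: RR, GG, BB.
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r: int, g: int, b: int) -> str:
    """Convert R, G, B channel values (0-255) to a '#RRGGBB' hex code."""
    return f"#{r:02X}{g:02X}{b:02X}"

print(hex_to_rgb("#FF8800"))    # (255, 136, 0)
print(rgb_to_hex(255, 136, 0))  # #FF8800
```

A colour selection chart, as later added to the Customizer, hides exactly this conversion from the user.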
Externalities
● The participant experienced some issues with terminological differences between the manual
and the application.
● The participant was confused by the need to preview in Firefox by first manually cutting and
pasting from within Firefox.
● The participant repeatedly returned to the WordPress dashboard to orient themselves despite it
having no MPAT content.
● The participant preferred to view the manual and application content side by side as a constant
reference.
Functionality
● The icon sets relating to the component editor were difficult for the participant to understand.
● The participant was unable to complete the link task as the link target had no dropdown menu
to draw in the relevant pages.
Role of Application Manager
● The participant expected drag and drop functionality to reorder pages where they were created
and stored rather than in the MPAT application manager.
● The participant found the process of publishing the application confusing and did not complete
the task.
Summary
● The save function needs to give explicit feedback to the user.
● The end-to-end process of building an MPAT application is not clear. It appears to be intended
for multi-person teams, with distinct roles and functions. However, this does not map well to
small organisations with small teams, or even a single person, involved in the whole application
concept, design and creation process.
● Unnecessary features in the WYSIWYG editor should be removed or disabled.
● CSS for background colours, font, size and style should not be set at page level, but at
application level, with overrides allowed at the page level.
● A single set of terms should be established across the manual and the application.
● Users require reference material to be always available while exploring the application.
● Firefox should be set to open MPAT preview requests automatically.
● Elements extraneous to the MPAT application should be removed from the dashboard.
● Iconography should be legible and should support the terminology used.
● Page reordering should be implemented where pages are being created.
● The publication process needs to be transparent from the outset in the application manager.
Organisation Two
The test took place in the InfoLab, Room KBC Boardroom on B Floor, at Lancaster University. The
facilitator, a Lancaster University Research Associate on the MPAT project, conducted the test and a
brief introduction to HbbTV technology and the MPAT project was given by a member of the MPAT
team. A series of short pre-interview questions were asked to gauge experience, knowledge, etc. These
are included in the appendix. The subject used an Acer touchscreen laptop for the test running
Windows 10, with Firefox (with HbbTV plugin), Chrome, Word and Icecream screen recorder all open
and ready for use. The Chrome browser had an instance of MPAT open on the dashboard.
Participant information sheets were read and consent forms read and signed by participants before
each user-test. The user-tests were audio recorded and screen recorder software was used to capture
participant actions. The series of tasks, as given to participants, are included in the appendix.
Throughout the test clarifying questions were encouraged. Each session was then transcribed by the
facilitator at a later date.
Concept and Workflow
● The participant found the relationships between the component manager, pages, page layouts and the navigation model confusing.
● The participant was left uncertain as to whether their actions had been saved after pressing the
save button.
● The participant found the component manager boxes that automatically populate the page small
and confusing.
● The participant attempted to insert an image from within the WYSIWYG editor.
● The participant felt that the component manager hid its functionality.
Externalities
● The participant made assumptions based on MPAT being created in WordPress.
Functionality
● The participant requested prompts to act and confirmations of their actions.
● The icon sets relating to the component editor were difficult for the participant to understand.
● The crop function on images did not work.
● The participant attempted to reorder pages through the parent menu.
● The participant inserted a video but did not scroll down to the preview section, and so was
unaware they had successfully done so.
Role of Application Manager
● The participant expected drag and drop functionality to reorder pages where they were created
and stored rather than in the MPAT application manager.
Summary
● The save function needs to give explicit feedback to the user.
● The purpose of the component manager needs to be highlighted to the user.
● The end-to-end process of building an MPAT application is not clear. It appears to be intended
for multi-person teams, with distinct roles and functions. However, this does not map well to
small organisations with small teams, or even a single person, involved in the whole application
concept, design and creation process.
● MPAT should follow WordPress’ conventions where possible, as using it as a base for MPAT
suggests certain ways of working to the user.
● To remediate uncertainty the user requires prompts to act and confirmations of their actions.
● The preview function needs to be foregrounded.
● Page reordering should be implemented where pages are being created.
RBB
In D3.4, we had outlined a series of testing phases to be organised by RBB. The tests were intended to lead to improved functionality and usability of MPAT. The tests served as input to a user handbook to support the use of MPAT in a professional environment.
Adapted Test Plan
The plan outlined in D3.4 had scheduled three phases with different tester groups. During the testing period, however, the MPAT editor was undergoing continuous improvement and further development, which made a meaningful evaluation rather difficult.
The test concept has been adapted as follows:
● Phase 1: In March 2017, professional users/editors from MDR in Leipzig, Germany, received access to the MPAT editor. They carried out evaluation tests with the goal of assessing the integration of MPAT into a regular service, drawing on their previous experience with integrating new apps. Content editing and infrastructural questions were equally in focus during the evaluation. The results of this testing were discussed in a telephone interview on 31 March 2017.
● Phase 2: As part of the preparation of the actual pilot of RBB’s ‘Täter Opfer Polizei’ (24 May - 6 June 2017), internal RBB editors evaluated MPAT from the perspective of potential integration into editorial workflows. Results from earlier content creation tests and the implementation of pilot apps have been evaluated and integrated.
● Phase 3: In June 2017 the team leader of the participants in Phase 1 invested several hours in testing the MPAT Editor Tool and gave feedback on requirements and constraints of a potential integration of the tool in their production chain. The results of this were discussed in a telephone interview on 8 June 2017.
Phase 1
Participants completed a questionnaire about their habits in media usage and were asked to complete various tasks using the MPAT handbook, in order to identify strengths and weaknesses of the MPAT Editor Tool and its documentation.
The first Content Creator test group consisted of two editors from Mitteldeutscher Rundfunk (MDR) - a sibling regional broadcaster in the ARD network - both from MDR’s ‘Interactive Desk’ team. As part of this team, they are responsible for all aspects of social media applications, as well as for the selection, testing and integration of editorial tools for use in content production within MDR, such as Pageflow. This team tests the suitability and relevance of new apps and supports their integration into the MDR workflow. These participants were thus the ideal target group to evaluate the usability and likely integration of MPAT into regular broadcast services.
The tests were carried out individually by both MDR editors following a teleconference in which they received a detailed introduction to MPAT, along with full supporting documentation and information regarding the goals of testing.
Phase 2
The editors of the online websites complementing RBB’s weekly TV show ‘Täter Opfer Polizei’ were involved in preparing the content of the HbbTV app for the same programme. They received training beforehand and continuous support during the content editing phase in May.
All comments made during training and support were recorded in writing and communicated to the MPAT developers. After the first pilot phase, there was a group interview between RBB’s MPAT project team and the two editors who had worked with the Authoring Tool. The focus of this interview was on workflows rather than usability, as usability details had already been communicated before the interview.
Phase 3
The Head of Department at MDR’s Multimedia Production had been given access to a dedicated test instance and the most recent version of the user manual, but did not receive any training or introduction to the tool. This ambitious decision was deliberate: it was meant as a real-life test of the quality of the user manual, alongside the evaluation of the tool itself.
Outcomes
The overall assessment of the MPAT system, including its authoring and output quality, varied between testers and test phases, with a tendency towards rising acceptance.
The results of the tests consisted of direct comments and suggestions for improvement of the MPAT editor itself. Additionally, there were a smaller number of comments regarding the handbook. All comments and suggestions have been gathered below.
Some of these comments and questions had, in fact, already been addressed and highlighted in the MPAT handbook, such as issues regarding optimal picture size for background images.
Our conclusion was that various issues were dealt with in the handbook under the wrong headings; the above query regarding optimal picture size was in fact included within the ‘General Information’ section of the handbook. It was therefore decided that such fundamental information should be dealt with within the relevant component section of the manual or at least linked from there.
Editor Interface & Usability
Following testing, various suggested improvements have already been integrated into MPAT and are now available for further evaluation. A selection of the most important remarks, together with responses on their current status, is given below:

Usability (general): All in all, the use of the Authoring Tool requires a lot of training and/or support. In this state it is not ready to be used by content editors with limited technical competences and flexibility.
Response: Over the various test phases, feedback has become more positive, most probably as a result of the multiple changes triggered by the earlier tests.

Page Layout: Positioning and working with display boxes was very difficult; improved editing functionalities were requested; use of page layouts for other apps was expected/requested.
Response: Grid visualisation and improved handling (readable information on size and position) have been realised; Page Layouts are still not available across applications.

Editing Pages: It would be very useful if the editing of styles and graphics could be separated from content editing. This would ensure that content editors do not harm predefined styles, e.g. by copy-pasting content from word processing software.
Response: The new Page Model paradigm is an essential step in this direction.

Navigation: The options for navigation and for changing the page sequence were not clear.
Response: The Application Manager is not easy to find; changes have been planned.

General User Feedback: Feedback during ‘save’ would be helpful.
Response: There is a notification now.

Colours: Colour editing is currently only possible using HTML coding. A colour selection chart would be more user-friendly.
Response: Such charts are now available in both the Customizer and the Component Style Editor.

Graphics: Functional adaptations to graphics (i.e. scaling, cropping etc.) should be accessible via the media library. Information about optimal graphic size should also be included here.
Response: Such adaptations are possible at the component level; this is regarded as a feature rather than a bug. The optimal image size is defined by the box size in the Page Layouts, but these parameters are not visible in the Page Editor.
Participant feedback has been partially integrated in parallel with the testing phase (e.g. the improved handling of page layouts and boxes), while other points, such as tool-tips and additional information about various components, have not yet been realised.
MPAT in Use
In addition to participants’ responses to the MPAT editor, questions of how and when MPAT could be integrated into MDR’s regular services were a major topic. As the tests were primarily aimed at the creation of HbbTV applications, issues of integration had not been included in the tests and questionnaires.
Issues and questions about integration of MPAT into regular service nonetheless arose during testing. These issues included:
● How can a connection to existing but different CMS be flexibly managed?
● How can data security be ensured?
● How can collaboration between various systems and departments be achieved? What kind of account management would be needed to achieve this?
● How could quality control, both of technology and of content, be achieved?
● What kind of deployment environment would be necessary for use of MPAT in regular broadcast services?
These queries apply to every environment in which MPAT is to be used. New technical developments and requirements are also significant to these questions. Their clarification is central to the successful integration of MPAT into the production environment, and this aspect was incorporated into these tests.
Several general requirements and requests regarding such a tool were mentioned:
● A web-based system outside the broadcaster’s responsibility would be desirable - provided that the servers be placed and managed in the EU.
● (Semi-)automatic content updates would be highly desirable. Although this goes way beyond MPAT’s original scope, interest in such a solution seems to be strong.
Findings Summary and Prioritisation
In this section, we summarise the findings from both the consumer and content creator testing. We also draw out a series of aspects to be considered important for improving the usability of MPAT. These are then prioritised in accordance with anticipated impact on overall user experience, with some consideration for the ease of the modification necessary.
Consumer Testing
FIAT 124 Spider Application Summary
Common amongst all participants is the expectation of experience similar to that found on both web-
based environments, as well as mobile-based platforms (such as smart-phones or tablets). If this
expectation was broken in some way, then participants were keen to highlight the difference in
interaction. Nonetheless, the majority of participants were comfortable using the application, despite a
lack of previous experience with such applications. The remaining issues are grouped and summarised below:
Second Screen Interaction
Although relatively basic in its premise, the requirement for simultaneous use of a second screen in
addition to the remote controller was identified as problematic for some users. The general perception is
that content proposed on the main screen, if judged interesting, should immediately enable exploration
through the second screen.
Navigation
Specific to this application and its presentation is that participants wanted a more fluid navigational
interaction based on the use of arrows, more similar to a website experience.
On a different note, any interaction proposed through a call to action (red dot) should be more explicit (e.g. by adding a text or image) and/or last for a longer time. This is present only when the application is
launched from a live stream (as with the MEDIASET tests), and not independently (the case for the
ULANC testing).
Band Camp Berlin Application Summary
As with the FIAT 124 Spider application, there was a similar expectation regarding interaction, driven by previous experience with web and mobile-based applications. Participants were often quick to point out if this expectation was broken in any way – clearly this is the benchmark for MPAT applications moving forward. Despite this, participants could see a similarity to existing interaction techniques, and quickly adapted to the limitations posed by a remote control. The remaining issues are grouped and summarised below:
Video Playback Control
There was no clear way for participants to precisely control video playback, including exiting. This is
particularly problematic if the content duration is significant, as users may want to seek to various points
in the video. The video player needs to indicate the range of playback available, as well as other
possible functionality. This should be through clearer hints and feedback to users.
Accessibility
The level of visual differentiation during navigation was an issue for some participants. Greater differentiation could be achieved through colour or size, and would be key in catering for visually-impaired users. This is in line with the standard practice of catering for the maximum number of potential users.
Navigation
There was a distinct lack of navigational instructions. This was only an issue if the participant was not familiar with such TV-based applications (a small, but nonetheless important, group). It suggests that these applications may require a learning phase, and are currently somewhat unintuitive. However, this was not common amongst all participants; perhaps opportunistic or optional assistance is required.
Stability
During the usability testing, the application itself would fail, including video playback failing and the application crashing both completely and unpredictably. Users expect a level of experience and reliability aligned with that of websites and smartphone-based apps; such failures directly violate this expectation and prevent users from experiencing the app to its fullest.
Prioritisation
Given the aspects identified above, we assign them both an importance (in regards to the expected impact on overall user experience) and an anticipated difficulty (in terms of overall time and effort necessary to remedy the issue); this is shown in the table below. We elaborate on the level given to each in the two tables further below.
Aspect Importance Anticipated Difficulty
Second Screen Interaction Low Moderate
Navigation Moderate Low
Video Playback Control Moderate Low
Accessibility High High
Stability High Moderate
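The table can be read as a simple ordering rule: address the highest-importance aspects first and, among equals, the easier fixes first. A minimal sketch of this ranking follows; the ordering rule itself is our illustration, not one prescribed by the deliverable.

```python
# Levels mapped to numeric ranks for sorting.
LEVELS = {"None": 0, "Low": 1, "Moderate": 2, "High": 3}

# (aspect, importance, anticipated difficulty), as listed in the table above.
aspects = [
    ("Second Screen Interaction", "Low", "Moderate"),
    ("Navigation", "Moderate", "Low"),
    ("Video Playback Control", "Moderate", "Low"),
    ("Accessibility", "High", "High"),
    ("Stability", "High", "Moderate"),
]

# Highest importance first; among equals, the lower-difficulty fix first.
ordered = sorted(aspects, key=lambda a: (-LEVELS[a[1]], LEVELS[a[2]]))
for name, importance, difficulty in ordered:
    print(f"{name}: {importance} importance, {difficulty} difficulty")
```

Under this rule, Stability ranks ahead of Accessibility despite equal importance, because its fix is anticipated to be easier.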
Importance Legend
Importance Description
None Does not require any further action.
Low Does not impact core functionality.
Moderate Impacts core functionality, but is still usable without modification.
High Directly impacts user experience, preventing users from accessing or using the application as intended.
Anticipated Difficulty Legend
Anticipated Difficulty Description
Low Minimal effort required to fix.
Moderate Non-negligible amount of time and effort required, but still achievable within the scope of the project.
High Significant amounts of time and effort will be needed, including changes to the core of the platform and/or the underlying technology and its specifications.
Content Creator Testing
Summary
In this section, we take the findings presented in the previous individual usability tests, and attempt to group them. These are then prioritised in the following section.
WordPress
MPAT is based upon WordPress. This provides an air of familiarity to users. However, if MPAT breaks
with WordPress convention, this can lead to confusion and a poor user experience, as users are
expecting a certain way of working which may not hold true. It would therefore be beneficial to disable, remove, or hide unused features of WordPress, as these cause unnecessary confusion and directly impact the usability of MPAT.
Workflow
It is clear that the current workflow needs to be examined; it is not intuitive, and requires extensive documentation before usage is even possible. It is also not sufficiently generic to cater for the different user groups at which MPAT is aimed; accommodating the various permutations of team size, roles, experience, background, etc. is key to achieving true usability.
Page Ordering
There are specific issues about the reordering of pages, especially in the ‘Page Flow’ navigational
model. It should likely be moved to the page creation pane, rather than a dedicated panel.
Editing and Layout
The background colour, font, size and style should not be set at page level, but at application level, with
overrides possible at the page level. Style modification should be simplified: colours, in particular, can currently only be set using HTML colour codes, and a colour selection chart would be more user-friendly. Positioning and working with component boxes is unintuitive and complicated. This should be
in-line with industry standards, de-facto or otherwise. Similarly, image and video related actions (i.e.
scaling, cropping etc.) should be accessible via the media library. Information about optimal graphic
size should also be included here. Default layouts contain exceptionally small default components;
these lead to confusion at an early stage for novice users, especially when they are not big enough to
contain the full icon set – it is not obvious that they are indeed components.
Documentation and Help
When the save function is used, it needs to provide explicit feedback that it has indeed worked. This is
particularly problematic due to the critical nature of the operation, and its pivotal role in application
development. Related to the workflow mentioned above, the purpose of the component manager needs to be highlighted to the user. If documentation and/or interaction is not intuitive, then this uncertainty needs to be remediated through the use of prompts and confirmations to ensure
intended actions are taken. The terminology and iconography used across the application and manual
should be consistent. In its current state, users require reference material to be always available while
exploring the application - this should be integrated into the interface itself, links provided within the
interface (specific if possible), or smaller tooltips present alongside individual settings/functions.
Prioritisation
As with the consumer findings, we attempt to assign a priority to the various issues identified above. This includes assigning both an importance (in regards to the expected impact on overall user experience) and an anticipated difficulty (in terms of overall time and effort necessary to remedy the issue); this is shown in the table below. We elaborate on the level given to each in the two tables further below.
Aspect Importance Anticipated Difficulty
WordPress Low Low
Workflow Moderate Moderate
Page Ordering Moderate Low
Editing and Layout Moderate Moderate
Documentation and Help High Moderate
Importance Legend
Importance Description
None Does not require any further action.
Low Does not impact core usability, but assistance and clarity may still be sought by novice users.
Moderate Impacts usability, but is still usable with extensive use of documentation and assistance.
High Directly impacts user experience, preventing users from accessing or using certain features and/or achieving the intended results.
Anticipated Difficulty Legend
Anticipated Difficulty Description
Low Minimal effort required to fix.
Moderate Non-negligible amount of time and effort required, but still achievable within the scope of the project.
High Significant amounts of time and effort will be needed, including changes to the core of the platform and/or the underlying technology and its specifications.
A. Annex
ULANC Consumer Evaluation – Band Camp Berlin
ULANC Consumer Evaluation – FIAT 124 Spider
ULANC Creator Evaluation – MPAT Application
ULANC Creator Evaluation – MPAT Pre-Questions
It was likely that the invited testers had participated in earlier HbbTV app testing and thus were familiar with the rbb Innovation Projects team; this was clarified at the outset in order to provide a relevant level of detail when subsequently explaining the testing to be undertaken.
The EU project MPAT has developed an authoring tool enabling programme editors to create small HbbTV apps to deliver additional content to accompany broadcast programmes. Users access these apps via the launch bar (known as the ‘Startleiste’). In this test, participants used the MPAT-developed ‘Täter Opfer Polizei’ app that accompanies the programme of the same name. There were two main areas of investigation in this user test – attractiveness and relevance:
● Is the app attractive and enjoyable to use?
● Does the app offer interesting additional content?
The test addressed important issues of content, scope and implementation of HbbTV apps. This helps us to evaluate the potential of the app as a means of delivering additional content, and also provides insight into the technical potential of the app as a content creation tool.
It was important to communicate to the testers that they were not being tested, and that we appreciated their efforts in helping us improve the app. In this context, there was no such thing as a ‘wrong answer’ – every response from a tester helps us to improve our service.
Testing Procedure
The test consisted of several sections and was attended by two team members: an interviewer and a note-taker. The test took about 60 minutes.
First, the tester was asked a few questions about his/her general use of media. This took a few minutes.
Then the tester was asked to use the app. We requested a number of tasks to be carried out and asked some general questions about the tester's perception of the app. This section took about 30 minutes. We were interested in honest comments, so positive and negative comments were equally welcome. Testers were requested to 'think aloud' as they used the app and to vocalise any frustration or irritation.
Use of Media
1. Do you own a Smart TV?
2. If so, how often do you use it?
a. daily
b. a few times per week
c. once per week
d. seldom
e. never
3. Which HbbTV apps do you use and why? (e.g. media library, weather app, etc.)
4. Do you use RBB’s internet content? This question refers both to the general RBB website
www.rbb-online.de as well as to programme-specific pages such as Praxis, Abendschau, etc.
5. Do you use other interactive apps to discover news (such as Facebook, Twitter, on-line
newspapers, Google News, etc.)?
6. Do you use a Smart phone, tablet or laptop in conjunction with your TV?
7. How often do other people ask you for technical advice about use of TV and internet?
8. How often do you ask others for technical advice about the use of TV and internet?
a. very often
b. now and then
c. not very often
d. never
‘Täter Opfer Polizei’ – The HbbTV App on Smart TV
I. Introduction
During the broadcast of the ‘Täter Opfer Polizei’ programme, the app can be accessed from the RBB launch bar. For the purposes of this test, we launched the app using an RBB Easter-egg code, i.e. the app was only accessible after opening RBB’s HbbTV Text and entering a PIN.
II. Task-Based Usability Test
1. Initial impressions and expectations
What do you see? Tell us what you think you might be able to do with the various on-screen
elements, and what you expect from various menus.
Initial impressions – in reality
Using the remote controller, explore the app and its functionalities. What happens? Does it meet your
expectations? Please speak aloud as you explore.
Play video of the most recent episode
Begin playback of the most recent episode on the TV.
Select and deselect full screen mode
You can watch the video in full-screen mode. Select full screen, then select the normal view and
return to the main menu.
The ‘Prävention’ Area
Open the ‘Prävention’ area. You will notice a selection of images on the right side. What do you think
is their significance? Can you navigate using these pictures?
Select, play and pause specific video
Open the ‘Cartoon’ section and play the ‘Nicht Gewonnen & Zerronnen’ video. Advance the video,
pause it, and then return to the main menu.
Accompanying text, text navigation
Open the ‘25 Jahre Kriminalreport’ section and select ’Der Tunnelraub von Steglitz’. Read the last
sentence in the report.
Data protection and opt-out
The app collects user data for statistical reporting. Please turn off this functionality.
Return to the main menu and select ‘Datenschutz’ with the numerical buttons of the remote controller.
III. General Questions on the TOP app (10 Minutes, based on Likert Scale)
Please rate on a scale from 1-6 (1 = Does not apply at all, 6 = I fully agree).
1. The use of the app was immediately clear to me.
(Experience of learning effort)
2. The app worked properly. (Experience of Errors)
3. I always knew where I was in the app. (Navigation)
4. The menu captions were immediately clear.
5. The graphics were clear and understandable.
6. The app offered interesting additional content to the current programme.
7. I would prefer to watch the programme using the app rather than the Mediathek.
8. How likely is it that you would recommend this app to a friend? (0 = unlikely / 10 = highly likely)
IV. MiniAttrakDiff General Evaluation of App
Here is a collection of word pairs referring to the TOP app. They represent extreme evaluations of particular aspects of the application. Select a position between the two extremes that describes your experience.
Don't spend too long thinking about it – enter your spontaneous evaluation. Even if you think a word pair is not particularly relevant to your experience, enter an evaluation anyway. Remember that there are no 'correct' or 'incorrect' answers – it is your personal view that we are interested in.
Glossary
CA Consortium Agreement
CoA Coordination Agreement
DoW Description of Work
EC European Commission
IPR Intellectual Property Rights
NDA Non-disclosure agreement
PO Project Officer
QA Quality Assurance
R&D Research and Development
WP Work Package
Partner Short Names
Short Name Name
FRAUNHOFER Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. (DE)