Do you trust me?
Experimenting on the creation of dialogue in recommender systems

Tobia Marconi

Master of Science in Digital and Interaction Design
Author: Tobia Marconi
Academic year: 2018-19
Supervisor: Margherita Pillan
Co-supervisor: Laura Varisco

To my grandfather Giorgio, for being an invaluable model of life.

Contents

Abstract
Introduction

Part 1 - Literature review
1 - Legit risk or unjustified anxiety? Trust issues in technology
1.1 - Technology is dangerous, don’t trust the system
1.2 - Nobody’s spying on you, trust the system
1.3 - User awareness is key
2 - Unveiling the black box. How recommender systems work
2.1 - The need for filtering: overcoming information overload
2.2 - A brief history of recommendation agents
2.3 - How do they work? A taxonomical approach
2.4 - Present and future of RS: the application of Artificial Intelligence
2.5 - Stakeholders in recommendation and Ethics
3 - Do you trust me? User experience in recommender systems
3.1 - Beyond algorithms: user-centric evaluations
3.2 - Trustful relationships last longer
3.3 - Dialogue for trust: self-disclosure and reciprocity
3.4 - User control
3.5 - Recommender transparency

Part 2 - Research
4 - Establish a dialogue. Goals of the research
4.1 - Research through design
4.2 - Goals and expectations: research questions
5 - Design with the users. Set up a methodology
5.1 - Identify boundaries of research: entertainment services
5.2 - Design thinking & User-Centred Design
5.3 - Research roadmap
5.4 - Address research questions
6 - Building trust. Going through the process
6.1 - Analyse the state of the art
6.2 - Understand users’ mental models
6.3 - Define the elements of interaction
6.4 - CoDesign with the users
6.5 - Prototype, test, iterate

Part 3 - Output
7 - Dialoguing systems. Design guidelines for trustworthy recommendations
7.1 - Dialogue is all about balance
7.2 - There’s more than just recommendations
7.3 - To each their own asset

Conclusions
Bibliography
Acknowledgments

Appendix A - Case studies analysis
Appendix B - Mental Model Diagrams
Appendix C - Script of the focus group


Abstract

Trust in technology is tricky: it exposes people to risks when it is too high and leads to missing important opportunities when it is too low. Reciprocal awareness through communication seems to be a good solution to guarantee balance in this situation.

Recommender systems are information retrieval technologies that support the personalisation of content in digital services; with the advent of big data, they became ubiquitous as a way to challenge information overload. With their complexity and the introduction of machine learning, they are the perfect example of an opaque technology (a black box). Their application raises several ethical issues, and recommendations are often a source of privacy concerns and of the perception of being spied on, making them the perfect environment in which to explore the theme of trust in technology.

Recommender systems have received a lot of attention on the technical side and on the optimisation of algorithms, while in recent years little has been done on the side of human-computer interaction. For this reason, it is crucial to explore what user experience design can do to contribute to the development of this kind of technology.

Starting from an established user experience evaluation framework for recommender systems, this thesis introduces the concept of Dialogue, based on the acknowledged concepts of Transparency and Control. The research aims at experimenting with this new concept and exploring its effects on trust towards recommender systems, in order to demonstrate its efficacy and legitimise its application. The research is conducted through the means of design and its processes; its goal is to understand how to evaluate the quality of the key concepts in existing recommender systems or during the design process, and to identify a set of good design patterns to implement Dialogue in interactive recommender systems. Based on the results of this experimentation, the output of the thesis is a set of guidelines for the design of trustworthy recommender systems based on the concept of Dialogue.

Keywords: Trust, Recommender systems, User experience, Dialogue, Transparency, Control, Human-Computer Interaction, Awareness.



Introduction

Having been a tech enthusiast since I was a child, the urge to address the following topics came from witnessing how many people around me were scared of technology, and how many opportunities they were missing for lack of the necessary interest or experience.

The idea developed from a concept by Kevin Kelly in his book What Technology Wants:

“...I’ve somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us. I call it the “technium”. The “technium” extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types. It includes intangibles like software, law, and philosophical concepts. And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections.”

(Kelly, 2014)

Kelly describes this ensemble of technology as if it were an organic, autonomous kingdom of species that follows an evolution driven by rules similar to the ones that guide the evolution of animal species. These rules sometimes seem very similar indeed if, for example, we compare the slow, incremental evolution of organic creatures with the concept of the adjacent possible from Steven Johnson (2011) in his book Where Good Ideas Come From: The Natural History of Innovation. From the title, we can guess how Johnson makes a similar analogy between innovation and natural evolution. The idea is that the technium has an intrinsic push towards innovation. Technology is like an organism, adapting and evolving under this intrinsic push and human agency: an organism that has just crossed the threshold of autonomy and independence, with processes that fall outside human comprehension itself.

From this premise, it is very easy to envision near futures where technology is not a tool under our control anymore, but rather a collaborator that we need to trust.

The artist Sougwen Chung is already living in this future through her artistic process. Chung creates paintings and drawings in collaboration with machines. Her first attempts explored how a machine mimics a human, but soon she had to face the fact that, because of the limits and errors of the machine, it was not only the technology that followed human action: as she needed to adapt and compromise with this unique imperfection, she was following the machine as well, and the machine became an artist and collaborator rather than just a tool. Her experiments evolved by exploring the concept of memory (by training an artificial intelligence on a vast collection of her previous works) and then by expanding the idea of collaboration, using groups of machines connected, through computer vision, with the human behaviours of an entire city. A collaboration of the entire human-technology community of New York became the author of this piece of art.

It is very interesting to look at technology as a non-human collaborator to educate ourselves about possible futures. The work of Sougwen Chung shows how, by collaborating with machines and technology, we can learn from each other, and how important it is to know and understand each other. Unfortunately, while machines are continuously learning from us, as long as people continue to see them only as tools, we do not learn from them at the same pace. Because of the level of complexity that technology is reaching, it starts to fall out of our direct control; it starts to feel like an unknown stranger. And humans are usually scared of the unknown.

For these reasons, my opinion is that, to embrace the next wave of innovation and benefit from its opportunities, we should stop looking at technologies as slaves to give orders to, and approach them more as collaborators, understanding and listening to them.

In order to achieve this, we need to develop effective tools for communicating with technology.

My intention with this thesis is to contribute to the development of such tools and methodologies, by working on the concepts of trust and dialogue applied to human-machine interaction.

The technologies at stake for this experimentation and exploration of trust and dialogue are recommender systems. Recommender systems are part of information retrieval and personalisation technologies. Their algorithms are able to collect user preferences and characteristics and provide personalised pieces of information. In this way, the user faces only a tailored selection and is not overwhelmed by the search for what they like or need among the endless variety of information present in today’s networks.

The choice fell on this technology for several reasons. Recommender systems involve sophisticated and complex algorithms and often make use of machine learning and other applications of artificial intelligence, making them a perfect instance of the autonomous technologies mentioned before. They are very diffused, almost ubiquitous in digital services, and they influence the lives of many people daily, also involving a variety of ethical issues. As a result of these issues, like the ones related to privacy, they are a perfect case of application: the personalisation they provide is a major cause of the feeling of being spied on, due to the opacity and misunderstanding of their processes. Some services allow people to opt out of this kind of personalisation, and browsers have incognito mode, but as information keeps growing and the use of big data becomes a standard, doing without powerful information retrieval tools is not really an option anymore. It is essential to design so as to minimise the bad effects and improve the good effects of this technology, instead of trying to get rid of it. The issue should not be whether to have personalisation or not, but how to design it morally (Bozdag, 2015). This technology has a very wide and active research community and receives interest from both academic and business research. The academic literature is another strong reason, because it draws the limits and directions of further research in this field while guaranteeing a solid base of knowledge. Among others, the work made over the years by two authors, Yu Chen and Pearl Pu, on the user experience of recommender systems and its evaluation was crucial for drafting the main concept of the thesis.

The thesis is made of three parts. First, a review of the literature, to set the ground knowledge that supports the motivation (trust and technology acceptance) in chapter 1, analyse the state of the art of the medium (recommender systems) in chapter 2 and generate the main concept of dialogue in chapter 3. Second, the body of the research, showing the overall process from the definition of objectives (chapter 4) to the execution of the experimentation (chapter 6), passing through the definition of the methodology followed (chapter 5). Third, a collection of insights that synthesises the knowledge collected during the research in the form of design guidelines (chapter 7), to shape this knowledge in a way useful for further developments.

The goal of this research is to prove the efficacy of designing dialogue between a user and a recommender system in order to foster trust from the former in the latter. The expectation is that communication between human and technology can be improved through dialogue, by leveraging direct interactions between system transparency and user controls. Through this process, user trust in the system should increase, resulting also in more reliable recommendations and in the perception of a better user experience in general. I hope to give my own small contribution to the development of a paradigm of collaboration between humans and technology by providing a new direction for creating trustful interactions.


Part 1 - Literature review

1 - Legit risk or unjustified anxiety? Trust issues in technology

1.1 - Technology is dangerous, don’t trust the system
1.2 - Nobody’s spying on you, trust the system
1.3 - User awareness is key

The first chapter exposes possible issues concerning trust towards technology. There are two sides of the coin: on one side, placing trust in services and companies can lead to exposure to a variety of risks; on the other side, manifestations of distrust can lead to generalised cases of anxiety and paranoia, and lead groups of people to miss the opportunities coming from new technologies. The first part of the chapter will investigate the ethical issues around information technology and highlight the risks of personalisation and of the use of recommender systems, which will be further developed at the end of the second chapter (see 2.5). The second part will discuss the consequences of distrust and the development of the social-psychological phenomenon of Technophobia, focusing on cases and implications of Techno-paranoia (Khasawneh, 2018).

In conclusion, the chapter supports the importance of awareness in order to manage the trade-offs of trusting technology and to relieve issues on both sides of the coin.

Fig. 1 - Impact layers: a matrix describing the relationships of the elements discussed in this chapter; highlighted, the intersections that could lead to problematic consequences.

1.1 - Technology is dangerous, don’t trust the system

New technologies, in particular digital technologies that collect personal data and use the information extracted from them, raise several ethical issues due to the impact they can have on people and society.

Varisco (2019) formalised the impact layers (fig. 2) that can affect an individual’s identity when they use these technologies and exploit personal data.

The first layer affects awareness and impacts individual self-perception, because of the information that the system returns to the user from their data as visualisations, feedback or insights. The second is the layer of action: it involves the behaviour and activities of the user and how proactive technology and its reactions influence them. The third layer impacts the interpersonal relationships of the user with their network, because of the comparison with other people’s data or the personalisation of social interactions. The fourth refers to social agency: it affects participation, contribution to society, and the mechanisms by which users’ information impacts the way society develops.

Of course, it is hard to isolate the impact on a single layer. Impacts on the outer layers have repercussions on the inner layers, and impacts on the inner layers influence the agency a person could have on the information that affects the outer layers. Technologies activate mechanisms that intersect more than one layer, impact them, and put them in mutual influence.

Feedback that the system returns on a user action impacts the user’s judgement of that action and increases self-awareness about that behaviour; as a result, it will influence the behaviour of the user the next time they perform that action or similar ones.

A person’s public image strongly influences their social interactions, and ever more often online information does this by getting people to know each other in advance through personal information. This virtual copy of the person, made up of online personal information, can also impact users’ self-perception if they mirror themselves in this data doppelganger (Reppel and Szmigin, 2011). These phenomena can be stressed by the possibility of comparing personal data with data from other people, impacting the judgement of others and of oneself, one’s behaviour and social interactions. Also, the personalisation of services, based on the doppelganger, strongly affects freedom of action and behaviour by showing or suggesting only certain features; this can also involve the way we interact with our network and create bubbles (Bozdag, 2015) that affect our self-perception as well.

This influence on social interaction and self-perception, if it becomes structured, can affect society by creating social labels and new biases, or empowering old ones, based on the categories generated by online information and interactions. This social labelling then mutually influences relationships with others, behaviours and the perception of identity in relation to those labels. This influence on society can even mutate social values, based on the new social interactions and the judgement of labels. Data collected from people can drive the drafting of new policies that regulate actions and social interactions.

It seems evident at this point how the use of personal information in technology can impact individuals and society in a very pervasive and possibly powerful way.

Recommender systems belong to these technologies. They are at the base of personalisation for many services: they provide tailored content and affect the action layer by reducing the autonomy of the user. They base their suggestions on data doppelgangers and their comparison, and stimulate the creation of feedback loops that push the action layer towards particular preferences, creating filter bubbles and biases that impact the awareness and relationship layers because of self-mirroring and bias towards others. Personalisation can also lead to social manipulation and impact the layer of social agency for individuals or groups. The possible effects of the use of recommender systems raise a plethora of ethical issues that will be further discussed in the next chapter (see 2.5).

Recommender systems are not the only technological solutions that activate so many mechanisms of impact, and all of them raise several ethical discussions about the implications of different uses of personal information. It is essential to approach their use with critical sense, to avoid useless exposure to risks that can become severe, and to become informed users, responsible designers and ethical companies, in order to develop a good society with the help of these technologies.

Fig. 2 - Impact layers on the individual and intersection mechanisms (Varisco, 2019).

1.2 - Nobody’s spying on you, trust the system

“‘Paranoia’ is no longer simply a diagnostic label applied by psychologists and psychiatrists but has become a veritable sociological phenomenon.” (Aupers, 2012)

Conspiracy theories have become part of mainstream culture, from political events like the death of Lady Diana, JFK or the supposed cover-up of the 9/11 attack on the Twin Towers, to technological conspiracies like nano-chips put into vaccines for the ultimate control of the population, or the swine flu being bio-engineered in order to reduce the world population and constitute a New World Order.

Anxiety about technological manipulation and control of the masses is not historically new. One of the first cases dates back to 1797, when James Tilly Matthews accused a member of parliament in London because he thought he was involved in the operation of the “air loom”, a device able to use magnetic fluids and rays to directly control the thoughts of those under its influence. At the time, Matthews was locked up in an asylum in confinement for over a decade; in the twentieth century he would have been recognised as an early case of paranoid schizophrenia (Shullenberger, 2019), and today academics condemn conspiracy theories, debunk them as an exotic anomaly and portray them as a threat to modern rationality. Conspiracy culture is both the cause and effect of its normalisation: some people consider trusting authorities and official sources a sign of naivety, and political scandals or “real” conspiracies, like Watergate in the seventies or, more recently, the Cambridge Analytica scandal, strengthen the credibility of other theories. During the last decades, conspiracy theories have evolved from suspicion towards others, external groups that could threaten society, into theories about the “enemy within” (Goldberg, 2012): unknown and dark powers working within the framework of research institutions, corporate businesses, governance and the establishment (Aupers, 2012).

On the contrary, Aupers (2012) supports the idea that conspiracy culture is a manifestation of distrust embedded in the cultural logic of modernity. Maybe this is because “science depends not [only] on the inductive accumulation of proofs but [also] on the methodological principle of doubt” (Giddens, 1997): scepticism is part of modern scientific methodology and has supported distrust in scientific knowledge.

Technology, as a manifestation of modernity, helps this process in its own way. While mass media, together with journalism, are often distrusted as manipulative, some people consider the internet more democratic because it gives direct access to information. The internet became a platform for conspiracy theorists, who are usually prosumers, to deconstruct official knowledge and to read, negotiate and rewrite history as they develop an expanding patchwork of what “really” happened (Aupers, 2012), based on the principle that everything is or can be connected (Knight, 2002). This is the narrative of The Matrix (Wachowski and Wachowski, 1999), for example, where the entire reality is staged and secret entities control our lives and “nothing is what it seems”.

This feeling of distrust applies to technological solutions as well, in what Khasawneh (2018a) defines as Technophobia:

“...an irrational fear and/or anxiety that individuals form as a response to a new stimulus that comes in the form of a technology that modifies and/or changes the individual’s normal or previous routine in performing a certain job/task. Individuals may display active, physical reactions (fear) such as avoidance and/or passive reactions (anxiety) such as distress or apprehension.”


The study formalises this definition based on different factors that emerge from the results of the research and that contribute to building up the central concept. In particular, the first factor described is the most relevant for this discussion and is labelled Techno-paranoia:

“This study would define techno-paranoia as unjustified fear and mistrust that an individual form toward a technology that leads individuals to avoid that technology, their fear and avoidance of technology might not be supported with evidence or facts.”

It is crucial to notice how these definitions bring to the surface the issue of “avoidance” or, seen in other terms, technology acceptance. Another study by the same author (Khasawneh, 2018b) highlights the effects of Technophobia and Emotional Intelligence (Salovey and Mayer, 1990) on technology acceptance and proves the negative impact of Technophobia on acceptance.

What is more, Mordini (2007) argues that the development and acceptance of technology depend not only on scientific and sociocultural matters, but rather on the meanings that surround new technologies and the way society processes their introduction. Modern technology is developing without a cultural framework that gives it a sense other than mere utilitarian functionality. Frightening narratives, like the ones discussed earlier about conspiracy theories, become the primary way to include technology in a meaningful context, and fear becomes the primary emotion with which new concepts are integrated into user mental models.

There are several examples of mismatch between user mental models and actual technology functionalities, and the impact of this mismatch ranges from small frustrations in the user experience of some digital products, to cases of distrust about the genuineness of some functions, up to fear and conspiracy theories. This is the case of Spotify’s shuffle function, which nowadays is not wholly random anymore. Spotify developed this feature because the human brain is unable to perceive true randomness as random, since it is “programmed” to find patterns even where there are none (like conspiracy theories based on the idea that everything is connected). Many users were complaining to the service that the shuffle feature was not working as it should (Polácek, 2014). Another example affects recommender systems and has more substantial consequences. If asked, most people have had, at least once, the perception that someone is spying on their chats, or that the microphone of their devices is listening to them without permission, because of an advertisement received right after they talked about that specific thing to someone else. This particular mismatch happens because of the power of modern computation and the massive amount of personal data that those same people share with the web without even noticing, which is enough to predict their behaviour and interests to the point of giving them precisely what they are interested in at nearly the exact moment in which they become interested (Amer and Noujaim, 2019).

To counter feelings of vulnerability, the meanings surrounding new technologies should exploit more pleasurable emotions rather than fear. One possibility to support pleasure for novelty is wonder, a natural human reaction to the complexity and richness of reality. Wonder can be useful thanks to its suggestive power, which can easily elicit respect, fear and fascination, and can even end up as a tool for control; for this reason, it should instead be directed towards curiosity, which, by promoting information sharing and encouraging dialogue, diverts this power towards democracy (Mordini, 2007).

Scientists, policymakers and technology developers in general (including designers) should not leave the privilege of building meaning around technologies to conspiracy theorists or dystopian stories. However, they often reject narratives, considering them naive or misleading. Narratives allow gathering insights about what happens in people’s minds and are a useful tool for influencing people’s vision and acceptance of new technologies. By leveraging wonder and curiosity, positive narratives can surround technology with new meanings able to overcome fear and paranoia.

1.3 - User awareness is key

People experience a sense of freedom when they become aware that they have (or can have) power over reality. What is more, the mediation of this power is delegated to knowledge and free will. Fear, on the other side, is also able to become the perfect trigger for responsible actions (Tibaldeo, 2015). In order to avoid the risks of digital technologies (see 1.1), we must be aware of them and of the tools we have to protect ourselves from them. That same knowledge is necessary to distinguish between real and unjustified risks and to keep a safe distance from technoparanoia and fear (see 1.2). Awareness is vital to achieve an optimal level of technology acceptance and to allow people to take advantage of the opportunities offered by new technologies without being exposed to useless dangers.

Users’ lack of awareness about threats, the mismatch between perceived and real risks, and ignorance of risk-reduction measures and available protection mechanisms are a significant target for consumer protection and one of the biggest obstacles to technology acceptance (Najafi, 2012). A study made in Slovenia about the perception of cybercrime highlights that people are more aware of the risks exposed by news media than of those that will more probably threaten them. “Lack of understanding translates into inadequate security”: informing and educating users about technologies and the dangers they can encounter is a critical practice that has to become diffused and constant at all levels of society, so that “Internet users will know how to use this technology rationally and responsible, and will not be afraid of it” (Mesko and Bernik, 2011).

Other research about privacy concerns and social awareness confirms that privacy concerns impact negatively on the acceptance of technology, and that Internet literacy has the same inverse relationship with privacy concerns. The study demonstrates that technology-savvy users are prepared to “...deal with privacy-invasive technologies, customize their browsers or Internet applications, eliminate surreptitious software programs running on background, and keep up with the latest antivirus, antispam applications...”. Because of this knowledge, they can experience higher levels of control, resulting in much lower concern about their privacy, which in turn increases their technology acceptance. On the other side, social awareness (the level of awareness of the social and political processes that underlie technology) has a positive impact on privacy concerns. The results of the study support the fact that awareness, raised consciousness, and knowledge about technologies and their benefits and risks are fundamental factors for acceptance and voluntary use of those technologies (Dinev and Hart, 2005). From the same study emerges the importance of perceived control, further highlighted by the study of Arcand et al. (2007) about the impact on users of reading privacy agreement statements. They show how the awareness resulting from reading such statements increases concerns about privacy risks and reduces technology acceptance, but introducing a certain level of control improves the results on confidence, trust and acceptance. The results of the study remark the importance of enabling user control in order to empower users’ awareness and increase their confidence with technology. The concept of control is further investigated in chapter 3.

Awareness is a crucial factor to tackle both sides of the coin. It makes people responsible about the risks and flaws they can encounter when using technology. On the other side, it is a powerful tool to direct this attention towards real problems instead of wasting effort on technoparanoia. It allows users to pursue opportunities by giving them a broader sense of control, and to accept new technologies with greater confidence.

2 - Unveiling the black box. How recommender systems work

2.1 - The need for filtering: overcoming information overload
2.2 - A brief history of recommendation agents
2.3 - How do they work? A taxonomical approach
2.4 - Present and future of RS: the application of Artificial Intelligence
2.5 - Stakeholders in recommendation and Ethics

At the end of the twentieth century (Goldberg et al., 1992), when the advent of big data and information overload (Shenk, 1999) raised the need for better technologies for information retrieval (Buckland, 2017), recommender systems started to be developed.

Nowadays they are ubiquitous (Milano et al., 2019) and provide personalisation to people in many different services in the fields of entertainment, news, social media, e-commerce and so forth.

It is important to understand how they work and how they differ, in order to address the problem of opacity, to look at the future of the technology to intercept opportunities, and to learn which stakeholders and issues are involved, so as to consider them during the design process.

2.1 - The need for filtering: overcoming information overload

“The First Law of Data Smog: Information, once rare and cherished like caviar, is now plentiful and taken for granted like potatoes.” (Shenk, 1999, p. 27)

Right before the beginning of the millennium, David Shenk (1999), in the book of the same name, exposed his concerns about the effects of information abundance as Data Smog.

With a realistic perspective, he also considered the incredible benefit that humans would gain from the speed at which information was being generated, and he noticed how this phenomenon was a matter of decades, not only of the first years after the advent of computers and the Internet.

The invention of writing gave humans the ability to record, and overcame the limits of time and space that were intrinsic to gesture and speech as forms of communication.

Printing multiplied the ability to write and, with copies, increased the ability of writings to be distributed and to be present in more than one place at the same time. Thus, “space” was addressed even better.

Telecommunication then arrived, after centuries of people on foot, horseback or ship carrying information, giving the ability to distribute that same information over long distances in a surprisingly short amount of time.

All of these technologies enabled significant increases in the production of documents, and way before Shenk’s Data Smog people worried about the “information flood” in the nineteenth century, and later, in the twentieth, about the “information explosion”. Now, all of these phenomena are nothing compared to big data (Buckland, 2017).

Big data is data too big to be processed with traditional computing models (Manyika et al., 2011): large datasets with a volume, speed and variety so extended as to require specific computational methods to extract knowledge from them (De Mauro et al., 2016). Today people can outsource computing needs to online services, shifting the web to a collaborative model and generating data continuously, about themselves and others, in many different forms (Bozdag, 2015).

IBM reported that in 2011 we produced, every two days, the same amount of data created in the entire human history before 2003 (IBM, 2011). In the same year, McKinsey stated that 30 billion pieces of content were shared every month, contributing to the growth of the big data trend (Manyika et al., 2011). Since then, internet traffic has increased 23-fold between 2007 and 2017, and it will at least triple again before the end of 2022, also thanks to 5G connectivity, reaching the incredible amount of 150,000 GB per second (Cisco, 2019).

There is a but, and Shenk states it early in the book:

“When it comes to information, it turns out that one can have too much of a good thing. At a certain level of input, the law of diminishing returns takes effect; the glut of information no longer adds to our quality of life, but instead begins to cultivate stress, confusion, and even ignorance. Information overload threatens our ability to educate ourselves and leaves us more vulnerable as consumers and less cohesive as a society. If we’re going to make the most of this spectacular information revolution, we need to work hard to counteract these unintended consequences.”

(Shenk, 1999, p. 15)

Information overload is the backside of this incredible phenomenon. Since the appearance of humankind, the three steps of communication have been more or less balanced: production, distribution and processing have been synced with one another. Now this balance has been broken, leaving our mind’s processing ability in deficit (Shenk, 1999).

Technology, or the Technium, as Kevin Kelly (2014) calls the “...global, massively interconnected system of technology...”, evolves at a speed superior to the natural evolutionary pace of humans. Our capabilities to process information become insufficient to confront the changes, choices and challenges of modern life (Cialdini, 2009).

Data smog leaves us in a struggle of confusion and frustration (Shenk, 1999) and, because our minds are biologically limited, we get the feeling of being overwhelmed by the enormous number of choices and experience a sort of “bounded rationality” (Hilbert, 2012).

In his book, Shenk (1999) suggested facing Data Smog by being “our own filter”, but this was before the advent of “big data”.

Researchers in different fields where decision-making is vital, such as economics (Chen, 2019) and game design (Bertolo and Mariani, 2014), are used to the concept of analysis paralysis, generated by an overabundance of options. The quality of decisions relates positively to the amount of information only up to a threshold (Bozdag, 2015); past that point, if more information is provided, performance starts to decrease (Eppler and Mengis, 2004).

This concept is also present in user experience design as a principle: Hick’s law (1952) states that reaction time has a logarithmic relationship with the number and complexity of options. The more the options, the longer it takes the user to make a decision.
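In its common Hick-Hyman formulation (a standard statement of the law, added here for reference rather than quoted from the thesis), the relationship reads:

```latex
% Hick's law in its usual Hick-Hyman form (standard formulation,
% not spelled out in the thesis itself):
%   T    = mean decision time
%   n    = number of equally probable options
%   a, b = empirically fitted constants
T = a + b \log_{2}(n + 1)
```

The logarithm explains why each added option costs less time than the previous one, yet a catalogue with thousands of undifferentiated items still leaves the user with a long decision time.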

Even self-knowledge is psychologically influenced if the information is irrelevant or too detailed (Varisco, 2019). Making people worry about irrelevant knowledge could even lead to problems like control addiction (Mucko et al., 2005).

If we look at recent research in the field, we find that:

“The increasing amount of available data is raising questions about its usefulness, about the possibilities to extract valuable knowledge from it, and even concerns about privacy and surveillance.”

(Varisco, 2019, p. 202)

Problems come from technology, and so can solutions. Filtering information is a daily activity for everyone, and our brain eliminates redundant stimuli from our senses in a continuous filtering process called “sensory gating” (Cromwell et al., 2008). However, reviewing every available piece of information, as we said, would take too much effort or time; for this reason, we often delegate filtering to “gatekeepers” (e.g. journalists for news) (Bozdag, 2015), or we exploit technological aids that help the retrieval of relevant information, such as cataloguing in libraries (Buckland, 2017). Today we have access to technologies able to relieve cognitive overload by taking the role of “gatekeepers” or by helping people choose according to information and self-learning (Berman, 2016; Knight, 2017).

Thanks to personalisation algorithms, systems can tailor information to the user, and the artificial intelligence embedded in products and services collects data and is now able to address people’s needs (Varisco, 2019). Personalisation systems avoid overstimulation by recommending pieces of information customised for individuals (Bozdag, 2015).

This personalisation is possible for online services thanks to user models created on the basis of the knowledge that the system has of the user (Gauch et al., 2007), coming from the data shared, explicitly or implicitly, by the users themselves with the system. A more detailed description of the creation and use of these models is provided later in this chapter.

Nowadays some companies, such as Google, have started to give users the possibility to opt out of personalisation, to address privacy controversies. However, when people turn off filtering, they are soon oppressed by information overload (Bozdag, 2015).

2.2 - A brief history of recommendation agents

Throughout history, people have always referred to their peers or to experts to gather suggestions and help in decision-making about commodities and entertainment or, as we were saying, to “gatekeepers” (see 2.1) for news (Çano and Morisio, 2017). As digital information and big data raised the problem of information overload, recommender systems, the focus of this research, gradually took a share of this role.

The first experience with a similar kind of system appeared in 1992 at Xerox, with a system called Tapestry (Goldberg et al., 1992). Its purpose was to filter incoming emails, and it made use for the first time of a collaborative filtering solution, based on annotations made by colleagues, similar to the feedback mechanisms used in more recent services. They felt the need for this technology because of the increasing use of email and the will to avoid overload. In the description of the system, concepts like content-based filtering (see 2.3), the selection of content based on common qualities between two items, were already in use, and the term collaborative filtering (see 2.3) was coined at that moment, following the paradigm of word-of-mouth recommendation to enhance information retrieval. The innovation at the time consisted of considering the relationships between two or more documents in order to filter information. However, compared to the highly automated systems we have today, it still relied more on explicit feedback and more precise queries than on complex, proactive algorithms.

Fig. 3 - The first appearance of the term recommender system: a comic used by Xerox researchers to describe different stages of filtering, ending with the new “collaborative filtering” concept. (a) E-mail overload; (b) Distribution lists; (c) Conventional filter; (d) Collaborative filter. (Goldberg et al., 1992)

Other experiences appeared, but we must wait five more years, until 1997, to find the first appearance of the term recommender system, in an article by Resnick and Varian (1997) in which they recognise Xerox Tapestry (fig. 3) as the first instance of this technology. They describe recommender systems as decision-making tools in addition to information retrieval, highlight some examples and define their most common features in order to identify them as a new category.

From that moment, the interest in recommender systems rose consistently, and it will keep growing significantly in the future (Park et al., 2012). In 2016, Valdez et al. found more than nine thousand articles on the scopus.com platform alone, and now (December 2019) a similar query on the same platform gives more than twenty-two thousand results.

The topics of research evolve together with the technology, and every now and then systematic literature reviews track the work done in the past in order to suggest new paths to undertake.

In the last few years, besides the efforts on making more efficient algorithms, which still represent the majority of research, a growing base of studies has addressed HCI-related topics (Calero Valdez et al., 2016), and more are coming.

The industry as well has put much interest in this solution, and big companies in the digital services market have invested significant amounts of money in supporting research, both internally and from academic communities (Bennett and Lanning, 2007). Some of the most prominent digital companies in the world, like Amazon, Netflix, Spotify and Facebook, rely on their recommendation engines for the success of their services, and most multimedia library, e-commerce and news applications make use of them.

Personalised content has come to be taken for granted by the public, to the point that, in order to give medium and small businesses access to this complex technology, several open-source recommender systems have sprung up in the last few years, together with a growing market of SaaS (software as a service) solutions (Afify et al., 2017).

The above is an overview of what happened to this solution from its birth until recent years; expected developments are covered later (see 2.4). Before that, it is essential to understand how recommender systems are differentiated and how they work.

2.3 - How do they work? A taxonomical approach

Recommender systems are often black boxes producing recommendations like oracles: they expose only inputs and outputs and hide the process in between. This characteristic could induce users to form misinformed conceptual models, even though some of the models reflect easily understandable ones, like social word-of-mouth recommendation (Herlocker et al., 2000).

In order to understand the functioning of these systems and acquire the elements to transform black boxes into white ones, able to show and explain their behaviour, we must explore their complexity to learn what their characteristics are and how they differ from each other.

Based on a framework for the general classification of recommender systems, this paragraph proposes an in-depth analysis of the classification by personalisation method, as it is the most common in the literature and the one that best highlights the conceptual models behind a recommender system.

The framework for analysis and classification proposed by Manouselis and Costopoulou (2007) is a useful mapping of the elements that come into play in recommender systems, and even without a detailed explanation of all of them, it is enough to give an overview of the complexity of such systems.

This set of dimensions has three main categories: the rationale, the approach and the operation.

The rationale category deals with the goal of the system and has two dimensions.

Supported task refers to which of the user’s tasks the system supports. The values identified are:
- annotation in context;
- find good items;
- find all good items;
- receive a sequence of items.

Decision problematic indicates the problem the system aims to support, such as:
- choice;
- sorting;
- ranking;
- description.

The approach category is the most complex and refers to the actual logic and characteristics of the system; according to the most relevant approaches, it has three layers.

User model refers to how the system represents, generates and updates the user profile.

Representation is the method used to define the user. Some of the methods are:
- history-based models;
- vector space models;
- semantic networks;
- associative networks;
- classifier-based models;
- user-item rating matrices;
- demographic features;
- ontologies.
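As a toy illustration of the user-item rating matrix representation (hypothetical values made up for this example, not taken from the framework), three users rating four items on a 1-5 scale, with a dash marking items not yet rated:

```latex
% Hypothetical user-item rating matrix R: rows are users u_1..u_3,
% columns are items i_1..i_4, "-" marks a missing rating.
R =
\begin{pmatrix}
5 & 3 & - & 1 \\
4 & - & 4 & 1 \\
1 & 1 & 5 & - \\
\end{pmatrix}
```

Predicting the missing entries of such a matrix is, in essence, what the collaborative methods described later in this chapter do.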

Generation refers to the characteristics related to the creation of the initial user model, which may be:
- empty;
- manually provided by the user;
- completed according to the stereotypes the user belongs to;
- generated through a set of training examples proposed to the user;

and, eventually, to how the system learns this model from collected data, through:
- machine-learning techniques;
- clustering techniques;
- classification techniques.

Update of the model, if present, is represented by the method used:
- explicit;
- implicit;
- hybrid;

and by the techniques involved:
- manual updating;
- interaction;
- gradual forgetting of outdated information;
- natural selection techniques;
- time-specific intervals.

Domain model, similar to the user model, refers to the properties of the items recommended.


Methods of representation include:
- indexing or listing (non-hierarchical);
- taxonomy (hierarchical classes);
- ontology (complex relationships).

Generation often comes from techniques beyond the scope of the recommender system, but sometimes specific techniques are applied to generate the appropriate representation, such as:
- association rule mining;
- clustering;
- classification;
- dimensionality reduction.

The personalisation layer is about the dimensions depicting the way recommendations are provided.

Degree of personalisation can increase from:
- non-personalised (same for all users);
- ephemeral (immediate short-term interests);
- persistent (long-term interests).

Method is the parameter that we will analyse in depth later in this chapter, because it is the one commonly used to classify recommender systems. The list of methods is:
- raw retrieval of elements from queries;
- manual selection by experts, opinion leaders or others;
- content-based;
- collaborative filtering;
- hybrid;
- demography-based;
- knowledge-based;
- community-based.

Algorithms are the actual mathematical logic, the engine of the system. They can be of different types:
- model-based;
- memory-based;
- heuristic-based;
- instance-based;
- hybrid;

and make use of different techniques:
- attribute-based;
- item-to-item;
- user-to-user.

Outputs of the recommendation can be of three main typologies:
- suggestions;
- ratings/reviews;
- predictions.

The operation category is the one relative to the actual deployment of the system, and its dimensions are the following.

Architecture of the system can be:
- centralised (all in one location);
- distributed (more locations, peer-to-peer).

Location refers to the place where the recommendation is produced and delivered:
- at the information source;
- at a recommendation server;
- on the user side.

Mode represents who initiates the process:
- active push (recommendations actively pushed even when the user is not interacting);
- active pull (the user actively allows or explicitly requests a recommendation);
- passive (recommendations are part of the regular system operations).
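To make the shape of this framework easier to grasp, the sketch below encodes the three categories as a small data structure. It is a minimal illustration written for this text: the type and field names are arbitrary choices, not terminology from Manouselis and Costopoulou (2007).

```python
from dataclasses import dataclass
from enum import Enum, auto

class Method(Enum):
    # Personalisation methods from the approach category.
    RAW_RETRIEVAL = auto()
    MANUAL_SELECTION = auto()
    CONTENT_BASED = auto()
    COLLABORATIVE_FILTERING = auto()
    HYBRID = auto()
    DEMOGRAPHY_BASED = auto()
    KNOWLEDGE_BASED = auto()
    COMMUNITY_BASED = auto()

class Mode(Enum):
    # Who initiates the process (operation category).
    ACTIVE_PUSH = auto()
    ACTIVE_PULL = auto()
    PASSIVE = auto()

@dataclass
class SystemProfile:
    """One recommender system described along the three categories."""
    supported_task: str        # rationale, e.g. "receive a sequence of items"
    decision_problematic: str  # rationale, e.g. "ranking"
    method: Method             # approach, personalisation layer
    architecture: str          # operation: "centralised" or "distributed"
    mode: Mode                 # operation

# Hypothetical example: how a movie-streaming service might be classified.
streaming_service = SystemProfile(
    supported_task="receive a sequence of items",
    decision_problematic="ranking",
    method=Method.HYBRID,
    architecture="centralised",
    mode=Mode.PASSIVE,
)
```

Filling one such profile per analysed service makes different systems directly comparable along the same dimensions, which is the point of the framework.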

As previously stated, the most common classification (fig. 4) is the one based on the methods of personalisation in the approach category (Burke, 2002; Çano and Morisio, 2017; Isinkaye et al., 2015; Manouselis and Costopoulou, 2007; Prasad and Kumari, 2012; Ricci, 2015; Sinha and Dhanalakshmi, 2019). Describing the main instances of this classification will give a clear perspective on the conceptual models that reflect the behaviour of recommender systems. This analysis will be useful in order to understand which concepts need to be extracted from the black box to increase user awareness, and to frame the context of action for the research later on.

Fig. 4 - Map of recommender systems classification (Sinha and Dhanalakshmi, 2019).

Content-based systems learn and recommend items similar to the ones the user already liked in the past (Ricci, 2015). This approach is domain-dependent, as it focuses on the attributes of the items in order to generate recommendations. The system builds user profiles using features extracted from the items the user evaluated (explicitly or implicitly) in the past; items with a similar set of features are then recommended. This approach is very successful with items that have information-rich content or complete metadata (Isinkaye et al., 2015). A minimal sketch of this approach follows the pros and cons below.

Pros
- Independent user: these systems do not need data from other users to produce recommendations; personalisation is guaranteed only by item features and the user profile. This method also avoids the need to share user information with other users, ensuring privacy.
- Avoids “cold-start”: content-based systems can recommend any new item to a user despite the lack of previous ratings or interactions with that item.
- Transparency of results: justification (see 6.2) is easily guaranteed by making the user aware of the features used to generate recommendations.

Cons
- Insufficient diversity and novelty: these systems are prone to over-specialisation and are the primary cause of filter bubbles (Bozdag, 2015).
- Bounded content analysis: content-based filtering requires descriptive data for successful usage. If an item does not have enough information, giving a precise recommendation list is hard, and some attributes can be inaccurate.

(Isinkaye et al., 2015; Sinha and Dhanalakshmi, 2019)
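As promised above, here is a minimal content-based sketch (a toy example constructed for this text, not code from the thesis): items are described by hand-made binary feature vectors, the user profile is the average of the liked items’ vectors, and unseen items are ranked by cosine similarity to that profile.

```python
import math

# Hypothetical items described by binary features, e.g. genre tags
# in the order [action, comedy, sci-fi, drama].
ITEM_FEATURES = {
    "film_a": [1, 0, 1, 0],
    "film_b": [1, 0, 0, 1],
    "film_c": [0, 1, 0, 1],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def user_profile(liked_items):
    """Average the feature vectors of the items the user liked."""
    vectors = [ITEM_FEATURES[i] for i in liked_items]
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def recommend(liked_items, k=2):
    """Rank unseen items by similarity to the user profile."""
    profile = user_profile(liked_items)
    candidates = [i for i in ITEM_FEATURES if i not in liked_items]
    return sorted(candidates,
                  key=lambda i: cosine(profile, ITEM_FEATURES[i]),
                  reverse=True)[:k]

print(recommend(["film_a"]))  # items sharing features with film_a rank first
```

Note that no data from other users appears anywhere, which is exactly the “independent user” advantage, and the shared features also provide the material for justifying a recommendation.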

The collaborative approach evokes personal word-of-mouth recommendation (Herlocker et al., 2000). If “user A” and “user B” are found to have similar preferences, the items recommended to each of them are the ones well rated by the other, or by other similar users. In opposition to content-based systems, these systems are domain-independent (Isinkaye et al., 2015): they do not require any information about the items, as the whole system relies on user preferences and the similarities between users. A sketch of this approach follows the pros and cons below.

Pros
- Serendipity: this approach can produce variable and unexpected recommendations, and is less prone to placing users into filter bubbles.
- Community: it shows dramatically higher performance in the case of large user spaces.
- It does not require information about the domain at the beginning.

Cons
- Cold start: lack of information about new users affects accuracy at the beginning, and lack of interactions with new items tends to make it difficult for them to enter recommendation lists, reducing their discovery.
- Scalability: because of the algorithms on which they are based, in the case of high-dimensional datasets the system becomes very complex and demanding to maintain.

(Sinha and Dhanalakshmi, 2019)
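And a matching user-to-user collaborative sketch (again a constructed example, with hypothetical users and a deliberately naive similarity measure): an unseen item is scored by the ratings of the other users, weighted by how similar they are.

```python
# Hypothetical user-item ratings on a 1-5 scale (missing key = not rated).
RATINGS = {
    "ann": {"film_a": 5, "film_b": 4, "film_c": 1},
    "bob": {"film_a": 4, "film_b": 5},
    "eve": {"film_a": 1, "film_c": 5},
}

def similarity(u, v):
    """Naive similarity: inverse of mean absolute rating gap on co-rated items."""
    common = set(RATINGS[u]) & set(RATINGS[v])
    if not common:
        return 0.0
    gap = sum(abs(RATINGS[u][i] - RATINGS[v][i]) for i in common) / len(common)
    return 1.0 / (1.0 + gap)

def predict(user, item):
    """Weighted average of other users' ratings for `item`."""
    peers = [(similarity(user, v), RATINGS[v][item])
             for v in RATINGS if v != user and item in RATINGS[v]]
    total = sum(w for w, _ in peers)
    return sum(w * r for w, r in peers) / total if total else None

# bob has not rated film_c; ann (similar to bob) rated it low,
# eve (dissimilar) rated it high, so the prediction leans low.
print(predict("bob", "film_c"))
```

The sketch also shows the two cons concretely: a brand-new user has no co-rated items (cold start), and with millions of users the pairwise similarities become expensive to compute (scalability).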



By mixing approaches, hybridisation can keep the advantages of the combined methods and reduce or remove their problems (Çano and Morisio, 2017). Hybrids can be made out of different combinations of two or more approaches and their variations, and can become very complex and expensive to implement (Sinha and Dhanalakshmi, 2019).

There are different techniques for hybridisation:

- Weighted: the system combines the ratings of different recommendation methods with different weights (a sketch follows this list).
- Switched: the system switches between methods depending on the situation.
- Mixed: the system shows recommendations from different techniques together.
- Feature combination: characteristics of different recommender systems are put together into a single recommendation algorithm.
- Cascade: a system refines the recommendations coming from another system.
- Feature augmentation: the input of the system is the output of another system.
- Meta-level: the model learned by a recommender is used as an input feature for another one.

(Prasad and Kumari, 2012)
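As a concrete illustration of the weighted technique listed above, the sketch below blends the normalised scores of two hypothetical component recommenders. The scorers and weights are placeholders; in practice, the weights would be tuned on held-out data rather than fixed by hand.

```python
# Hypothetical normalised scores (0-1) from two component recommenders.
CONTENT_SCORES = {"Movie B": 0.2, "Movie C": 0.9, "Movie D": 0.5}
COLLAB_SCORES  = {"Movie B": 0.6, "Movie C": 0.7, "Movie D": 0.8}

def weighted_hybrid(score_maps, weights, k=2):
    """Combine component scores as a weighted sum and rank the candidates."""
    combined = {}
    for scores, w in zip(score_maps, weights):
        for item, s in scores.items():
            combined[item] = combined.get(item, 0.0) + w * s
    return sorted(combined, key=combined.get, reverse=True)[:k]

# 70% content-based, 30% collaborative: a choice the designer would tune.
print(weighted_hybrid([CONTENT_SCORES, COLLAB_SCORES], [0.7, 0.3]))
```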

Less common techniques also include:

- Demography-based: the system groups users by demographic data such as gender, age, location and other demographics.
- Knowledge-based: it makes use of knowledge about the user and the items to decide which item will fulfil the user's needs.
- Community-based: it creates a community that shares common interests and recommends items after aggregating the decisions obtained from the community during user-item interactions.

The next step is to look at what comes next in research about recommender systems, to understand how this complexity could change in the near future, and to be able to consider further developments and turn them into opportunities rather than obstacles.

2.4 - Present and future of RS: the application of Artificial Intelligence

Similarly to other computing fields, information retrieval (Buckland, 2017) has reached an innovation momentum thanks to the advent of Artificial Intelligence concepts and algorithms. This innovation has opened up several directions for the development of recommendation and personalisation in general. Netflix puts much effort into research and development (see 2.2), and over 80% of its business comes from recommendation (Basilico, 2019). Its perspective on future research trends highlights the following topics.

Deep learning algorithms are made of multiple processing layers working at different levels of abstraction, and they can discover intricate structure in large data sets (LeCun et al., 2015). Even though this technology has been in the spotlight since 2012, it only reached recommender systems research in 2017 (Basilico, 2019). This delay happened not only because the algorithms are not always more efficient than standard ones, but also because they raise problems for the scalability of the system or (an interesting point for this research) aggravate the black-box model and complicate the production of explainable recommendations (Zhang et al., 2019). Although deep learning is not a fully viable solution yet, it is an excellent road to follow and has already opened up new possibilities such as leveraging other kinds of data (Covington et al., 2016) and contextual or time-sensitive sequence prediction (Basilico, 2019).

Most of the algorithms in use take advantage of correlation (Basilico, 2019) to make assumptions about preferences. With this approach, however, it is hard to understand why a user behaves in a particular way. Are their choices driven by their preferences, or just by what the system recommended to them? The ability to implement causality in the system and to understand the cause-effect relationships of actions could make a big difference in the structure and effects of feedback loops (Varisco, 2019), reduce cases of filter bubbles (Bozdag, 2015) and help to train models able to debias recommendations.

Some services are now successfully using bandit algorithms (Lu et al., 2010) to test and deliver contextual personalisation. These algorithms can test different new solutions and maximise their effectiveness over time, compared to classic A/B testing (Vieler-Porter, 2019).

However, bandits focus on short-term rewards and single-session personalisation, while recommendations aim at long-term effects to maximise user satisfaction (Basilico, 2019). In order to focus on long-term solutions, it is necessary to introduce another artificial intelligence concept, reinforcement learning (Littman, 1994), which brings significant development challenges in a space as vast and dynamic as that of recommender systems, with the vast data sets involved nowadays.
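To ground the idea, below is a sketch of epsilon-greedy, one of the simplest bandit strategies. It is purely illustrative: the "arms" stand in for candidate recommendation slates, the reward for a click or play, and the contextual bandits used by real services additionally condition the choice on user and session features.

```python
import random

class EpsilonGreedyBandit:
    """Explore a random arm with probability epsilon, otherwise exploit the best."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {arm: 0 for arm in arms}    # times each arm was shown
        self.values = {arm: 0.0 for arm in arms}  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, reward):
        """Incrementally update the mean reward of the chosen arm."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Hypothetical usage: arms are alternative recommendation slates,
# each with an unknown click-through rate the bandit must discover.
bandit = EpsilonGreedyBandit(["slate_a", "slate_b", "slate_c"])
for _ in range(1000):
    arm = bandit.select()
    clicked = random.random() < {"slate_a": 0.02, "slate_b": 0.05, "slate_c": 0.03}[arm]
    bandit.update(arm, 1.0 if clicked else 0.0)
print(max(bandit.values, key=bandit.values.get))  # converges towards "slate_b"
```

The greedy step is exactly what the text warns about: the strategy optimises the immediate reward of the session, not the long-term satisfaction that reinforcement learning tries to address.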

One of the most exciting trends for design is experience personalisation, meaning that personalisation will cover not only the content (the "what") but also the way it is recommended (the "how"). The overall user experience, the interface, the layout of pages, the information used to describe a piece of content and even the degree of interaction will be contextual and personalised. The algorithm used to provide recommendations will be personalised and dynamically adjusted to the user's overall profile, the behaviour of the single session and contextual information. The user interface and all the elements used to convey information, as well as the content itself, will adapt to the user and the situation with different amounts of information, metadata, positioning and navigation (Wu et al., 2016). The interaction will be personalised to meet the needs, expertise and personality of the user, from the most passive one, who takes what is given, to the power user who wants control over everything (Basilico, 2019).

One more direction for research is the ability to provide valid recommendations to a group of users. The challenge is taking into consideration not only the sum of the preferences of the individuals who make up the group, but also the social and behavioural aspects of the group and the network of relations between individuals (Dara et al., 2019).

The more people delegate information retrieval to recommendations and trust personalisation, the more recommendations will have a considerable impact on their lives (see 2.5). It is crucial that this process is fair in many ways, starting from the calibration of the results to avoid exaggerating the dominant interest and to ensure that all the preferences of the user are proportionally represented (Steck, 2018). However, this is not sufficient. The process has to consider all stakeholders and tackle a variety of ethical issues.

2.5 - Stakeholders in recommendation and Ethics

Commonly, recommender systems research focuses on end users, their needs and their interests. This attention is logical if we consider that users are one of the most relevant stakeholders in the majority of recommender systems (Abdollahpouri and Burke, 2019). However, they are not the only ones involved.

The needs and interests of other stakeholders are often at stake as well. Examples of recommender systems that incorporate the objectives and preferences of different stakeholders are present in the literature, referred to as multi-stakeholder recommendation (Burke et al., 2016).

These recommender systems can belong to three, non-exclusive classes (Abdollahpouri and Burke, 2019):

- multi-receiver;
- multi-provider;
- with side stakeholders.

Group recommendations mentioned in the previous paragraph (2.4) are a clear example of a multi-receiver system, which needs to address the preferences of a group of people, for example a group of friends that wants to watch a movie together, referred to as a "homogeneous" group of multi-receivers. In contrast, a recommender for different receivers with different needs could be the case of an education system suggesting courses to students. Among the receiving stakeholders we find students, who need to find a course they like, but parents are also involved in such decisions, and their needs must be taken into account as well.

Another kind of multi-receiver system is reciprocal recommendation. This is, for example, the situation of dating apps (homogeneous) or job searches (heterogeneous), where two users should match together and the needs of both must be satisfied.

Multi-provider situations include all those systems that aggregate items from different sources, like housing on Airbnb or restaurants in food delivery services. Different parties provide the content, and the recommender system must be able to treat all of them equally. This kind of system can include provider preferences, in case a provider is interested in reaching only a specific target within the entire audience; if provider preferences are absent, the system should just make sure to give fair exposure to all providers.

Services are not always binary systems with only a provider and a receiver, but rather complex environments that involve different roles with different needs, which can be affected by the given recommendations. For example, drivers in delivery services are affected by which restaurant the system recommends to which user. This recommendation influences their routes and also the distribution of workload among different drivers.

Value-aware recommendation is the case in which the recommender platform itself is one of the side stakeholders taken into consideration. It often has needs of profit maximisation, long-tail promotion or other business-oriented goals and preferences.

Understanding how to include and prioritise the preferences of each stakeholder is a big challenge for such systems, and balancing their needs raises several issues of fairness of the results, among other ethical issues that affect recommender systems in general.

An analysis made by Milano et al. (2019) reveals that user-centred approaches do not consider the interests of other stakeholders when assessing ethical impacts.


Paraschakis (2017) developed a framework to provide a holistic view of the ethical challenges of recommender systems and indicates severe consequences if an ethical code is not followed in the development of these systems.

Milano et al. (2019) address the gap highlighted in the literature by providing an exhaustive taxonomy of such problems, which is useful for bringing the discussion about stakeholders and ethics into more user-centred research.

An operation made by a recommender system can raise an ethical problem when it:

- impacts negatively on utility for one of its stakeholders;
- violates their rights;

and this impact can:

- have an immediate effect;
- expose the stakeholder to future risks.

Early studies focus more on the content of recommendations than explicitly on the systems, and raise the need to provide ethical content to users: for example, avoiding that a family-shared movie recommendation account suggests a movie rated "PG-13" to children based on the preferences of their parents, or that it provides content offensive to minorities or content morally unacceptable in a specific culture (Tang and Winoto, 2016).

Later, focusing more on the recommender system itself, Paraschakis (2018, 2017, 2016) highlights five problematic areas:

- user profiling;
- data publishing;
- algorithm design;
- user interface design;
- online experimentation (A/B testing);

and lists possible risks:

- breaches of the user's privacy;
- anonymity breaches;
- behaviour manipulation;
- bias in recommendations;
- censorship;
- unequal treatment in tests.

Privacy is one of the first challenges that arise from recommender systems. Most of the commercially successful applications are based on collaborative or hybrid approaches and build user profiles based on vast amounts of collected data (see 2.3). Risks can occur at different stages:

- data collected or shared without consent;
- leaks of data sets or attempts at de-anonymisation;
- inferences that the system can draw from the data;
- inferences drawn from other users' profiles (collaborative filtering).

These problems have been tackled in three different ways:

- architecturally (decentralised systems);
- algorithmically (cryptography);
- by policies (like the GDPR legislation).

User-centred approaches have also been suggested to solve privacy issues, by implementing privacy controls and the possibility to "opt out". However, it is possible to make inferences about the user also from metadata about privacy preferences. In the end, due to the nature of recommender systems themselves, the issue of user privacy seems to surrender to the likely trade-off between privacy and accuracy, as their operations rely heavily upon user data.

By working as decision aids, recommender systems influence and affect users' autonomy by nudging them in a particular direction, addicting them to a certain kind of content, or limiting their exposure to diversity and reducing their options.

These operations can range from benign, supporting decision making by filtering irrelevant options and enabling individual agency; to questionable, if they tend to persuade or nudge a user towards certain types of items; to malign, like being manipulative by leveraging users' biases or forcing them in coercive ways (Burr et al., 2018).

The way people experience their identity is strongly affected by the categories they are assigned to (de Vries, 2010). The user profiling of recommender systems could damage this experience in two ways. First, it can be affected by biases and try to track a user's behaviour down to preconfigured categories or demographics the user does not identify with, rather than adjusting dynamically and contributing to the definition of the user's own identity. Second, the system may operate upon categories that have no recognisable attributes for the user, or upon machine-generated categories that do not correspond to known social representations; this makes it impossible for the user to self-identify in the categorisation.

Opacity is the problem that arises from a lack of transparency in the system. Explaining or justifying to the user why and how a recommendation was generated, what the conceptual model of the system is, and what their profile looks like and how it is generated, is a hard challenge and can create issues about autonomy. Privacy issues also hinder transparency: in collaborative filtering, for example, it is hard to guarantee trust in recommendations based on similarities with other users without the possibility to share those users' information. Transparency can even trigger negative implications for the diversity of options and the fairness of competition, as in a scenario where the most popular item gets recommended for its popularity, increasing its desirability in a self-reinforcing feedback loop.

The definition of fairness in the context of recommender systems often identifies with the issue of reproducing social biases (Milano et al., 2019). The two primary sources of this issue in recommender systems are the observation bias generated by feedback loops and population imbalance (biases that result from the reflection of existing social biases towards some groups).

If we consider fairness from a multi-sided perspective (Abdollahpouri and Burke, 2019), we must consider fairness issues coming from all the stakeholders and be able to balance requirements coming from all of them.

Polarisation and social manipulation have been demonstrated to be possible through recommender systems. The issue of "filter bubbles" (Bozdag, 2015) is widely discussed, especially regarding news and social media filters: they can isolate users from exposure to different viewpoints and damage political debate. Systems must also be protected from active manipulation coming from organised groups of users that can trigger intense positive feedback towards specific items. These systems can also be exploited for political propaganda, as demonstrated by the Cambridge Analytica scandal concerning interference with the US elections (Amer and Noujaim, 2019).

Recommender systems are complex environments where stakeholders, processes, approaches and ethical issues all come into play together. They set a variety of challenges for research and raise a plethora of issues to consider along the way. This research aims to unveil a little bit more of this complexity to the everyday user. To achieve this goal, it is necessary to understand which are the main user experience dimensions to keep in mind while designing such systems.


Do you trust me?
User experience in recommender systems

3.1 - Beyond algorithms: user-centric evaluations
3.2 - Trustful relationships last longer
3.3 - Dialogue for trust: self-disclosure and reciprocity
3.4 - User control
3.5 - Recommender transparency

Even though most of the research has usually focused on technical development and algorithmic accuracy, recommender systems are much more than that (Calero Valdez et al., 2016). They help people retrieve relevant information, and the user experience is essential to evaluate a good system. Starting from an established user-centred evaluation model for recommender systems (Pu et al., 2011), this chapter describes the main concepts for the definition of a new, focused model that summarises the main idea of this thesis, and introduces concepts, results and previous work on the user experience of recommender systems. The main focus is on the concept of trust. Then a new concept of dialogue is discussed. In conclusion, control and transparency are suggested as the foundational factors to achieve this kind of experience.

3.1 - Beyond algorithms: user-centric evaluations

For a long time after the creation of the first recommender systems, most of the research about them focused on algorithmic issues (Calero Valdez et al., 2016).

Fig. 5 - The relative frequency of terms from "author keywords" for each year in recommender systems publications, revised by clustering HCI terms vs algorithmic terms (Calero Valdez et al., 2016).

More recently, researchers have moved some interest towards HCI-related topics, because accuracy metrics are not enough (McNee et al., 2006), and have started to evaluate recommender systems considering user perceptions and user experience related issues (Pu et al., 2011).

Trust is one of the most discussed and considered values in this kind of user-centric evaluation, and it is also the target of this research.

Algorithmic issues and accuracy-based evaluation have traditionally been the metric for recommender algorithms, and it is critical to understand that this is not in opposition to user-centric discussions: it actually contributes to the effectiveness of the system, increases the usefulness perceived by the user by creating value for them, and ultimately feeds trust (Pu et al., 2011).

Considering that recommender systems are information retrieval agents, we can assume that their effectiveness is associated with the relevance of the recommended items.

Relevance is a central concept of information retrieval measured by two different values: precision and recall.

Precision can be associated with accuracy and is a measure of purity, expressed as the proportion of the documents in a retrieved set that are relevant to the query (did the retrieved set include only relevant documents, or also some non-relevant ones retrieved in error?). Recall can be associated with another important concept of recommender systems, diversity, and is a measure of completeness, expressed as the percentage of the relevant documents in a collection that were found by the retrieval system in response to a query (were all relevant documents retrieved?) (Buckland, 2017).
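A minimal numeric illustration of the two measures, with invented retrieved and relevant sets:

```python
def precision_recall(retrieved, relevant):
    """Precision: share of retrieved items that are relevant.
    Recall: share of all relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

retrieved = ["i1", "i2", "i3", "i4", "i5"]        # what the system returned
relevant = ["i1", "i3", "i5", "i7", "i8", "i9"]   # what the user would find relevant

p, r = precision_recall(retrieved, relevant)
print(f"precision = {p:.2f}, recall = {r:.2f}")   # 0.60 and 0.50
```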

Accuracy and diversity are essential characteristics to balance in a recommender system, in order to accommodate the behaviour of the user whether they are keen on a specific piece of content or rather want to explore in search of discoveries (Steck et al., 2015).

Apart from the technical aspects described in chapter 2, these two characteristics are the only technical concepts considered here, because of the impact they have on user experience and trust in the evaluation framework adopted for this discussion (Pu et al., 2011). They will not be discussed further, though, because their optimisation depends on the optimisation of algorithms and is beyond the boundaries of user experience design.

With the attention moved towards HCI topics, a lot of experience-related concepts sprouted in the recommender systems bibliography. We already mentioned trust, further discussed later. The two other important concepts for this discussion are transparency, the ability of the system to show or explain its inner logic to the user, and control, the degree of interaction that the system allows the user. In the literature we can find more algorithm-related goals, besides accuracy and diversity, like serendipity, the ability of recommendations to surprise the user by being relevant but unexpected, or novelty, the ability of the system to recommend items that are new to the user (Ricci, 2015). Other user-related concepts, alongside transparency and control, are scrutability, the possibility for the user to tell the system that it made a mistake, and satisfaction, which represents the ease of use and enjoyment of the user (Tintarev and Masthoff, 2007).

Some evaluation frameworks have demonstrated relationships and interdependencies among such concepts. For this discussion, which introduces all the main concepts that drive the research, the model is based on ResQue (Recommender system Quality of user experience), the user-centric evaluation framework (fig. 6) developed by Pu et al. (2011).

The resulting model (fig. 7) focuses on the trust node and isolates only the concepts described earlier that have been shown to influence trust. On one side of the model there are accuracy and diversity, which foster trust by increasing usefulness. On the other side there are transparency and control. Two more concepts are added to mediate their influence on trust; they are this research's contribution to the model and its key concepts.

Fig. 6 - The ResQue model describes relations between user-centric evaluations of a recommender system (Pu et al., 2011).

Fig. 7 - The model derived by isolating trust and the nodes that have an influence on it.

Fig. 8 - The part of the model discussed in this chapter, which synthesises the main discussion of this research.

Dialogue is a formalisation of the interdependency between transparency and control. The former represents the part of the conversation coming from the system, while the latter represents the one coming from the user. Awareness mediates the passage from dialogue to trust, and its importance has been broadly discussed in chapter 1.

As previously stated, the upper part of the model, which concerns algorithm optimisation, will not be considered, because it is beyond the responsibility of design. The resulting model is shown in fig. 8.

Starting from trust and going backwards, this chapter continues with the discussion of each of these concepts.


3.2 - Trustful relationships last longer

“Trust is the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party.”

(Mayer et al., 1995, p. 712)

Mayer et al. (1995) developed a model (fig. 9) describing this definition, showing the elements of the system, the relationship between the characteristics of the trustor (the party that trusts) and the trustee (the party that is trusted), the element of risk-taking, and the results of feedback loops on the trustor's perception of the trustee.

The main characteristic of the trustor is an inherent trait defined as "propensity to trust", which represents the willingness to trust or, better, the propensity to trust before having any information about the trustee. This factor mainly depends on the developmental experiences, personality type and cultural background of the trustor.

Fig. 9 - The model of trust proposed by Mayer et al. (1995).

The main characteristic of the trustee is trustworthiness, a quality composed of three factors:

- the ability of the trustee to have a valuable impact on the specific domain, based on their skills and competences;
- the benevolence, or how much the trustee is considered to act in favour of the trustor rather than for egocentric profit; and
- the integrity of the values and set of principles that the trustee follows, and how acceptable the trustor finds these principles.

The importance of trust was discussed in the first chapter, regarding technology in general and particularly its acceptance. Focusing on recommender systems specifically, some authors propose trust models or definitions that mostly reflect the general definition from Mayer. Often ability (sometimes mentioned as competence), benevolence and integrity are fundamental parts of these models (Nilashi et al., 2016; Pu and Chen, 2006). The model proposed by Pu and Chen (2006) has three main components: system features, trustworthiness and trusting intentions. Trustworthiness follows the definition of Mayer but also includes the reputation of the platform. System features are the design features of the system that support its trustworthiness and belong to three categories: interface display techniques (transparency - Ed.), the algorithm used (recommendation quality - Ed.) and user-system interaction (control - Ed.). Trusting intentions include the behaviours and expectations of the user towards the trusted system once trust is established, like the intention to return to the platform or the store, or the intention to follow, purchase or play recommended items. This concept of a return to the product/system/platform is also widely shared by the recommender research community. Ricci (2015), for example, assumes that "...trust in the system is correlated with repeated users, as users who trust the system will return to it when performing future tasks". Later, in the definition of the ResQue model, Pu and her colleagues (2011) cite the work of Grabner-Kräuter and Kaluscha (2003) about online trust to reaffirm that "...consumer trust is positively associated with their intentions to transact, purchase a product, and return to the website". A relevant by-product of this concept of return, and of trust, is loyalty (Ricci, 2015). Trustful relationships last longer. Trust supports loyalty to the system and, from a business perspective, is insurance on revenue streams; this also adds an economic justification to the attention that user experience research on recommender systems puts on the value of trust.

3.3 - Dialogue for trust: self-disclosure and reciprocity

As seen in the ResQue evaluation model, transparency fosters trust and is affected by control. The analysis (see Appendix A) of several interactive recommender systems (He et al., 2016) demonstrates how user control and tinkering with the system can support the comprehension of the inner logic of the recommendation process. The link that is missing, though, is the reciprocal influence that transparency has on control: if the system explains and unveils the way it works correctly, the user will have an easier time exploiting all the functionalities of the system. This interdependency of the two concepts is not well explicated in the model, and it is essential to lay the foundation of dialogue. In a conversation between the system and the user, transparency is the channel of communication of the system, while control is a logical channel for the user to express themselves actively and explicitly. From a human-computer interaction perspective, we could draw a parallel to a classic matrix of man-machine interaction (fig. 10), with a dynamic system, also known as a feedback loop (Dubberly et al., 2015).

Fig. 10 - The "feedback loop" is a good representation of the stream of information between the user and the system. In this particular case, the efficacy of the "displays" of a recommender system is defined by the level of transparency (Dubberly et al., 2015).

Another couple of concepts that contribute to describing and defining this dynamic emerge from the research of Lee and Choi (2017) on the user experience of conversational agents. These concepts are self-disclosure and reciprocity.

Self-disclosure is a fundamental behaviour for developing and maintaining relationships: the level of intimacy of the information disclosed is positively related to the appreciation induced in the listener, and the opportunity to disclose also induces the disclosing party to appreciate the listener. This ambivalence of disclosure leads to the importance of reciprocity. The successful development of a relationship depends on an even and balanced disclosure by both parties during a conversation or, in this particular case, during the interaction.

The study demonstrates the influence of these two concepts on trust, but reciprocity had a much more significant impact than self-disclosure. What is more, the relationship between disclosure, intimacy and user satisfaction was not confirmed (Lee and Choi, 2017). For these reasons, it is possible to assume that in order to increase trust towards a conversational technology (through dialogue - Ed.), it is far more relevant to guarantee balance in the exchange of information during the interaction (reciprocity) than to enable the user to disclose more intimately with the system.

Experiences and research on dialoguing recommender systems have been developed and tested since the early years of this technology (Bridge, 2002; Burke et al., 1997; Goker et al., 2004; Warnesta, 2005). They were developed by imitating the interaction with a sales clerk/advisor through natural language dialogue, and were used to narrow a set of items towards the retrieval of the desired one. This process works by asking questions about features to filter results, or by proposing solutions and filtering based on inferences made from the user's selections or feedback on the proposed items. This kind of interaction can naturally guide the retrieval of information for the user and allow them to critique the recommendations. This process also allows the system to devise the user's preferences over short and long-term interactions and include them in the rationale of the recommender for a personalised experience (Chen and Pu, 2012). Ricci (2015) infers from his research findings that when recommender systems ask questions, they assume a social role, valuable for users, who can perceive that the system is interested in their preferences. The ability of the system to provide valuable feedback also improves when the number of questions increases. Moreover, he highlights the importance of explicating intentions during the interaction.

Nowadays, beyond the positive influence on trust, research about dialogue with recommender systems (and technology in general) turns from important to necessary because of the diffusion of conversational agents, vocal assistants and voice interfaces (Lee and Choi, 2017).

After the introduction of the concept of dialogue, it is crucial to look at the foundational elements of transparency and control, to be able to address the model at its base and show evidence of how they affect trust.

3.4 - User control

User control is a fundamental component of interactive systems. It is the third of the ten fundamental usability heuristics drawn up by Jakob Nielsen (1994), which encourages the designer to guarantee the freedom to steer away from undesired states of the system.

In the case of recommender systems, different user interventions can occur at any point in the process (He et al., 2016), and we can focus on three particular "moments" of it.

During the input moment (preference elicitation), besides the widespread use of implicit feedback (see 2.3), one could let people freely express their preferences explicitly. By showing users their profile and providing features to edit their preferences, they not only gain a more active behaviour but start to seek transparency, as they get to acknowledge the data collected about them and feel the need to understand how it is collected. They also appreciate the reciprocity generated by this practice, as they recognise that the data and information they created are not exploited exclusively by the system but return some value to them. Users can use this information to improve themselves and employ preference controls to set long-term goals, in order to express not only manifested preferences but also desires and ambitions.

During the process moment (algorithmic computation), users could intervene in the algorithm parameters to tune the weight of some specific logic, or the system could provide different types of algorithms so that users can switch between them based on their needs. Having direct access to the process of recommendation is very powerful and can have a significant impact on the results. However, manipulating such complexity requires a high level of transparency to be done with some awareness, and makes it difficult to balance information overload and cognitive load. It is a more efficient tool for occasional breaks out of filter bubbles and for exploring items outside ordinary preferences.

During the output moment (presentation of the results), items can be filtered, rearranged, sorted or rated. This activity can help make faster choices, particularly in contextual situations when users have their needs clear in mind (Harambam et al., 2019).
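As a sketch of what output-moment control could look like, the snippet below re-ranks a recommendation list according to user-adjustable tag weights. Everything here (the scored list, the tags, the slider values) is hypothetical; the point is that explicit user input changes the ranking with immediate, inspectable feedback.

```python
# Hypothetical system output: items with a base relevance score and tags.
RECOMMENDATIONS = [
    {"title": "Movie A", "score": 0.9, "tags": {"sci-fi"}},
    {"title": "Movie B", "score": 0.8, "tags": {"comedy"}},
    {"title": "Movie C", "score": 0.6, "tags": {"sci-fi", "action"}},
]

def rerank(items, tag_weights):
    """Boost or dampen each item by the user's tag preferences, then re-sort."""
    def adjusted(item):
        boost = sum(tag_weights.get(tag, 1.0) for tag in item["tags"]) / len(item["tags"])
        return item["score"] * boost
    return sorted(items, key=adjusted, reverse=True)

# The user drags two sliders: "comedy" up (1.5x), "sci-fi" down (0.5x).
for item in rerank(RECOMMENDATIONS, {"comedy": 1.5, "sci-fi": 0.5}):
    print(item["title"])
```

Because the mapping from slider to ranking is simple and deterministic, the user can tinker with it and immediately see the consequences, which is precisely the behaviour discussed next.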

User control over dynamic interfaces that react with immediate feedback to user-controlled changes supports tinkering with the black box (Budiu, 2018). Tinkering helps users understand the system better, but also makes them feel in control by making the relationship between input and output explicit, increasing their involvement, the system's transparency and their ability to revise those same parameters they control, in order to receive better recommendations (He et al., 2016; Ricci, 2015; Schafer, 2005).

Even if recommendation accuracy is probably the primary intent of control (Pu et al., 2012), previous research shows positive effects on user-related concepts (He et al., 2016) like transparency and decision quality (Ricci, 2015), satisfaction (He et al., 2016; McNee et al., 2003; West et al., 1999), loyalty (McNee et al., 2003; Pereira, 2000) and, foremost, trust (Komiak et al., 2004; Wang, 2005; West et al., 1999).

Being information retrieval aids, the power of recommender systems resides in their ability to reduce the cognitive load of the user (Konstan and Riedl, 2012). Dealing with control in such systems is a challenge. As in similar domains, only some expert users may have the patience, skills or interest to interact with and edit complicated settings or controls (Harambam et al., 2019). It is necessary to achieve a balance between rich control and an acceptable effort for the user (Jin et al., 2018). However, as previous work demonstrates, users are more satisfied when they can have control over recommender functionalities (Konstan and Riedl, 2012), and much of this depends on the design of the controls and how much they align with the expectations of the users (Harambam et al., 2019). This condition can be better addressed by following the seventh usability heuristic (Nielsen, 1994) about flexibility and ease of use: allow the system's interaction to be flexible for different levels of experience by using shortcuts and hidden or layered functionalities, and allow users to tailor their most-used actions according to their preferences.

3.5 - Recommender transparency

Transparency addresses the "black-box" nature of recommender systems for the user by unveiling the rationale of their hidden, inner processes (He et al., 2016). It determines if the user is allowed to understand the inner logic of the system or the reason for a particular recommendation (Pu et al., 2011). Transparency can also be associated with the first usability heuristic, "visibility of system status" (Ricci, 2015), according to which "the system should always keep users informed about what is going on, through appropriate feedback within a reasonable time" (Nielsen, 1994).

The most common strategy exploited to foster transparency is explanations. Their efficacy was determined by Herlocker (2000) in the early years of recommender systems research. Explanations can be displayed with different techniques, from a simple title or dialogue interface (Netflix, Inc, 2020) to tags (Vig et al., 2009) to histograms and grouping (Herlocker et al., 2000). Explanations are information about the recommendation that supports objectives defined by the designer. These objectives can be several; some of them have already been mentioned in this chapter, and they can be other than transparency. Explanations can pursue scrutability if they allow telling the system it is wrong, trust when they increase confidence in the system, effectiveness if they help the user in decision making, persuasiveness when they nudge the user towards a particular choice, efficiency when they allow faster decisions, or satisfaction if they increase the enjoyment and usability of the system (Tintarev and Masthoff, 2007).
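As a toy illustration in the spirit of the tag-based explanations cited above, the sketch below builds a "because you liked..." justification from shared tags. The data and the sentence template are invented for the example; a real system would draw on its actual recommendation rationale.

```python
def explain(recommended_item, item_tags, user_liked):
    """Build a simple 'because you liked...' explanation from shared tags."""
    shared = {
        liked: item_tags[recommended_item] & item_tags[liked]
        for liked in user_liked
        if item_tags[recommended_item] & item_tags[liked]
    }
    if not shared:
        return f"Recommended for you: {recommended_item}"
    # Pick the liked item with the largest tag overlap as the anchor.
    liked, tags = max(shared.items(), key=lambda kv: len(kv[1]))
    return (f"Recommended because you liked {liked}: "
            f"both are {', '.join(sorted(tags))}")

TAGS = {
    "Movie A": {"sci-fi", "space"},
    "Movie C": {"sci-fi", "space", "action"},
}
print(explain("Movie C", TAGS, ["Movie A"]))
# Recommended because you liked Movie A: both are sci-fi, space
```

Whether a sentence like this counts as transparency or merely as justification depends on whether the tags really drove the recommendation, which is exactly the distinction drawn below.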

Friedrich and Zanker (2011) defined a possible categorisation for explanations based on three different factors: first, the paradigm of the system at stake (collaborative, content-based, etc.); second, the information source exploited to generate the explanation (user model, item recommended, alternative items, etc.); third and last, the reasoning model of the explanation, "white-box" or "black-box", depending on whether or not the explanation is able to disclose the processes of the system, and so whether in the end it supports transparency or not.

The categorisation of "white-box" and "black-box" exposes the need to differentiate transparency from justification (Vig et al., 2009). While transparency, as already said, is meant to honestly describe how the system works, justification can describe the selection of items without a direct connection to the recommendation algorithm (Ricci, 2015), only stating the reason why that particular selection is recommended (He et al., 2016). In some particular cases, like very complicated algorithms or the protection of trade secrets, justification turns out to be a preferable option, since it provides more freedom in the design of the explanation (He et al., 2016; Ricci, 2015; Vig et al., 2009). Special situations apart, transparency should always be preferred over justification, because Kizilcec (2016) has provided strong evidence that procedural information has a more significant impact than information about outcomes when it comes to mitigating the damage to trust caused by a missed user expectation.

Explanations are not the only solution for pursuing transparency. The employment of user control and the support of tinkering behaviours have already been discussed as successful methods to achieve it. More sophisticated data visualisation techniques can represent the mechanisms of recommendation algorithms, or portions of them, in a meaningful or interactable way (Bostandjiev et al., 2013, 2012; Gou et al., 2011; Parra and Brusilovsky, 2015; Verbert et al., 2013). Another couple of strategies can be found again among the usability heuristics of Jakob Nielsen (1994): the second and the tenth heuristics.

The second heuristic suggests matching the system to the real world people are used to living in, by following conventions and displaying information naturally and logically. It means being able to reveal the system's logic in a way that follows users' mental models, so that it is easy for them to grasp it from the interaction.

The tenth heuristic is "help and documentation". It is probably the least elegant solution, but it is an essential fallback for having a thorough and explicit explanation of the entire system, in case anyone is interested in a complete and aware comprehension of it.

A variety of previous research has demonstrated the influence that transparency has on other fundamental aspects of user-centred evaluation: for example, perceived accuracy was revealed to be influenced by whether or not a user recognises a connection between the elicited preferences and the recommended item (He et al., 2016; Pu et al., 2011). Transparency has also turned out to have a direct link to an increase in the acceptance of items, the satisfaction of the user and trust (Cramer et al., 2008; Ricci, 2015). According to Kizilcec (2016), transparency may promote but also erode trust. He supports the idea that explanations about "why", "how" and "trade-offs" can influence respectively the perception of competence, benevolence and integrity, the three factors of trustworthiness (see 3.2). He also shows that increased transparency leads to fewer misconceptions.

Transparency, together with control and trust, has received substantial attention over the years from the research community; they are essential evaluators of the user experience in recommender systems. Those described here are all fundamental concepts for designers to know if they want to undertake the design of such systems.


Part 2 - Research

Establish a dialogue. Goals of the research
Design with the users. Set up a methodology
Building trust. Going through the process


Establish a dialogue.
Goals of the research

4.1 - Research through design
4.2 - Goals and expectations: research questions

Design has its own methodologies, and they can be applied to the world to experiment and do research in order to create knowledge. The goal of this research is to use the means of Design to experiment with the concept of dialogue and explore its relationship with trust. The research reveals whether enhanced communication between the user and a recommender system can ease their relationship, foster trust towards the system and make the service more effective for the user.


4.1 - Research through design

For a long time, Design and Research have been considered two clearly distinct matters, the first related to industrial practice, the other to academic experiments. Over the last three or four decades, however, Design in its different expressions has grown an academic basis, become a university-taught subject and developed its own research culture. This occurrence brought the introduction of research activities as a formal part of the design process, a practice known as research for Design, but also made design activities and artefacts responsible for a substantial contribution to the process of generating and communicating knowledge, the designerly way of doing research, usually referred to as Research through Design (RtD) (Stappers and Giaccardi, 2014).

The term "Research through (Art and) Design" was first mentioned by Frayling (1993), former rector of the Royal College of Art, during a speech where he distinguished three possible relationships between Research and Design:

- Research into Design is when the topics or methodologies of Design are the targets of Research;

- Research through Design is when design practices and methodologies are the tools used to do research and create knowledge;

- Research for Design is when Research is a part of the design process and informs the design of an artefact.

Even if RtD belongs to design in general, and often even to fields other than Design that have acquired some of its practices over time, the Human-Computer Interaction and Interaction Design communities have been the most prolific in discussing the concept of RtD. Reasons for this particular attention can be found in the origins of these disciplines, which developed in constant relationship with computer science in research universities, and so with a natural academic culture, or in the struggle to shape into practical applications the abstract problems and complexities raised by the opportunities of information technology (Stappers and Giaccardi, 2014). In this context, two renowned authors are Zimmerman and Forlizzi, who pushed the discussion of RtD in the Human-Computer Interaction community and who in their publications describe RtD as follows:


"...we intend the term design research to mean an intention to produce knowledge and not the work to more immediately inform the development of a commercial product..."

(Zimmerman et al., 2007, p. 494)

or later:

"...a research approach that employs methods and processes from design practice as a legitimate method of inquiry".

(Zimmerman et al., 2010, p. 310)

The activities of Research and Design have long been considered very different and, in some ways, even opposite. Usually, the purpose of Research is considered to be the production of generalised, abstracted, theoretical knowledge oriented towards long-term reuse by others in different areas, whereas for Design it is the creation of a specific solution, realised for a specific context, here and now.

Despite this, they are both aimed at the generation of something new based on what was already known, so from another perspective they end up being quite similar.

They also contain parts of each other in their activities. Analysis and evaluation are always present in the design process, even if they belong to the realm of research practice, while research projects often include the design and development of devices or stimuli for experiments.

Fig. 11 - The aimed-for results of research and design are often different (Stappers and Giaccardi, 2014).


The connection between Design and Research is clear, and it is possible to describe the different ways in which they influence each other by analysing the three relationships mentioned by Frayling in his speech at the Royal College of Art (Stappers and Giaccardi, 2014).

Research for Design is when Design is informed by Research (fig. 12). Doing research is a part of doing design, as already said. Design processes are full of established activities that come under "research", like observation, measurement, interviews, literature review and analysis, particularly when it comes to user-centred methodologies (see 5.2), with user research activities like user testing for insight or validation. Nowadays, it is a desired skill for the designer to be able to collect and interpret scientific knowledge, and even generate new knowledge when needed for the correct development of the design (Stappers and Giaccardi, 2014).

Fig. 12 - Research for design (Stappers and Giaccardi, 2014).

Research through Design is when design practices and methodologies are the tools used to do research and create knowledge (fig. 13). RtD is doing design as a part of doing research. It is the ensemble of design activities that contribute to the generation of new knowledge: actions that are usually related to design, to the profession and skills of the designer, or to its methodologies, such as gaining an understanding of complex situations by framing and reframing, iterating, or diverging and converging on problems with a Design Thinking approach (see 5.2).

This contribution of Design can happen at two different scales. Design can be included in Research if it is just a tool to provide stimulus material, like a device or a prototype that contributes to the process of generating knowledge by opening up new opportunities. This is the situation where Design is just a part of Research.

In other cases, the entire research process can be held in a "designerly" way, following design processes. The creation of stimulus material and other design activities does not only open up new opportunities for investigation, or provoke discussions and debates by bringing new situations into existence; in this case, knowledge is also generated during the making of those stimuli, or prototypes. As the designer struggles with opportunities and constraints, often including the goals or constructs of the research, these get compared among themselves and with other empirical realities in the world. The designer gets to drive the research across real-world obstacles, and this Design Thinking process brings insights that can be translated into usable knowledge and shared (Stappers and Giaccardi, 2014).

The process of designing and making puts the designer in front of several confrontations, like competing or conflicting background knowledge, balancing theory and technology, or dream and reality. The act of design provokes a particular cognitive process able to explicate tacit values and latent needs (Stappers, 2014).

Fig. 13 - Research through design (Stappers and Giaccardi, 2014).


Research into Design is when the topics or methodologies of Design are the targets of Research (fig. 11). Often, though not always, this kind of research is held in a Research through Design manner, since those who develop design methodologies may want to do so in a designerly way. In these cases, Design covers a double role (fig. 14): it becomes both the object of study and part of the way the research is conducted.

Similarly, designers cover both the role of researchers developing the tools, techniques or methods and that of recipients when they become the users of them (Stappers and Giaccardi, 2014).

This last is also the case for the research of this thesis, where a Design Thinking, User-Centred methodology is exploited to research and study the effects of Design on the specific Human-Computer Interaction issue of "trusting recommender systems", in order to produce knowledge that can drive future design activities directed at this particular field.

Fig. 14 - Research through design into design (Stappers and Giaccardi, 2014).

4.2 - Goals and expectations: research questions

As mentioned in the introduction, it is important to find new ways to interact with technology: improving communication with ever-evolving technologies is crucial from the moment they become autonomous and exploit processes that are already slightly beyond our full comprehension.

The reason for this research is to begin an exploration around trust and technology, to develop the concept of dialogue in recommender systems and to evaluate its efficacy.

The aim is to experiment through the means of Design to find new ways to improve on these values and to build a better relationship between people and the technologies that can ease their lives, without the worry that their interests are being disregarded.

The research focuses on experimenting with the transparency of the system, the control available to the user and the dialogue created between them. The goal of the research is twofold. First, to experiment with how Design can evaluate the quality of these factors in existing recommender systems and during the design process. Second, to identify good design patterns that can improve trust in recommender systems through dialogue.

In order to achieve these research goals, each of them is divided into two more pragmatic objectives that will guide the research process; their results will define the answers to the two main questions.

For the evaluation of recommender system quality, it is essential to identify which elements of interaction are perceived by users on one side, and to understand how significant their impact is on the user's perception of trust on the other.

Since trust is a personal evaluation, it is not possible to exclude people's perception from the evaluation. For the identification of good design patterns, the first requirement is to identify all the elements of interaction that ease dialogue between the system and the user, and then to demonstrate that dialogue is an effective way to foster trust.

To sum up, the research questions (R.Q.) and experimental questions (E.Q.) are:

R.Q. 1: How can we evaluate the degree of “control on” and “transparency of” recommender system actions during the design process?

- E.Q. 1.1: Which of the recommender system's processes and components are perceived or not by the user?

- E.Q. 1.2: Which of the recommender system's components have the most significant impact on the sense of control and transparency perceived by the user?

R.Q. 2: Are there any design patterns that exploit dialogue in order to deliver trustworthy experiences with recommendation agents?

- E.Q. 2.1: Which elements of interaction ease dialogue and increase awareness between the user and a recommender system?

- E.Q. 2.2: Can dialogue be an effective way to foster trust?

By collecting insights and answering these questions, the expectation is to be able to draw a set of guidelines for designing better, trustworthy, interactive recommender systems. The guidelines will consolidate all the knowledge coming from the research to make it reusable by others, and will hopefully validate the concept of dialogue and its efficacy in fostering trust.

Design with the user.
Set up a methodology

5.1 - Identify boundaries of research: entertainment services
5.2 - Design thinking & User-Centred Design
5.3 - Research roadmap
5.4 - Address research questions

This chapter presents the methodology followed in tackling the research questions and achieving the goals of the thesis. First come the boundaries set for the field of research, then an overview of design methodologies to understand the guiding principles. The organisation of the actual methodology and the set of research activities is described to draw a precise process to follow. In the end, the process is discussed in relation to the research questions, to justify the suitability of such a methodology for the aims of this research.


5.1 - Identify boundaries of research: entertainment services

Before starting to plan the research, it is vital to set boundaries around the context, in order to define and clarify the operating space and create a metaphoric "uncontaminated laboratory" for the activities of the experimentation.

In particular, it is essential to make a note about the selected "space". Although recommender systems and recommendations are present in all kinds of services, in many various fields, for this particular research only the recommender systems that are part of entertainment services (like multimedia streaming services, online movie/music/video game libraries or similar) will be considered. The reasons for selecting this particular subset are several:

First of all, together with e-commerce, these services are widely diffused and known by a broad base of users, so it is easy to discuss them with people during user-centred design activities (see next section).

Similarly, academics have held an extensive discussion on entertainment and e-commerce, making it easier to retrieve a solid base of literature about them. As an example, the streaming service Netflix pushed academic research in the field of recommender systems with the Netflix Prize (Bennett and Lanning, 2007), a cash prize for the first research group to achieve a 10% increase in the accuracy of the service's algorithm.

The last and most important reason is to have a manageable environment to research the complex dynamics related to trust, trying to avoid the added complexity coming from other fields of application like finance, healthcare, news or social networks. Entertainment is a relatively safe environment for users. They are not spending money based on a service recommendation like they do on e-commerce, or investing that same money in insurance or financial products. They are not getting recommendations towards important long-term decisions like on real estate platforms, nor risking their safety with healthcare or legal recommendations. They are not receiving news and information that can influence or, worse, manipulate their ideas or even political orientation. They are not influencing their social relationships on social media. And so on.



Entertainment recommendations involve only short-term and low-risk decisions. Of course, entertainment products can influence people's ideas and their perception of the world, but this is rarely present in users' intentions and does not affect their choices. In a typical situation, the most significant risk for users is losing two hours of their lives in front of a bad movie. Time has its value, of course, but almost everybody will, at some point, decide to spend two hours watching a bad movie just to shut down the brain. It is not a big deal. This fact makes multimedia and entertainment services a suitable environment for research on the relationship between trust and recommender systems. They have fewer elements at play that can affect the trust of the user towards the system, and they provide a manageable situation in which to carry out a study on how the design of the system's interactions impacts the relationship with the user.

In scientific experiments, good practice is to reduce the elements of the system to tend towards a "controlled" environment and to be able to isolate cause-effect relationships, avoiding the contamination of results with influences from external agents.

At a later stage, it will then be possible to take the findings of this "isolated" research environment and apply them to more complex ones, to discover if they can be scaled to become general rules and to observe how the inclusion of new factors can move the balance of the system in different directions.

5.2 - Design thinking and User-Centred Design

Design thinking refers to designers' particular approach to problem-solving ("Design Thinking, Essential Problem Solving 101 - It's More Than Scientific," n.d.). Today it has become a popular term and indicates a particular set of practices, methodologies and processes developed on the basis of that original concept and applied to all kinds of business and human activity.

The first seed of the idea that there are some patterns in "design thought" dates back to 1969 (Simon, 2008). Peter Rowe (1987) first defined the term in his homonymous book. It then became famous thanks to Buchanan (1992). More recently, the concept has been stretched beyond its domain limits. Today, Design Thinking is written in capital letters and has become a structured thinking process and a set of methodologies often used to introduce design culture into different fields of application (Tschimmel, 2012). An increasing number of companies are adopting this process for innovation, and the most prominent design consultancies, like IDEO, were not only adopting it before the term was even defined, but are teaching these methodologies to the majority of their clients.

Different institutions have defined many different models of the process over the years. One of the most famous is undoubtedly the "Double Diamond" framework (Fig. 15) set up by the British Design Council which, with its peculiar shape, stresses and highlights the idea that Design Thinking is a sequence of divergent and convergent thinking in analytic and synthetic mental processes.

Fig. 15 - The Double Diamond model of the Design Thinking process (British Design Council, 2005)

The model considered to draft the methodology of this research is the one developed by the Hasso Plattner Institute of Design at Stanford University (Fig. 16). This model is also the one officially promoted by the Interaction Design Foundation (Friis Dam and Siang Teo, 2020).

The idea is that the Design Thinking process is made of five distinct stages: Empathise, Define, Ideate, Prototype and Test. A peculiar characteristic is that these stages are not necessarily sequential. They can be parallel, even out of order, and, most importantly, they are iterative (Friis Dam and Siang Teo, 2020).

Fig. 16 - The d.school's (Hasso Plattner Institute of Design at Stanford) Design Thinking process

In depth, these five stages are:

- Empathise: understand the problem (typically through user research) from the user's perspective, dismissing personal assumptions about reality and embracing user needs and insights.

- Define: synthesise the information gathered and analyse the observations collected during the empathy phase to define the problem.

- Ideate: from the solid ground of information created in the first two phases, challenge assumptions and look for alternative perspectives on the problem while focusing on innovative solutions.

- Prototype: build cheap, fast and scaled-down versions of the final solution to experiment with, investigating as many possibilities as possible and identifying the best-performing ones among those generated in the previous phase.

- Test: test the prototypes and evaluate the solutions; this would eventually complete the process but, usually, it generates results that are more useful to redefine the problem, rehash solutions or better understand user needs, in the iterative fashion that is typical of this process.


The process can reframe wicked problems in a user-centred way and allows designers to target what is most relevant for people.

User-Centred Design (UCD) is a design methodology that, like Design Thinking, has an iterative process and puts users and their needs at the core of the design activity. It focuses on the usability and accessibility of the product through a holistic approach to the user experience. The term was coined by Donald Norman and his team at the University of California in San Diego and brought to the public by two of Norman's own books: User-Centered System Design: New Perspectives on Human-Computer Interaction in 1986 and, later, The Design of Everyday Things, one of the most important publications for User Experience Design and other related fields.

While the Design Thinking and UCD processes are almost identical, the choice falls on the first because it divides the design phase into the ideation and prototyping stages and is, in general, more explicit in the definition of the boundaries of each stage. Nevertheless, one of the essential principles of UCD, user participation, is considered in the design of the research activities. Users are involved in most of the activities of the research, with different methodologies and different degrees of participation, in order to investigate their needs, thoughts, expectations and requirements, and to involve them in the frontline of the design process.

5.3 - Research roadmap

The research methodology planned follows the five stages of the Design Thinking model described, in sequential order, with the addition of a preliminary phase. A complete mapping considering the complexity of relationships and the timing of the activities is drawn at the end.

In the preliminary stage, desk research has been carried out with the collection and review of academic literature and other sources available online. Part of this knowledge is reported in the first part of this thesis (from chapter 1 to chapter 3) as the essential information needed to introduce and understand the topics and the field of research. The entirety of it provides the necessary ground and foundation for the set-up and development of the following research, which could thus be approached with the needed awareness of the topics at stake, a good understanding of the previous research and the identification of a research gap to investigate.

In the Empathise stage, a survey is used to collect quantitative and qualitative information on a large scale. The desired output of this activity is twofold: quantitative data helped to define a set of user profiles, while qualitative information nourished a "Mental Model Diagram" (Kalbach, 2016). User profiles are used to categorise information at first, and then to spot a representative of each profile to be used as a "user persona".

Personas are among the interaction design tools (Saffer, 2010) and Design Thinking tools (Friis Dam and Siang Teo, 2020) used to summarise the findings and insights from user research into fictional characters, in order to design for their needs and behaviours. In this specific case, instead of fictional characters, real people are selected as the best match with the quantitative data of each specific profile.

The Mental Model Diagram (Kalbach, 2016), instead, is used to map the mental models of the users and as a tool for comparison. The term "mental model" originates from psychology and indicates a subjective representation of how something in the world works; it is a cognitive construct to frame reality, a perception of how something works, not how it actually works. Also, the mental models of the user and of the designer do not match (Norman, 2013). Mental Model Diagrams can organise the thoughts, emotions and guiding principles of the users. What's more, this tool can be used as an alignment diagram (Kalbach, 2016) to align product features to the mental models that each specific feature supports. Mapping mental models is useful to design without personal assumptions, while aligning product features allows spotting opportunities in unsupported models to design new functionalities, or misaligned models to think about redesigning a part of the product.

For the Define phase, the goal is to draw up a reasoned index of elements of interaction related to the establishment of dialogue (see 3.3) with recommender systems. The sources for this census of elements are, first of all, the literature from the preliminary stage; second, the features suggested by users in the survey; and third, a selection of case studies that are analysed to identify successful solutions and design patterns, and to evaluate usability.

At the Ideation stage, user participation has its most robust instance. The selected representative of each user profile is interviewed. The interview investigates the topics at stake, like trust and its role in the interviewee's professional and personal life, or the relationship with technology, in particular with recommendations. This first part aims at collecting qualitative information and at discussing, at a deeper level compared to the survey, the needs and expectations of the users. The interview is not only a user research tool to integrate the comprehension of the mental models but also a tool for CoDesign. It involves users in the ideation phase of the design process by discussing possible solutions with them. It asks them to come up with ideas to satisfy their needs, ideas that could become part of the "elements of interaction" index or guide the prototyping phase.

Since prototyping is not intended for actual development in this research but only to collect insights and verify the efficacy of specific design patterns, the strategy adopted is "rapid prototyping". Rapid prototyping focuses on the deployment of prototypes at high speed, without getting stuck on details or structured solutions. It aims at collecting feedback and iterating as soon as possible; for this reason, it is an excellent strategy to follow in order to collect insights for the research, in alignment with the Design Thinking iterative process.

In order to test the prototypes and collect insights, a focus group of users is organised to discuss the prototypes all together. This activity supports discussion among the users about the topic at stake. The participants selected are representatives of the user profiles involved during the interviews. The focus group is held online, to be able to elicit discussions at speed, following the same rhythm as prototyping, without the typical delays of organising a meeting. This practice also permits a higher number of iterations with a lower effort for the participants.

As described earlier (see 5.2), the Design Thinking process is not linear. Some of the stages just described happen in parallel, they are iterated in the light of new findings, and they have different relationships with each other. The map on the next page (Fig. 17) shows a more realistic roadmap of the sequence and interdependencies of the activities described, and of the data, information or knowledge expected from each of those activities.

Fig. 17 - The roadmap of activities conducted during the research.

At the end of the process, in line with research practices, a synthesis stage is considered, in order to collect, organise and synthesise all the research findings into an artefact able to deliver this ensemble of information in a useful way for further research.

5.4 - Address research questions

The following diagram (Fig. 18) shows how each stage of the process contributes to the progress of the research by addressing one or more of the experimental questions, to draw conclusions about the research questions in the synthesis phase.

Fig. 18 - How each phase of the process addresses an advancement towards the research questions.

6 - Building trust. Going through the process

6.1 - Analyse the state of the art

6.2 - Understand users’ mental models

6.3 - Define the elements of interaction

6.4 - CoDesign with the users

6.5 - Prototype, test, iterate

Going along the process is the best way to explore and show all the insights of the research, explaining design choices and the relationship between each phase. In the end, the results will validate or contradict the hypothesis, and conclusions can be drawn.

6.1 - Analyse the state of the art

As mentioned in chapter five (see 5.3), desk research and literature review are a fundamental starting point for all the following activities, and they become a crucial resource when Design is used to do Research, for the generation of knowledge (see 4.1).

The most relevant information from this preliminary activity, which spans in parallel throughout all the subsequent stages in a continuous inspiring/informing process, is summarised in the first three chapters, to provide the basic knowledge needed to understand the motivation, the medium and the goals of the thesis.

The topics explained in chapter 1 are essential to understand the motivations for this research. They give an understanding of two opposite dangerous behaviours arising from the relationship between technology and trust. On one side, the risks of ignoring or underestimating the influence that such technologies can have, and how negative they can be (see 1.1).

On the other side, the risk of missing opportunities because of a techno-paranoia that undermines technology acceptance and produces a state of stress, or even fear, in people's lives (see 1.2).

In the end, this chapter is useful to understand that trust issues are at the base of the problems that motivate the thesis, and it explains why "trust" is the pursued value. It also explains how "awareness" can be the right weapon to tackle these issues (see 1.3); later, "dialogue" is explained as a proper implementation of reciprocal awareness between users and recommender systems (see 3.1).

Chapter 2 focuses on the medium, on the technology considered to conduct the research: recommender systems. The chapter explains why (see 2.1) and how (see 2.2) this technology has been developed. Recommender systems are described through their differences, their applications and the way they work (see 2.3), and even how they will work in the future (see 2.4). It is essential to understand the technology in all its parts, to have a clear picture of its opportunities and weaknesses, to know how to handle it, what is possible and how it could improve in the future. Beyond the technical features, in the end, the specific ethical issues raised by recommender systems are also discussed, in order to consider the consequences of making some choices over others when designing with this particular technology (see 2.5).

Chapter 3 introduces the themes of Human-Computer Interaction: the topics investigated and researched by the Interaction Design community and the support it brings to the discussion and development of recommender systems. In particular, the main concepts that drive the design goals of the research are positioned inside the more complex discourse of recommender system interaction and user-centred evaluation (see 3.1), to understand what has already been done and what the contribution of this research would be. The rest of the chapter focuses on these concepts in order to describe, discuss and investigate them one by one: to understand the value of "trust" (see 3.2) for the evaluation of recommender systems; to integrate the efficacy of "awareness" discussed in the first chapter (see 1.3); to introduce and explain the new concept of "dialogue" (see 3.3), the main contribution of the thesis; and to define the two representatives in the establishment of dialogue: user "control" (see 3.4) and system "transparency" (see 3.5).

Obviously, the literature included in these three chapters is only a part of the literature consulted. The majority of the concepts that are only briefly mentioned, and never explained because they do not contribute to the main discussion, have been explored in order to exclude them with propriety: all of these are not included, but they were considered during the research. For example, among all the publications and online services consulted, a considerable number of case studies appear. Only a small, significant part of them is discussed later (see 6.3). However, most of them affected the draft of the index of elements of interaction (see 6.3), and some of them are used as examples during the survey (see 6.2) and the interviews (see 6.4), and also as inspiration for prototyping (see 6.5). For academic research, literature review and desk research assume a foundational role of ground knowledge for all the following research activities.


6.2 - Understand users’ mental models

The survey investigates users' differences, thoughts, experiences and worries in order to empathise with them. The objectives of the survey are to draft user profiles based on relevant factors and to map out mental models about technology and recommender systems. The survey is implemented with Google Forms, a free tool for creating surveys offered by Google. It is provided in two different languages (Italian and English) and published online through Facebook and WhatsApp contacts.

Fig. 19 - The first section of the survey: language selection.

The survey has several sections with precise goals:

- The landing section is the language preference, which leads to an exact copy of the survey in the chosen language.

- An informative section welcomes participants, explaining the reasons for the survey. It declares the usage of data and asks for a privacy agreement. It then reassures participants about how much time they will need to complete the task, and stimulates participation with the promise of a small reward as an incentive for finishing the entire questionnaire.

- The first of the four main sections is a standard demographics questionnaire, used to understand if any personal or social characteristics have any recurrent influence on the answers in the rest of the survey, and also to be sure that participants represent a sufficient variety of the population. The information collected covers age, gender, location, education and occupation.

- The second main section is a questionnaire about technology and privacy habits. This section aims at completing the profiling of participants together with demographics, adding characteristics relevant to the research. It also starts to investigate mental models about trust in technology. The two main characteristics used for profiling are presented as scales with values from 1 to 5 and investigate the "technology savviness" and "privacy care" of the participants. The values of these two characteristics build up a 3x3 chart (values 1,2 and 4,5 grouped as "low" and "high") with user profiles based on the intersections of their values. At the end of this and the next section, it is possible to interrupt the task, sending only the answers collected up to that point.

- The third section starts with a brief introduction about what the term "recommender systems" refers to, so that everyone is aware of the topic of the questions. This section, about recommender systems and mental models, investigates how people perceive the functions and processes in action during the use of a recommender system. The main stages investigated with open-ended questions are the "input" phase (how do you think these systems learn what you like?), the "process" phase (how do you think these systems choose which item to show you based on your interests?) and how users can "control" and influence the recommender actively (do you know, or can you imagine, some ways to control what these systems "think" about you?). Levels of trust and the need of control are also investigated to understand intentions towards recommender systems. Participants can stop and send their answers at this point if they want.

- The last section is made of only one question, investigating perceived problems, risks and worries; it contributes to compiling mental models while bringing out flaws of the technology and opportunities for improvement.

- Only participants that went through the whole questionnaire can select a reward at this point. Before they send their answers, participants are asked if they are willing to participate in further activities of the research, to collect volunteers for the interviews (see 6.4).


Fig. 20 - The flow-chart of the sections of the survey.

The survey received 252 answers. Of these, 237 went through the entirety of the survey, 4 stopped before the last section and 11 before the recommender system sections.

The demographics of the sample have enough variety, with the only exception of location: 89.3% (225) of participants answered in Italian, and 51.9% (14) of the remaining participants that answered in English are Italians. Gender has a balanced representation with 52.4% (132) females, 46% (116) males and only 1.6% (4) who would rather not specify. Participants below 19 years old are only 2.4% (6) of the sample because of the exclusion of minors. The rest of the age groups are well represented, with a majority of "millennials" between 19 and 35 years old, followed by "baby boomers" between 36 and 55 years old. The "over 55" are few but still considerable at 6.7% (17).


Fig. 21 - Gender of the sample: a) Female (132), b) Male (116), c) Not say (4)

Fig. 22 - Age of the sample: a) <19 (6), b) 19-25 (77), c) 26-35 (82), d) 36-45 (25), e) 46-55 (45), f) >55 (17)

The level of education peaks at Bachelor's and goes down at both extremes, with 2% (5) of PhDs and 5.7% (13) of people with less than a high school diploma. Occupation has many different answers, but it is important to highlight some majorities: students are almost one-third of the participants at 29.8% (75), employees 18.3% (46) and designers 8.7% (22). Smaller groups that cover between 5% and 2% each are, from the biggest to the smallest: freelancers (12), unemployed (11), engineers (8), teachers (7), retired (6) and entrepreneurs (5). The remaining 23.8% (60) of people are miscellaneous: lawyers, nurses, medics, plumbers, consultants, developers, artists and others.

Fig. 23 - Education: a) Less than high school (13), b) High school (80), c) Bachelor (81), d) Master (67), e) PhD (5)

Fig. 24 - Occupation: a) Student (75), b) Employee (46), c) Designer (22), d) Freelance (12), e) Unemployed (11), f) Engineer (8), g) Teacher (7), h) Retired (6), i) Entrepreneur (5), j) Other (60)

For what concerns technology use, 95% (239) of participants use digital services that operate a recommender system at least once a day, and 76.6% (193) even more than that. A vast majority of participants, 85.7% (216), claims to be interested in understanding how these technologies work, while only 11.9% (30) say they just care that they do what they should do. More than half of the participants, 58.3% (147), say that they try to guess how these services work through their use, while the remaining 27.4% (69) of the interested try to inform themselves actively. From these statistics, we can assume a diffused need for transparency that, in the majority of cases, should pass through the design of the services rather than through documentation. Another interesting couple of questions reveals a lot about the feelings of users towards technology. While 71% (179) felt worried about being controlled or manipulated by a digital service, only 46.5% (115) claim that they would give up a good service to avoid giving away their information. This finding underlines a shared sense of weakness concerning such services and a lack of control or transparency. The big gap of 24.5% (64) between those who are worried and those who are willing to give up a good service can have different interpretations: they could feel they do not have a choice, they could lack awareness, or they could just not care. More neutrally, the balance between the risks perceived and the value created by services still weighs in favour of the latter, but the room for improvement in trust and confidence is undeniable.

Fig. 25 - Frequency of use of services with recommender systems: a) Once a month (2), b) Once a week (11), c) Once a day (46), d) More than once a day (193)

Fig. 26 - Interest in information about technologies: a) Yes, guessing by usage (147), b) Yes, actively informing (69), c) No (30), d) Other (6)

Fig. 27 - Worried of manipulation: a) Yes (179), b) No (73)

Fig. 28 - Would leave a good service for privacy: a) Yes (115), b) No (137)



Two questions, among others, are intended not only to investigate people's habits with technology and privacy but also to profile them into different "user profiles" on the basis of their "technology savviness" and their "privacy care".

From the answers to the question "How would you judge your ability in the use of digital technologies?" participants have been divided, based on their technology savviness, into three categories:

- Novices: 17% (43)
- Experienced: 27% (68)
- Experts: 56% (141)

From the answers to the question "How much do you actively care to protect your privacy?" participants have been divided, based on their privacy care, into three categories:

- Careless: 31% (79)
- Aware: 39% (98)
- Careful: 30% (75)

By crossing the two categorisations, nine distinct "user profiles" have been identified (Fig. 29) for the user research, and the rest of the profiling answers have been assigned to the specific clusters to describe them.


Fig. 29 - The chart of user profiles and the distribution of participants.

Fig. 30 - Some of the questions about technology used for profiling.


The resulting user profiles and their characteristics are:

Careless Novice 5.5% (14)

Fig. 31 - Careless Novices’ gender: a) Female (9), b) Male (5), c) Not say (0)
Fig. 32 - Careless Novices’ age: a) <19 (0), b) 19-25 (1), c) 26-35 (1), d) 36-45 (1), e) 46-55 (5), f) >55 (3)
Fig. 33 - Careless Novices’ education: a) Less than high school (2), b) High school (6), c) Bachelor (5), d) Master (0), e) PhD (1)
Fig. 34 - Careless Novices’ worried of manipulation: a) Yes (7), b) No (7)
Fig. 35 - Careless Novices’ privacy knowledge: a) None (4), b) Low (7), c) Medium (2), d) High (0), e) Expert (0)
Fig. 36 - Careless Novices’ trust in technology: a) None (0), b) Low (3), c) Medium (9), d) High (2), e) Blind (0)
Fig. 37 - Careless Novices’ trust in recommendations: a) None (2), b) Low (5), c) Medium (3), d) High (2), e) Blind (1)
Fig. 38 - Careless Novices’ need of control: a) Passive (5), b) Low (2), c) Neutral (4), d) High (1), e) Active (2)


Careless Experienced 12% (30)

Fig. 39 - Careless Experienced gender: a) Female (16), b) Male (14), c) Not say (0)
Fig. 40 - Careless Experienced age: a) <19 (0), b) 19-25 (7), c) 26-35 (12), d) 36-45 (3), e) 46-55 (5), f) >55 (3)
Fig. 41 - Careless Experienced education: a) Less than high school (0), b) High school (14), c) Bachelor (10), d) Master (6), e) PhD (0)
Fig. 42 - Careless Experienced worried of manipulation: a) Yes (18), b) No (12)
Fig. 43 - Careless Experienced privacy knowledge: a) None (10), b) Low (17), c) Medium (2), d) High (1), e) Expert (0)
Fig. 44 - Careless Experienced trust in technology: a) None (0), b) Low (2), c) Medium (17), d) High (9), e) Blind (2)
Fig. 45 - Careless Experienced trust in recommendations: a) None (2), b) Low (9), c) Medium (12), d) High (6), e) Blind (0)
Fig. 46 - Careless Experienced need of control: a) Passive (3), b) Low (8), c) Neutral (9), d) High (7), e) Active (2)


Careless Expert 14% (35)

Fig. 47 - Careless Experts’ gender: a) Female (16), b) Male (18), c) Not say (1)
Fig. 48 - Careless Experts’ age: a) <19 (2), b) 19-25 (11), c) 26-35 (15), d) 36-45 (2), e) 46-55 (1), f) >55 (0)
Fig. 49 - Careless Experts’ education: a) Less than high school (1), b) High school (8), c) Bachelor (12), d) Master (13), e) PhD (1)
Fig. 50 - Careless Experts’ worried of manipulation: a) Yes (21), b) No (14)
Fig. 51 - Careless Experts’ privacy knowledge: a) None (5), b) Low (13), c) Medium (15), d) High (2), e) Expert (0)
Fig. 52 - Careless Experts’ trust in technology: a) None (1), b) Low (0), c) Medium (7), d) High (22), e) Blind (5)
Fig. 53 - Careless Experts’ trust in recommendations: a) None (2), b) Low (9), c) Medium (13), d) High (10), e) Blind (0)
Fig. 54 - Careless Experts’ need of control: a) Passive (4), b) Low (5), c) Neutral (15), d) High (8), e) Active (2)


Aware Novice 6% (15)

Fig. 55 - Aware Novices’ gender: a) Female (12), b) Male (3), c) Not say (0)
Fig. 56 - Aware Novices’ age: a) <19 (1), b) 19-25 (3), c) 26-35 (0), d) 36-45 (4), e) 46-55 (4), f) >55 (3)
Fig. 57 - Aware Novices’ education: a) Less than high school (3), b) High school (6), c) Bachelor (2), d) Master (4), e) PhD (0)
Fig. 58 - Aware Novices’ worried of manipulation: a) Yes (12), b) No (3)
Fig. 59 - Aware Novices’ privacy knowledge: a) None (1), b) Low (7), c) Medium (5), d) High (2), e) Expert (0)
Fig. 60 - Aware Novices’ trust in technology: a) None (0), b) Low (6), c) Medium (3), d) High (6), e) Blind (0)
Fig. 61 - Aware Novices’ trust in recommendations: a) None (3), b) Low (6), c) Medium (4), d) High (1), e) Blind (0)
Fig. 62 - Aware Novices’ need of control: a) Passive (4), b) Low (3), c) Neutral (5), d) High (1), e) Active (1)


Aware Experienced 9% (23)

Fig. 63 - Aware Experienced gender: a) Female (16), b) Male (6), c) Not say (1)
Fig. 64 - Aware Experienced age: a) <19 (0), b) 19-25 (5), c) 26-35 (6), d) 36-45 (2), e) 46-55 (9), f) >55 (1)
Fig. 65 - Aware Experienced education: a) Less than high school (2), b) High school (9), c) Bachelor (8), d) Master (4), e) PhD (0)
Fig. 66 - Aware Experienced worried of manipulation: a) Yes (14), b) No (11)
Fig. 67 - Aware Experienced privacy knowledge: a) None (1), b) Low (9), c) Medium (11), d) High (2), e) Expert (0)
Fig. 68 - Aware Experienced trust in technology: a) None (0), b) Low (1), c) Medium (14), d) High (8), e) Blind (0)
Fig. 69 - Aware Experienced trust in recommendations: a) None (1), b) Low (9), c) Medium (7), d) High (4), e) Blind (0)
Fig. 70 - Aware Experienced need of control: a) Passive (0), b) Low (7), c) Neutral (9), d) High (4), e) Active (5)


Aware Expert 24% (60)

Fig. 71 - Aware Experts’ gender: a) Female (24), b) Male (36), c) Not say (0)
Fig. 72 - Aware Experts’ age: a) <19 (2), b) 19-25 (24), c) 26-35 (24), d) 36-45 (5), e) 46-55 (4), f) >55 (1)
Fig. 73 - Aware Experts’ education: a) Less than high school (2), b) High school (13), c) Bachelor (25), d) Master (18), e) PhD (2)
Fig. 74 - Aware Experts’ worried of manipulation: a) Yes (40), b) No (20)
Fig. 75 - Aware Experts’ privacy knowledge: a) None (1), b) Low (10), c) Medium (25), d) High (15), e) Expert (4)
Fig. 76 - Aware Experts’ trust in technology: a) None (1), b) Low (2), c) Medium (16), d) High (34), e) Blind (7)
Fig. 77 - Aware Experts’ trust in recommendations: a) None (0), b) Low (7), c) Medium (30), d) High (17), e) Blind (1)
Fig. 78 - Aware Experts’ need of control: a) Passive (2), b) Low (13), c) Neutral (24), d) High (10), e) Active (6)


Careful Novice 5.5% (14)

Fig. 79 - Careful Novices’ gender: a) Female (6), b) Male (8), c) Not say (0)
Fig. 80 - Careful Novices’ age: a) <19 (0), b) 19-25 (2), c) 26-35 (1), d) 36-45 (3), e) 46-55 (5), f) >55 (3)
Fig. 81 - Careful Novices’ education: a) Less than high school (0), b) High school (9), c) Bachelor (2), d) Master (3), e) PhD (0)
Fig. 82 - Careful Novices’ worried of manipulation: a) Yes (13), b) No (1)
Fig. 83 - Careful Novices’ privacy knowledge: a) None (4), b) Low (2), c) Medium (5), d) High (3), e) Expert (0)
Fig. 84 - Careful Novices’ trust in technology: a) None (1), b) Low (7), c) Medium (5), d) High (0), e) Blind (1)
Fig. 85 - Careful Novices’ trust in recommendations: a) None (4), b) Low (5), c) Medium (3), d) High (1), e) Blind (0)
Fig. 86 - Careful Novices’ need of control: a) Passive (3), b) Low (3), c) Neutral (0), d) High (3), e) Active (3)


Careful Experienced 6% (15)

Fig. 87 - Careful Experienced gender: a) Female (11), b) Male (4), c) Not say (0)
Fig. 88 - Careful Experienced age: a) <19 (0), b) 19-25 (3), c) 26-35 (1), d) 36-45 (2), e) 46-55 (7), f) >55 (2)
Fig. 89 - Careful Experienced education: a) Less than high school (2), b) High school (5), c) Bachelor (4), d) Master (2), e) PhD (0)
Fig. 90 - Careful Experienced worried of manipulation: a) Yes (14), b) No (1)
Fig. 91 - Careful Experienced privacy knowledge: a) None (1), b) Low (6), c) Medium (4), d) High (2), e) Expert (2)
Fig. 92 - Careful Experienced trust in technology: a) None (1), b) Low (2), c) Medium (4), d) High (3), e) Blind (0)
Fig. 93 - Careful Experienced trust in recommendations: a) None (1), b) Low (5), c) Medium (9), d) High (0), e) Blind (0)
Fig. 94 - Careful Experienced need of control: a) Passive (1), b) Low (1), c) Neutral (2), d) High (6), e) Active (5)


Careful Expert 18% (46)

Fig. 95 - Careful Experts’ gender: a) Female (19), b) Male (25), c) Not say (2)
Fig. 96 - Careful Experts’ age: a) <19 (1), b) 19-25 (16), c) 26-35 (20), d) 36-45 (3), e) 46-55 (5), f) >55 (1)
Fig. 97 - Careful Experts’ education: a) Less than high school (1), b) High school (9), c) Bachelor (18), d) Master (17), e) PhD (1)
Fig. 98 - Careful Experts’ worried of manipulation: a) Yes (40), b) No (5)
Fig. 99 - Careful Experts’ privacy knowledge: a) None (2), b) Low (4), c) Medium (11), d) High (22), e) Expert (7)
Fig. 100 - Careful Experts’ trust in technology: a) None (1), b) Low (5), c) Medium (22), d) High (17), e) Blind (1)
Fig. 101 - Careful Experts’ trust in recommendations: a) None (3), b) Low (5), c) Medium (29), d) High (6), e) Blind (0)
Fig. 102 - Careful Experts’ need of control: a) Passive (2), b) Low (3), c) Neutral (10), d) High (13), e) Active (15)


It is possible to make some assumptions by looking at how the answers to the same questions are distributed across the different profiles.

With a majority of females in total, a majority of males must be reported in all the "Expert" profiles. It is hard to make assumptions on this, considering how small the sample of users is and how little the relative weight of this data is, but it seems to align with some socio-cultural phenomena linked to gender bias, and the cause could reside in cultural, educational or self-esteem aspects that lead women to underestimate and men to overestimate themselves (Fisher and Margolis, 2003; Hill et al., 2010).

Another demographic aspect that shifts in a quite noticeable way from novices to experts is age. Technology naturally relates to novelty and adaptation, and it is to be expected that, personal inclinations apart, younger generations have an easier time understanding and adopting new technologies. Similar behaviour is noticeable with education, but it is less evident due to the majority of students and the influence of personal inclination and professional field, which mitigate this effect.

The relationship between the concerns about control and manipulation mentioned before and the level of "privacy care" is evident. People with higher care for their privacy tend to be worried about control and manipulation more easily. It would be interesting to investigate which one is influencing the other. Probably they are mutually affected, as a careful person will also be more informed (as confirmed by the trend of the question about "privacy knowledge"), and this will lead to a higher awareness of risks and the adoption of more protection, closing the loop. This mechanism supports the idea that raising awareness could naturally lead to safer behaviours by users, but it is also essential to understand how to break this reinforcing loop before it becomes an excessive source of anxiety.

The need for control expressed by the users is one of the most interesting pieces of data. While the difference in trust values seems to be connected only to "technology savviness", the need for control is the only value that depends on both factors and grows together with both "technology savviness" and "privacy care". While it seems obvious that people who feel more confident with technology are prone to ask for a higher level of control, it is much less obvious that even inexperienced users need some control that is accessible to their understanding in order to address their worries, and it is vital to provide the right tools for each level of experience.

The remaining answers, from the open-ended questions cited earlier but not discussed yet, are part of a more structured analysis made with a Mental Model Diagram (see 5.3). In order to compile the map, the first task is to transform all the statements from participants into a neutral, impersonal form and to divide structured answers into smaller chunks of information containing only a single concept. These are the elements later mapped in the diagram. The ones that refer to the same concept are merged into a single element with a visual clue about how frequent they are, to show which of them are more shared concepts than others. What's more, the concepts are labelled with the user profile(s) to which the original statement belonged. The second task is to cluster these concepts into so-called "towers", grouping them by topic and naming each tower after this shared topic. Then a second clustering is operated by grouping towers that refer to the same "mental space": mental models that refer to the same aspect, stage or situation of the service, activity, or whatever is under investigation. In this case, "mental spaces" match the opinions or processes questioned in the survey:

- Reasons to distrust technology in general
- Perceived risks about the use of technology in general
- How recommender systems collect data about a user
- How recommender systems match an item with a user profile
- How to control the personalisation of a recommender system
- Perceived risks about the use of recommender systems

At this point, once the Mental Model Diagram is complete, thanks to the labels, a specific diagram is made for each of the nine user profiles, by isolating all and only the concepts that include the specific user profile label. The result is ten different diagrams: a general one and nine profile-specific ones. These diagrams are available in Appendix B.
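Purely as an illustrative sketch of the mapping procedure just described (the data structures and function below are assumptions made for the example, not the actual tool used in the research), the diagram and its profile-specific derivation could be modelled like this:

```python
# Illustrative sketch of the Mental Model Diagram construction:
# atomic concepts carry a frequency (how many merged statements
# expressed them) and the labels of the profiles they came from;
# concepts are grouped into "towers" by topic, towers into
# "mental spaces". All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Concept:
    text: str                      # neutral, impersonal restatement
    frequency: int = 1             # how many statements were merged
    profiles: set = field(default_factory=set)  # e.g. {"Careful Expert"}

@dataclass
class Tower:
    topic: str
    concepts: list

@dataclass
class MentalSpace:                 # e.g. "How RS collect data about a user"
    name: str
    towers: list

def profile_diagram(spaces: list, profile: str) -> list:
    """Derive a profile-specific diagram by keeping only the
    concepts labelled with that profile."""
    result = []
    for space in spaces:
        towers = [Tower(t.topic, [c for c in t.concepts if profile in c.profiles])
                  for t in space.towers]
        towers = [t for t in towers if t.concepts]
        if towers:
            result.append(MentalSpace(space.name, towers))
    return result
```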


As a tool for empathy and analysis, the observation of this diagram is already a good source of insights, and more of them will come with the alignment of features and elements of interaction (see 6.3). To begin with, the mental space dedicated to controlling recommender systems is very weak, with a vast majority of blank answers or "I don't know" statements. What's more, a lot of the mental models on how to control the systems rely on the idea that a user should try to trick the system: hide, cheat or fake their preferences to disguise themselves or to confuse the system. Only a few focused on actions that can drive the interaction with the recommender system to their advantage and towards a pleasant experience. The tendency to be defensive and try to trick the system aligns with the diffused presence of "manipulation" as a perceived risk of this kind of technology.

A positive insight is that the issue of the "filter bubble" (see 2.5), even if with very different nuances, is perceived by a remarkable number of participants. This evidence, though, is not favourable from the recommender systems' perspective, because it means that they are still not good enough to provide the right amount of diversity and serendipity of content; but it is positive that users are well aware of, and critical about, this issue.

Privacy issues are very frequent in the answers, but there is the possibility that this fact is influenced by the several questions about privacy present in the survey, which brought this topic to the attention of all the participants.

Another critical insight concerns the mental model of how a recommender system works. Answers about the "input" phase (how the recommender collects data and profiles a user) were in general more accurate than the ones about how the recommender selects content for those profiles but, broadly speaking, both questions received confused answers: the two stages of the process of recommendation are not perceived, probably because of the black-box nature of this technology (see chapter 2). This not only confirms the need for transparency in order to obtain greater confidence from users, but also highlights the fact that while the value required by the system (personal data) is well perceived, the same is not true for the value returned by the system (personalisation). The gap could lead to the perception of an unfair relationship where the user feels like an exploited resource, instead of evaluating an exchange of value.

As for the differences among user profiles in the mental models of these same processes, it is clear that technology savviness has a significant role. Participants with less experience ignore the majority of input mechanisms, in addition to not perceiving personalisation. Even if they could be less receptive to a higher level of control, or even be confused by it, they could benefit from a higher level of transparency that, through awareness, could lead to greater confidence towards more complex features for control. The challenge resides in the fact that inexperienced people often are not interested in technology, and they could have an adverse reaction to information overload. Balancing control and transparency for Novices is hard.

Mental Model Diagrams are very big and complex maps, full of information, that hold plenty of insights waiting only to be noticed. Besides the ones mentioned, more insights will emerge when a new perspective is encouraged by a specific task or a specific design goal. Even better, as a tool for sharing knowledge, this map can become a perfect tool for design teams and be expanded with further user research, if needed.


6.3 - Define the elements of interaction

After the collection of ground knowledge, it is crucial to consider different applications, to observe if this knowledge is already applied and how it has been translated into valuable solutions for people. This analysis aims to scan different kinds of experiences in order to spot elements of interaction related to the key concepts, gather insights, benchmark solutions and evaluate the give-and-take of different practices in different environments.

In order to achieve this, every case study is described on a templated card structured as follows:

Category
The case studies will be "Academic", in case the experience has been developed during academic research, or "Business", if they are part of an on-market product or service.


Images
One or more images able to show the aspect of the product and the main interactions evaluated.

Details
A list of detailed information to contextualise the case under examination, such as the name, authorship, year of release, filtering algorithms involved, field of application or the objects of recommendation.

Description
An overview of the case: how it works, its structure and how it is experienced, to provide a complete description and context for the analysis.

Key factors
Analysis of the 6 main factors of evaluation (see 6.2.1): Transparency, Justification, Diversity, Controllability, Context and Cold Start, as defined by He et al. in a similar study (2016).

Nielsen Usability Heuristics
Nielsen heuristics (see 6.2.2) will be applied to the specific subject and used as parameters to evaluate the usability of the case, with both a written analysis and a score for each heuristic.

Insights
A critical analysis of all the elements evaluated, in order to highlight the most valuable insights that emerged.

For the analysis and, more importantly, for the index of elements of interaction, a conceptual framework of recommender systems (Fig. 103) is considered for mapping the main components of this kind of system.

The framework is the same one used by He et al. (2016) for their analysis, and its components are:

- User data: all the data and information collected about the user, which contributes to the definition of the user profile. This data can include demographic information, preferences, third-party information (from cookies or integrations), screen time, interactions and the user's online behaviour in general.

- Context: information about the context of the user. Date and time, the moment of the day, season, location, mood, the device used, the platform. This and other information constitutes the environment in which the user is interacting with the service.

- Engine: the recommendation engine is the brain that collects input (User data and Context) and, through different processes (see 2.3), generates Medium information that is then used together with the input to provide Recommendations. This component is hidden from the mental model of a user and represents the so-called "black-box".

- Medium: information generated by the recommender Engine that, when correctly visualised, is the best way a user has to guess the rationale that runs the recommender system. For example, Medium information can be a list of similar users in the case of collaborative filtering, or a list of possible user interests or a structured system of metadata for content-based filtering, among others.

- Recommendation: recommendations are the output of the system; a list of items that are supposed to be a good match for the user.

Fig. 103 - The framework describing a recommender system (He et al., 2016)
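To make the framework more concrete, the sketch below renders its components as simple types; the field names are assumptions chosen for illustration, and the Engine is left deliberately opaque to mirror the "black-box" described above:

```python
# Illustrative types for the conceptual framework (He et al., 2016).
# Field names are assumptions made for the sake of the example.
from dataclasses import dataclass, field

@dataclass
class UserData:        # explicit and implicit information about the user
    demographics: dict = field(default_factory=dict)
    preferences: dict = field(default_factory=dict)
    behaviour_log: list = field(default_factory=list)

@dataclass
class Context:         # the environment in which the interaction happens
    time: str = ""
    location: str = ""
    device: str = ""

@dataclass
class Medium:          # engine-generated, user-inspectable information
    similar_users: list = field(default_factory=list)
    inferred_interests: list = field(default_factory=list)

@dataclass
class Recommendation:  # the output: items supposed to match the user
    items: list = field(default_factory=list)

def engine(user: UserData, context: Context) -> tuple:
    """The opaque "black-box": consumes User data and Context, produces
    Medium information and Recommendations. Internals stay hidden."""
    raise NotImplementedError  # intentionally unspecified
```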

Interaction can happen at each component of the system, indirectly influencing the other components, except for the Engine, which is usually accessible only by developers. For this reason, as also mentioned when talking about "accuracy" and "diversity" (see 3.1), the Engine component will be excluded from the framework, to align with users' mental models and focus on user experience design.

As anticipated, the six key factors for the analysis will be Transparency, Justification, Diversity, Controllability, Context and Cold Start. He et al. (2016) define them in a survey of 24 interactive recommender systems. In order to also include a consideration of the model of transparency, control and dialogue, these six factors will be differentiated into system-related (Transparency, Justification, Diversity) and user-related (Controllability, Context, Cold Start).

It must be clear that this differentiation is a personal contribution to the analysis, and it does not come from the reference. All of these factors involve both actors in their definitions; the differentiation is not intended to exclude either of them from the evaluation.

The differentiation only considers the direction of information: from the system to the user, or vice versa. It is useful to better understand the flows of information in the dialogue between user and system.

Transparency
The ability to explain or reveal to the end-user the inner logic of the system. It is information related to the process of getting recommendations.

Justification
The ability to make users understand the reason why they are getting those recommendations. It is information related to the content of the recommendation.

Diversity
The ability to provide a broader spectrum of recommendations. It is essential to recommend content that would interest the user but is different from what they already consumed, also to avoid "bubbles".

Controllability
The ability to involve the user in the process and allow them to actively add input or feedback to the recommendation process at any point or level.

Context
The ability to incorporate contextual information into the process, such as time, space, the number of users or their emotional state.

Warm Start
The ability to address the problem of "Cold Start": dealing with the lack of information about newcomers.
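As a hedged sketch of how each templated analysis card could be encoded for comparison across cases (the field names are assumptions; the actual cards are collected in Appendix A), the six factors and the heuristic scores might be held together like this:

```python
# Hypothetical encoding of the case-study analysis card: the six key
# factors, split into system-related and user-related, plus a score
# for each of Nielsen's ten heuristics. Names are illustrative only.
from dataclasses import dataclass, field

SYSTEM_FACTORS = ("transparency", "justification", "diversity")
USER_FACTORS = ("controllability", "context", "warm_start")

@dataclass
class CaseCard:
    name: str
    category: str                                   # "Academic" or "Business"
    details: dict = field(default_factory=dict)     # authorship, year, etc.
    factors: dict = field(default_factory=dict)     # factor -> written notes
    heuristics: dict = field(default_factory=dict)  # heuristic number -> score

    def factor_direction(self, factor: str) -> str:
        """Direction of the information flow a factor describes."""
        if factor in SYSTEM_FACTORS:
            return "system -> user"
        if factor in USER_FACTORS:
            return "user -> system"
        raise ValueError(f"unknown factor: {factor}")
```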


Nielsen's usability heuristics (Nielsen, 1994a) are ten renowned principles to follow for the evaluation of interactions. They are broad rules of thumb, applicable in a general manner. They will be used to evaluate the quality of interaction of the recommender systems and to gain insights on how these general principles translate for the specific subject. They are:

“#1: Visibility of system status
The system should always keep users informed about what is going on, through appropriate feedback within a reasonable time.

#2: Match between system and the real world
The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

#3: User control and freedom
Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

#4: Consistency and standards
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

#5: Error prevention
Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

#6: Recognition rather than recall
Minimise the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for the use of the system should be visible or easily retrievable whenever appropriate.

#7: Flexibility and efficiency of use
Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

#8: Aesthetic and minimalist design
Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

#9: Help users recognise, diagnose, and recover from errors
Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

#10: Help and documentation
Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.”

(Nielsen, 1994b)

The analysis involves ten (six Academic and four Business) recommender system interfaces. The cards with the single cases are collected in Appendix A. The interfaces have been selected, among others, in order to cover different aspects of interactive interfaces and different balances of usability and key factors, and to touch on different issues.

Comparing "Academic" and "Business" cases, one thing that gets attention is a substantial advantage of "Academic" interfaces in achieving good levels of transparency. What is necessary to notice here, though, is not a special quality of the academic development process but rather a substantial imbalance in the complexity of the systems. Where research can focus on transparency goals and involve a circumscribed set of items (Parra and Brusilovsky, 2015) to test a simple system with the right amount of people, Business, on the other hand, has to deal with several different stakeholders, millions of daily users, a huge library of items and a stream of unmanageable data. This data is processed by very complex algorithms (Netflix, Inc, 2020), often involving machine-learning features that are not explainable even by the developers, because the machine develops its own language and properties to process recommendations (Valve Corporation, 2020). On the opposite side, there is an unbridgeable gap in design principles if we consider the effort and resources that businesses put into creating a strong brand, iconic products (Spotify AB, 2020) and securing user retention with effortless usability. On the other hand, academic research, which focused only on algorithm accuracy until a few years ago and is now gradually moving its interests towards user experience evaluations, has not attracted enough attention from design academics yet.

The need to involve design perspectives in the discussion emerges from several other insights arisen from this brief analysis. First of all, the issue of the target audience and how the interaction can be adapted. As emerged from the analysis of Steam interactive recommender (Valve Corpora-tion, 2020), the approach used to structure interactions can reflect the habits of the target user. In the particular case, a recommender of videogames, intended for players, invol-ves a more active experience like the one of a videogame. In contrast, a recommender of films (Netflix, Inc, 2020) is more proactive and reduce user effort to the minimum to achieve a more passive experience similar to the mindset of watching a movie, which brings to another important topic to discuss. Most of the modern recommender systems, are designed to provide personalisation seamlessly, being very proactive in suggesting something the user can appreciate. However, is the system trying to suggest the best content for the user at that moment or is it just trying to deliver anything that could satisfy them to maximise playtime? This proactivity comes in detriment of user control, and user awareness. As seen in a lot of the interactive recommenders in analysis, allow user control on the system and support a tinkering beha-viour is a perfect solution to stimulate critic thought about the behaviour of the system and in the end elicit transpa-rency (Bostandjiev et al., 2012; Parra and Brusilovsky, 2015; Valve Corporation, 2020; Verbert et al., 2013).

Another contradiction that is possible to notice, in parti-cular in Business cases where usability is better assessed, rises from the use of natural language. The extreme effort

General discussion

and insights

Page 67: Do you trust me? - POLITesi

119118

Part 2 - Research 6 - Building trust.

to speak the user language, and avoid any system-related terminology brings interfaces to stick only with justifica-tion, sometimes even incomplete, ambiguous or superficial. It would be interesting to investigate user models in order to find natural ways to communicate part of the process of recommendation, and use natural language to foster tran-sparency together with justification. This scenario is not impossible as two of the cases analysed are able to explain with quite a common vocabulary the basic ideas that drive the system, in a very synthetic, and effective documenta-tion (Symeonidis et al., 2009; Valve Corporation, 2020).
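To illustrate how such natural-language justification could be generated from the provenance of a recommendation, here is a minimal sketch in Python; the record fields and templates are hypothetical, not taken from any of the systems analysed:

# A minimal sketch of template-based, natural-language justification.
# The record fields (source, seed_item, shared_tags, similar_users,
# friend) are hypothetical; a real system would derive them from the
# recommender's actual provenance data.

def justify(rec: dict) -> str:
    """Render the provenance of a recommendation in plain language."""
    source = rec["source"]
    if source == "content":
        return (f"Recommended because you watched {rec['seed_item']} "
                f"and both share the themes: {', '.join(rec['shared_tags'])}.")
    if source == "collaborative":
        return (f"Recommended because {rec['similar_users']} people with "
                f"tastes similar to yours rated it highly.")
    if source == "friend":
        return f"Recommended because your friend {rec['friend']} added it to a list."
    return "Recommended among the most popular titles right now."


print(justify({
    "source": "content",
    "seed_item": "Blade Runner",
    "shared_tags": ["sci-fi", "dystopia"],
}))

The point of the sketch is that the same provenance data that drives the ranking can also drive the explanation, so transparency does not require exposing system-oriented terminology.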

Another interesting approach to discuss is the form of flexibility found during the analysis of MoviExplain (Symeonidis et al., 2009). The platform allows a deeper understanding and control of the recommendation through nested information, which avoids information overload while granting access to more skilled or more curious users. Even if this particular feature does not have an effective execution, it shows that flexibility of use can be achieved not only through alternatives but also through nesting. We find something very similar if we take an overall perspective on the interactions of Google AdsSettings and MyActivity together, as if they were part of the same experience (Alphabet, 2020).

Another interesting opportunity comes from the addition of social networking features, as seen in Spotify (Spotify AB, 2020). Most modern systems make use of collaborative filtering, but justification often refers only to content-based features or describes collaborative features ambiguously. The possibility of making explicit the presence of another person (be it a brand entity, an expert, an influencer, a friend or just another user) can unveil collaborative-filtering processes. It could also easily be mixed with other kinds of justification, as seen in Tagsplanation (Vig et al., 2009), giving a complete understanding of the recommendation and also explaining hybrid algorithms. Of course, as with any information added to the interface, there is the need to tackle the risk of information overload (see 2.1) and balance this kind of complex justification. What is more, if the goal is transparency and awareness, adding friction and increasing cognitive load can even be considered an advantage (Beleffi, 2019; Har-Paz, 2019). As emerged from the analysis of CoFeel (Y. Chen and Pu, 2012), though, introducing social dynamics into the system raises ethical issues related to the exposure of self and intimacy. Adding complexity while preserving efficacy is a hard task that demands attention and respect towards a variety of issues.

From the analysis of case studies, together with a plethora of other examples and prototypes from the literature review (Åman and Liikkanen, 2010; L. Chen and Pu, 2012; Dara et al., 2019; He et al., 2016; Herlocker et al., 2000; Kumar and Singh, 2019; Paul and Kundu, 2020; Pu and Chen, 2006; Ricci, 2015; Sun et al., 2019) and from online digital products and services, with some contributions also coming from the Mental Model Diagram, a list of "Elements of Interaction" has been collected. These are interaction patterns and features usable as elementary components for the design of a recommender system, its interface and its user experience.

In order to index these elements, the recommender system framework explained earlier has been merged with the "dialogue for trust" model described in chapter 3 (see 3.1) to create charts that organise the elements collected.

Mapping elements of interaction

Elements of interaction: User Data

Fig. 104: Elements of interaction chart for User Data

Transparency

Microinteractions: "Microinteractions are trigger-feedback pairs in which the trigger can be a user action or an alteration in the system's state; the feedback is a narrowly targeted response to the trigger and is communicated through small, highly contextual changes in the user interface" (Joyce, 2018). Microinteractions can show the user that a particular piece of information about them is being collected after an action or used to provide recommendations.

Profile: The user's personal profile can be shown, becoming a transparent, explicit display of all the information the system has collected about the user.

Privacy policy: As legal documentation, the privacy policy should contain all the types of information collected and the ways that information is used and collected. It is not an elegant or engaging solution, but it is a sure place for a user to find some transparency.

Justification: Personal information collection can be made explicit by integrating it into recommendation justification. Justification is a text or visualisation that explains to the user which source of information caused a particular recommendation.

Statistics: If the system returns information to the user in the form of statistics or data visualisations, the user can be aware of which information is collected about them and also gain some value from that information.

Control

Onboarding: An onboarding experience is a stage of interaction that occurs on first use and often, for personalised services, includes registration and some variable stages in which the user can provide information or set up preferences.

Privacy settings: These give the user control to decide upon the freedom of use of their own information.

Implicit feedback: Implicit feedback through navigation and internet behaviour is often collected by systems as input to profile users. If users are aware of this, they can change their behaviour accordingly to control which input they are giving.

Public data: Some services can use information collected by third parties through cookies or integrations. If users are aware of this, they can act accordingly.

Unified public profile: To have more control over their profile, a user could have a single service that manages their profile and information and distributes it to other services.

Dialogue

Revisioning: Revision features give the user the possibility to tell the system that a piece of information shown about them is wrong, and to correct it.

Editable profile: An editable profile can show the user all the information collected about them and let them edit it.

Self-disclosure: Self-disclosure features can provide the user with a free space to reveal something about themselves, giving a piece of information that they think could offer the system a better understanding of themselves and their complexity (see the sketch after this chart).
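As a minimal sketch of how the three dialogue elements above could coexist in a single, inspectable profile object, consider the following; all names and the structure are hypothetical, since the elements are defined here at the interaction level rather than the implementation level:

# A sketch of a user profile supporting transparency (show), revisioning
# (correct a wrong assumption) and self-disclosure (volunteer new facts).
# All field and method names are illustrative.

class UserProfile:
    def __init__(self):
        self.facts = {}        # attribute -> value inferred or disclosed
        self.provenance = {}   # attribute -> how the system learned it

    def infer(self, attribute, value, source):
        """Record something the system inferred (e.g. from implicit feedback)."""
        self.facts[attribute] = value
        self.provenance[attribute] = source

    def show(self):
        """Transparency: expose every stored fact with its provenance."""
        return [(a, v, self.provenance[a]) for a, v in self.facts.items()]

    def revise(self, attribute, corrected_value):
        """Revisioning: the user corrects a wrong inference."""
        self.facts[attribute] = corrected_value
        self.provenance[attribute] = "user correction"

    def disclose(self, attribute, value):
        """Self-disclosure: the user volunteers information."""
        self.facts[attribute] = value
        self.provenance[attribute] = "self-disclosure"


profile = UserProfile()
profile.infer("favourite_genre", "horror", "viewing history")
profile.revise("favourite_genre", "sci-fi")   # the inference was wrong
profile.disclose("mood", "relaxed")
print(profile.show())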

Elements of interaction: Context

Fig. 105: Elements of interaction chart for Context

Transparency

Justification: Contextual information collection can be made explicit by integrating it into recommendation justification. Justification is a text or visualisation that explains to the user which source of information caused a particular recommendation.

Location: Location information can be exposed, showing that the system considers it and may use it in some of its processes.

Time: Time information can be exposed, showing that the system considers it and may use it in some of its processes.

System status: Correct feedback on system status is the best way to keep the user aware of the contextual condition of the system they are interacting with.

Control

Mood: The user has features to express mood and provide short-term information about their temporary state.

User selection: If a service is shared, the possibility to select which user is interacting gives the system the ability to personalise the experience.

Device: Users can change their experience, and implicitly give information about their context, through the use of different devices.

Shared account: Users can have features to let the system know that it is in use by more than one person, in order to make the system aware of the presence of a group.

Dialogue

Natural language dialogue: The system can initiate, or provide a way to nudge the user to initiate, a conversation through which both can give and take information about the context.

User-initiated critique: The user could have the possibility to critique a recommendation by providing contextual information, while the system previews possible results as a reaction to that information.

Elements of interaction: Medium

Fig. 106: Elements of interaction chart for Medium

Transparency

Tags: The system shows the tags that define the item.

Profiling: The system tells the user which category of users they belong to, or how and where their unique profile is located among other users' profiles.

Categories: The system shows the categories to which the items belong.

Network: The system shows the social network of the user and their relationships with other users.

Metadata: The system shows the information and characteristics it has about the items, such as duration, popularity, rating, and others.

Justification: Medium information can be made explicit by integrating it into recommendation justification. Justification is a text or visualisation that explains to the user which source of information caused a particular recommendation.

Feature predicted rating: The system tells the user how much it thinks they will like a characteristic of the content.

Control

Explicit feedback: Users can control recommendations through the Medium by giving explicit feedback on an item, a category or a recommendation: leaving a review, rating content, liking or disliking, and others.

Implicit feedback: User behaviour such as screen time, content selection or content drop gives much implicit feedback to the system, which gathers insights and adapts to these actions. If users are aware of this, they can decide to act accordingly in order to control the output of the system.

Follows: By following other users, creators, sellers, influencers, categories and others, users can explicitly tell the system which kinds of items they want or are interested in.

Search: By searching for a specific item or category, users tell the system that they are interested in that item and similar ones.

Favourites: By adding content to their "favourites" lists or "wishlists", users explicitly tell the system what they like, so that it can recommend similar items.

Personal categorisation: Users could be allowed to create their own categorisation of content by making collections or adding personalised tags and labels to items.

Shared account: Two or more users could share the same account in order to blend their personalisation and influence each other's preferences.

Dialogue

Scrutability: Scrutability is the possibility for users to tell the system it is wrong about something and to suggest a correction to an assumption it made in the creation of a profile, a categorisation, the assignment of a characteristic, or others.

System-suggested critique: The system could suggest that the user trade off some characteristics in order to receive more recommendations, or to recommend items that the system considers a better fit for them. The user can then tune these characteristics to their preference to compromise with the suggestion (see the sketch after this chart).

Natural language dialogue: The system and the user can engage in a conversation to give and take information about preferences, characteristics of the items or the user profile.
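To make the critique elements more concrete, the following is a small, hypothetical sketch of a system-suggested critique over item features; the catalogue data and the trade-off rule are invented for illustration:

# A sketch of a system-suggested critique: if few items satisfy the
# user's constraints, the system proposes relaxing one constraint and
# previews how many more items that trade-off would unlock.

CATALOGUE = [
    {"title": "A", "duration": 90,  "rating": 4.5},
    {"title": "B", "duration": 150, "rating": 4.8},
    {"title": "C", "duration": 120, "rating": 3.9},
    {"title": "D", "duration": 140, "rating": 4.6},
]

def matches(item, max_duration, min_rating):
    return item["duration"] <= max_duration and item["rating"] >= min_rating

def suggest_critique(max_duration, min_rating):
    current = [i for i in CATALOGUE if matches(i, max_duration, min_rating)]
    relaxed = [i for i in CATALOGUE if matches(i, max_duration + 30, min_rating)]
    if len(relaxed) > len(current):
        gain = len(relaxed) - len(current)
        return (f"{len(current)} item(s) match. Accepting 30 minutes more "
                f"would unlock {gain} more highly rated item(s).")
    return "Your constraints already cover the best candidates."

print(suggest_critique(max_duration=120, min_rating=4.0))

The same mechanism, run in the opposite direction (the user states which feature they want to change and the system previews the results), corresponds to the user-initiated critique described above.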

Elements of interaction: Recommendation

Fig. 107: Elements of interaction chart for Recommendation

Transparency

Rationale: The system explains or shows the rationale behind the selection of recommendations.

Sources: The system shows which sources of information are responsible for the recommendation.

Documentation: Documentation, help, and F.A.Q.s explain how and why content is personalised and recommended.

Influencer: The system makes explicit that an influencer recommends content. The influencer can be an expert, such as a critic or a creator, or even a famous and recognised personality.

Justification: Justification is a text or visualisation that explains to the user which source of information caused a particular recommendation. Justification can make explicit the presence of personalised content and that an item or a set of items is tailored for the user.

Editorial: The system can tell the user that some content is part of an editorial project and so is directly recommended by the service, for example because it is original or proprietary content.

Control

Parental control: Users can filter content inappropriate for children, so that the system shows items suitable for kids.

Tune recommendation methods: The user can select the recommendation methods involved, or tune their weights or parameters, to guide the recommendation as they think best fits their needs (see the sketch after this chart).

Suggest a recommendation rationale: The user can nudge the system to receive recommendations based only on a specific rationale.

Shared account: Two or more users could share the same account to influence each other's recommendations.

Select sources: The user can select a set of items, other users, tags, categories or others as the pool of sources for a specific recommendation.

Dialogue

Exploration: The user and the system engage in an iterative process of selection and recommendation to explore possibilities and arrive at the right item step by step, by similarity.

Natural language dialogue: The system and the user can engage in a conversation to give and take information about needs, opportunities and the evaluation of recommendations.

User-initiated critique: The user suggests that a recommendation could be improved, and why, and the system provides alternatives similar to the previous one but satisfying the observation made by the user.

System-suggested critique: The system suggests that, by trading off some features, better items can be recommended. The user is free to select which features they are willing to trade off and receive the new recommendation.
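As an illustration of the "Tune recommendation methods" element, a hybrid recommender could expose its method weights directly to the user. The following minimal sketch assumes invented per-method scores and is not drawn from any analysed system:

# Sketch of user-tunable method weights in a hybrid recommender.
# Each method returns a score per item; the final ranking is the
# weighted sum, with the weights exposed to (and editable by) the user.

SCORES = {
    # item: (content-based score, collaborative score, popularity score)
    "Film X": (0.9, 0.2, 0.5),
    "Film Y": (0.3, 0.8, 0.9),
    "Film Z": (0.6, 0.6, 0.1),
}

def rank(weights):
    """weights: dict with keys 'content', 'collaborative', 'popularity'."""
    def blended(scores):
        c, cf, p = scores
        return (weights["content"] * c
                + weights["collaborative"] * cf
                + weights["popularity"] * p)
    return sorted(SCORES, key=lambda item: blended(SCORES[item]), reverse=True)

# A user who distrusts crowd signals can shift weight to content features:
print(rank({"content": 0.7, "collaborative": 0.2, "popularity": 0.1}))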

Mental models alignment

Fig. 108: A portion of the mental model diagram showing the alignment of elements of interaction

Once the index of elements of interaction is ready, they can be aligned with the mental models in the mental model diagram (Fig. 108), to visualise correlations and use the map as a tool for further analysis and insights.

The purpose of this mapping is to make it usable for brainstorming ideas, evaluating certain aspects of an existing interface, or putting together all the elements for a prototype.

Listing random insights would serve no purpose at this stage. The list is rather a tool to explore when there is a specific issue to solve or a concept to investigate: a map of user beliefs and design solutions to roam in order to find insights for a specific goal. Not only are the quantity and variety of insights hard to manage without a purpose, but new and more relevant insights arise when the map is put into perspective by a specific context. This map will come into play later in the research, when brainstorming ideas for prototyping.

The next steps of the research intend to validate the efficacy of these elements as guidelines for designing trustworthy experiences with recommender systems and to understand which of them carry the most significant opportunities for future implementation in services.

6.4 - CoDesign with the users

Participants' profiles

For the ideation phase, the goal is to involve the participation of users, to investigate the topics better and brainstorm ideas for the next phases. First, it is necessary to identify the right candidates for the task.

In order to select the right participants, their profiles should be the most representative of each user profile drafted from the survey. Nine people, one from each of the nine profiles, were selected among the participants of the survey and contacted.

The selection pursued, on the one hand, the best possible match with the demographics of the specific user profile and, on the other hand, the greatest possible variety among participants, in order to obtain a sample of users that can guarantee a satisfying variety of mental models, needs, and approaches.

Careless Novices: Sandra

Careless Novices had a majority of females, over 46 years old, with high-school-level education. Their relationship with technology and trust shows low knowledge about privacy issues, medium trust in technology in general, trust in recommendations shifted towards low values and an evenly distributed need for control.

Sandra, the representative of this profile, is a retired woman over 55 years old, with a high-school-level education. She has no knowledge about privacy issues, medium trust in technology and low trust in recommendations, with a medium need for control.

Careless Experienced: Marco

Careless Experienced had a slight majority of females, around 30 years old, with high school or higher education. Their relationship with technology and trust shows low knowledge about privacy issues, medium-to-high trust in technology in general and evenly distributed values for both recommendation trust and need for control.

Marco, the representative of this profile, is an unemployed man between 26 and 35 years old, with a bachelor's degree in astrophysics. He has no knowledge about privacy issues, medium trust in technology and high trust in recommendations, with a medium need for control.

Careless Experts: Francesca

Careless Experts had a slight majority of males, with the most significant presence of gender-neutral respondents among the profiles, between 19 and 35 years old, with university-level education. Their relationship with technology and trust shows low-to-medium knowledge about privacy issues, high trust in technology in general, and evenly distributed values for both recommendation trust and need for control.

Francesca, the representative of this profile, is a doctor between 26 and 35 years old, with a Master's degree in medicine. She has medium knowledge about privacy issues, high trust in technology and medium trust in recommendations, with a high need for control.

Aware Novices: Paola

Aware Novices had a majority of females, over 36 years old, with mostly high-school-level education. Their relationship with technology and trust shows low-to-medium knowledge about privacy issues, medium trust in technology in general, and trust in recommendations and need for control shifted towards low values.

Paola, the representative of this profile, is a technical draftswoman between 46 and 55 years old, with a high-school-level education. She has low knowledge about privacy issues, low trust in technology and no trust in recommendations, with a medium need for control.

Aware Experienced: Cristina

Aware Experienced had a majority of females, between 19 and 35 years old with a peak between 46 and 55, with mostly high school or bachelor-level education. Their relationship with technology and trust shows medium knowledge about privacy issues, medium-to-high trust in technology in general, average trust in recommendations shifted just slightly towards low values and an evenly distributed need for control.

Cristina, the representative of this profile, is a fashion designer between 19 and 25 years old, with a bachelor's degree in fashion design. She has medium knowledge about privacy issues, medium trust in technology and high trust in recommendations, with a high need for control.

Aware Experts: Mirko

Aware Experts had a majority of males, between 19 and 35 years old, with mostly university-level education. Their relationship with technology and trust shows medium-to-high knowledge about privacy issues, high trust in technology in general, medium trust in recommendations and an evenly distributed need for control.

Mirko, the representative of this profile, is a researcher between 26 and 35 years old, with a PhD in computer science. He has medium knowledge about privacy issues, extreme trust in technology and medium trust in recommendations, with a medium need for control.

Careful Novices: Riccardo

Careful Novices had a majority of males, over 36 years old, with high-school-level education. Their relationship with technology and trust shows an evenly distributed knowledge about privacy issues, low-to-medium trust in technology in general, trust in recommendations shifted towards low values and an evenly distributed need for control.

Riccardo, the representative of this profile, is a lawyer between 36 and 45 years old, with a Master's degree in law. He has high knowledge about privacy issues, medium trust in technology and recommendations, and a high, active need for control.

Careful Experienced: Tiziana

Careful Experienced had a majority of females, mostly between 46 and 55 years old, with high school or higher-level education. Their relationship with technology and trust shows an evenly distributed knowledge about privacy issues, medium trust in technology in general, trust in recommendations shifted towards low values and a high need for control.

Tiziana, the representative of this profile, is a speech therapist between 46 and 55 years old, with a bachelor's degree in her field. She is an expert on privacy issues and has medium trust in technology and recommendations, with a high need for control.

Careful Experts: Simone

Careful Experts had a majority of males, between 19 and 35 years old, with university-level education. Their relationship with technology and trust shows high knowledge about privacy issues, medium-to-high trust in technology in general, medium trust in recommendations and a need for control shifted towards high or very high values.

Simone, the representative of this profile, is a bachelor's student of economics between 19 and 25 years old. He has low knowledge about privacy issues, high trust in technology and recommendations, and a high need for control.

The group

As shown, the participants reflect as closely as possible the composition of their relative profiles. Where the match is not perfect, it is due to two reasons: first, the availability of people, since not all the participants in the survey were willing to participate again in the research; second, the need to guarantee an even distribution of characteristics in the composition of the final group, to ensure balanced feedback and good representation in general. Fig. 109 shows the final composition of the group of users.

Fig. 109: A chart of the composition of the group based on the nine representatives

Unstructured interviews

The next stage of the research was to conduct an interview with each of the nine representatives selected. The interviews took place between the 6th and the 10th of March 2020; each lasted an hour on average, and they were conducted by video call. Each interview had two distinct phases.

First part: General topics

The first half of the interview investigated the general topics of the research, such as trust and the relationship with technology, recommendation and personalisation. These topics were discussed in particular in relation to the profession or social role of the interviewee, in order to dig into the user's context and better understand their culture and mental models, but also to put them in the position of talking about something they know well while discussing the topics I suggested. Therefore, even if the general questions prepared were things like "What does trust mean to you?" or "Can you define trust?", for particular professions these became "How is trust involved in your job?" or, in some cases, even very specific, for example, "How do you build trust with a client/patient?" for the lawyer or the doctor. Alternatively, with the speech therapist, the conversation focused on the concept of dialogue and the techniques to stimulate communication with a person. Again, with the participant holding a PhD in computer science, technical or advanced issues about technology and artificial intelligence became approachable. These examples underline that, even if the seeds of the questions were the same for everyone, these being unstructured interviews, their development was highly personalised for the specific user, to maximise the qualitative output of each interview rather than prioritise the comparability of the answers.

Second part: CoDesign

The second half of the interview was an exercise of CoDesign, with questions focused on stimulating discussion about functionalities and recommendations of renowned online entertainment services. A first question was asked at the beginning of the interview, to notice any difference caused by the discussion in the first half. In the main phase, the conversation started from the opening question, or from the general discussion, involving one or more prominent services like Netflix, Spotify, YouTube or Google services. Questions for starting conversations were mainly "What do you feel the need for in such a service?", "Is there something you like or dislike in particular?", "Would you do anything differently?", "Is there something hard to understand?", "How do you normally use this service?". From these starting points, questions developed to dig into motivations or to discuss possible solutions, bring in ideas from different services, understand preferences, and more. Even this second part of the interview was not standardised: it was personalised based on the first half of the conversation and on the answers the specific user gave during the survey. These references helped to follow the conversation and exploit the diversity of the group to gather different insights.

General discussion and insights

The results from the interviews partly confirm concepts gathered from the literature, partly begin to confirm some of the hypotheses of the research, and reveal a lot more about the people involved and about possible developments for the prototyping phase.

About trust, it is interesting that, when questions were more related to the act of defining it, people were confident; they grasp the concept of trust well, but they end up defining trust with itself, with tautologies. So trust is a well-absorbed concept in people's minds, but its definition is elusive, perhaps just not fully conscious. When the question about trust involved a description of tools or good practices to build it, instead of a simple definition, all the main elements of the model defined in chapter 3 arose implicitly from the answers. Almost everybody talked about communicating competence, therefore giving assurance of Ability in their role. Simone, the student of economics, stated this concept strongly, associating trust with the economic concept of reputation, which reduces to the Ability to respect the terms of contracts with other entities. Part of the interviewees, mostly the ones who are somehow or partially advisors in their professional roles, stressed the idea of making others understand that their interest is the priority, a good representation of Benevolence. Cristina, the Aware Experienced, also considered a temporal factor, of an established and long-term relationship, somehow present also in Riccardo the lawyer's idea of having a returning client. The idea of being able to establish a relationship involves a vision in common, or at least shared values, which can be associated with the missing factor of trust, Integrity.

Apart from the presence of the three factors of trustworthiness from Mayer's model (see 3.3), it is even more interesting that transparency and control also appear among the tools for building trust in the answers. Transparency is almost explicitly present in the answers of Marco, Francesca, Paola, Riccardo and Tiziana, who all express the need to be transparent about themselves and their actions, choices and behaviours. Transparency seems to be a major carrier for those three fundamental factors of trustworthiness. Francesca, Paola and Tiziana (Careless Expert, Aware Novice and Careful Experienced respectively) also express the importance of being able to create a space for the other person, to give their patients or clients a degree of freedom to express themselves. This freedom of expression, this "space" for people to be able to reciprocate the relationship, is the perfect representation of the concept of control depicted in chapter 3, as a set of tools to communicate oneself to the system. An important detail comes from the experience of Tiziana, the speech therapist: it is important to leave people space to express themselves, but it is just as important to respect their silence if they are not willing to communicate, and to accept the situation by being clear and transparent about intentions and feelings.

All these ideas about listening and giving free space remark the importance of control as a tool for expression and communication, as a way to reciprocate and build trust through the mechanism of dialogue. That last concept from Tiziana's answer, though, underlines the importance of balancing the stream of communication between transparency and control: where there is a lack on one side of the communication, the system should be able to adapt with an increase on the other side.

When touching on the topic of technology, the first thing that emerges is the lack of trust in those fields of application where the person feels that a qualitative, sensitive evaluation is at stake, for example because of the impossibility of measuring quality (Francesca, doctor) or because of a component of self-expression (Cristina, fashion designer). In general, trust in technology seems to be very dependent on the control a person has over it. Novice users, with their inexperience, tend to be very reluctant to trust technology because "it's too hard, I'm not good at it" (Sandra, Careless Novice), "I'm not interested" (Paola, Aware Novice) or "I'm worried about the idea of delegating decisions to a computer" (Riccardo, Careful Novice). However, even Expert users are willing to trust technology only in areas of expertise where they can critically weigh the decision of the system and have the last word about it. This approach remarks the need for enough transparency to make users aware of the factors at stake, and also to deliver some degree of knowledge about the matter, while giving them the control to have the last word on the decision.

CoDesign outputs

Speaking of the second phase, CoDesign brought a large variety of ideas to develop. This activity was a lot easier with Experts than with Novices, because of their confidence with the medium. They had an easier time envisioning possible solutions and discussing the services they use, resulting in much more detailed proposals coming from Experts. In order to achieve an equal inclusion of Novices' unspoken needs, features will be designed based on more general behaviours expressed throughout the overall interview.

A topic discussed with many (sometimes in a structured way, other times in a more natural discussion) is the origin of the recommendation: what changes if the recommendation comes from a close friend, an authority like an expert, an unknown user similar to us, or directly from the recommender system. The preference seems to be personal, but even more it looks situational. In the case of people passionate about the content (whether movies or music), the expert or the authority seems to be preferred, because passionate users are often in search of the opinion of someone they can consider more expert than themselves. It also works with esteemed friends, since being passionate may lead to closer relationships with people with similar interests. Friends are the most likely to be considered the best option by anyone: they already meet all the characteristics of trustworthiness matured during the friendship. Suggestions from similar unknown users (collaborative filtering) and from the service itself (content-based) felt less valuable, and were preferred only in situations where the comfort zone of known and secure content is a priority over the ability of friends or experts to bring greater diversity, a chance of serendipity or a general sense of novelty in the suggested content. People involved in the interviews seemed to value this novelty more than accuracy, as supported by other research in user experience evaluation of recommender systems (see chapter 3). The result is the need for a flexible selection of sources to accommodate different users and situations.

The need for transparency arose explicitly from Cristina (Aware Experienced), Riccardo (Careful Novice) and Mirko (Aware Expert), who support the idea of displaying a weighted list of source features for recommendations and making machine-learning mechanisms more transparent.

Sandra (Careless Novice), Marco (Careless Experienced) and Simone (Careful Expert) say that they approach catalogues already knowing the exact content they are searching for. Therefore, they would rather have recommendations in a separate, dedicated part of the service, as Marco explicitly suggests, and perhaps a better search tool.

Francesca (Careless Expert), Mirko (Aware Expert) and Tiziana (Careful Experienced) instead describe a more casual, explorative behaviour that could be supported with dedicated features.

Concerning transparency, Francesca (Careless Expert) discusses the quality of information and metadata of the content and suggests the possibility of having more in-depth information to scout, exploiting the explorative behaviour.

Mirko (Aware Expert) showed much interest in explicit feedback, which he claims is "not smart or relevant enough for now"; he supports the use of simple explicit feedback to communicate his complexity to the system, being willing to invest some time to enrich the system with personal information. However, he claims this practice is currently ineffective.

Simone (Careful Expert) insists on the possibility of having more community or social features, with the presence of friends to follow, the creation of personalised lists and categories, and the ability to share and exchange content with his closest friends.

These features, together with an interpretation of the thoughts and behaviours from the rest of the interviews, will be the starting point to take advantage of the elements of interaction collected in the definition phase and proceed with the implementation of prototypes to test these ideas out.

6.5 - Prototype, test, iterate

The time was ripe for the application of the analysis, testing the effectiveness of the elements of interaction in practice. Prototypes originate from different starting points, associated with a selection of elements of interaction, and are then developed for testing. The results are discussed in a "virtual" focus group, held in a group chat with the participants of the interviews. The iterations progressed thanks to these discussions, which also provided insights for the research. The prototypes take the form of redesigns of the Netflix desktop interface. The reasons for this decision are, first, that all the participants know it, avoiding the need to explain the core functionalities of the service and allowing the focus to stay on the features designed for the research. Second, Netflix is a popular service and, with its established reputation, avoids introducing into the evaluation of trust the possible negative effects of proposing a new service with a new, untrusted brand. Last but not least, Netflix's interface has a strong focus on content, both in functionalities and in screen space, with very few unique features, making it the best playground to design new solutions without competing with the real service. For the focus group, the virtual space of a chatroom was the best choice to prioritise the involvement of the same users selected with specific requirements during the previous phase, since they are geographically remote and have very different schedules due to the variety of their profiles.

First iteration: prototyping

The first iteration aimed to diverge and explore different applications of the concepts. It is made up of eight different exercises, each with no more than one screen able to communicate the core of a specific concept, created with the use of a small set of elements of interaction. The following is a description of them, followed by a discussion of the feedback received from the focus group.

Concept 1: Social

This concept introduces some social features, interacting with a network of selected friends to get recommendations. Friends' lists can be part of the rows that Netflix shows on the main pages. Friends can be taken into account as justification for recommended content. A side menu shows what friends are watching, for inspiration or perhaps to inform the user that the last episode of their favourite series is a good topic to talk about at the upcoming meeting.

This concept originates from the discussion about friend suggestions during the interviews, and experiments with showing the source of a collaborative-filtering approach more explicitly, as a means of transparency, while giving the user the ability to select the sources (their friends) as a means of control. Dialogue is established: the system explains the influence of the network, while the user can express a preference on the nodes of this network, refining its effects.

Fig. 110: Elements of interaction used in concept 1
Fig. 111: A screenshot of concept 1

Concept 2: TV mode

TV mode is a concept coming from a behaviour noted during the interview with Paola (Aware Novice) about her use of television. Her approach is disengaged, with the television schedule running just for companionship. The idea is to have more proactive behaviour from the system, which streams recommended content without a decision from the user, while the user can focus on their context or mood by selecting a "channel": a group of content associated with a particular characteristic or mood. The user then has the possibility to stick with a particular series they are enjoying by activating the "serial" option, which brings Netflix back to the traditional way of reproducing all the episodes of the same series one after the other. The system is transparently random and communicates the characteristics of a possible change of direction. The user can drive this direction by choosing the mood or characteristic that best fits their state. The dialogue established is comparable to a friend or relative "zapping" on the television while discussing with you what to watch together.

Fig. 112: Elements of interaction used in concept 2

Fig. 113: A screenshot of concept 2

Concept 3: Profile

The third concept is about the creation of a space for dialogue about the user profile: a profile page where the system tells users which interests it has learned about them and, in some way, shows how these interests affect the perception the system has of them, or the category of users they belong to. Users can edit the interests and have a personal space where, apart from the classic "my list", they can create personalised collections to organise their favourite content and see in real time how these changes affect the system's perception, understanding better the relationship between content and profile. The system is being transparent by showing its representation of the user. The user can edit and fix this representation. The dialogue established is one of getting to know each other, with the system communicating what it has learned about the user, and the user letting it know something more, or that its guesses were somehow wrong.

Fig. 114: Elements of interaction used in concept 3
Fig. 115: A screenshot of concept 3

Concept 4: AI assistant

This concept introduces a personal assistant. It originates from ideas of dialoguing recommender systems found in different research efforts of the early years of the millennium. At the time, they were just very complex wizards to guide users through filtering processes. Today, with the development of AI and conversational agents, it seems like an obvious solution. In this case, the dialogue is the interface itself. The assistant asks questions or answers inputs to suggest content, discuss interests and profile data, or understand mood, in order to give the perfect recommendation for the situation.

Fig. 116: Elements of interaction used in concept 4

Fig. 117: A screenshot of concept 4

Concept 5: Contextual dialogues

The idea of contextual dialogues is to give a deeper layer of information about system processes and motivations, in the most contextual way possible. It sparked from the need of Riccardo (Careful Novice) to have the system guide him step by step and give him complete control over the data collected. It consists of dialogue boxes that explain to the user that a specific interaction influences the system's perception of them in a specific way, and that the user can correct any information created from the interaction if it is wrong or unnecessary. All of this happens as contextual feedback for a particular interaction, when possible. The system is being transparent by making explicit the processes that are most related to the user. Furthermore, the user can have the last word and control the information that the system creates about them. The system gives the user space for better, more relevant self-disclosure while revealing its functions.

Fig. 118: Elements of interaction used in concept 5
Fig. 119: A screenshot of concept 5

Concept 6: Advanced research

Advanced research is a feature for making less specific searches for content, using the same elements that the system uses to recommend content. From a particular perspective, it is as if the user were manually collecting the set of sources for the system's recommendation. The user can tinker with the system and empathise with the processes that bring them personalised content, understanding the system while having full control over the recommendation rationale. The dialogue of this experiment is less direct: it is like the system asking the user to "be in its shoes" and, by doing so, the user communicates what is, for them, the best way to perform that process.

Fig. 120: Elements of interaction used in concept 6

Fig. 121: A screenshot of concept 6

Concept 7: Recommender settings

Similar to the advanced research, in this concept the system lets the user tune the rationale of the recommender engine. While the advanced research is more of a contextual process, in which users express their needs for the specific moment, in this case the settings are permanent, and it is more a way for users to tell the system how they want their recommendations and then let the system do its thing. The system is being transparent by exposing its functions, and the user has the control to manipulate them and see how this affects the recommendations. It is a tinkering process in which both get to know more about each other.

Fig. 122: Elements of interaction used in concept 7
Fig. 123: A screenshot of concept 7

Concept 8: Exploration mode

Exploration mode is a particular mechanic for discovery, inspired by the exploration behaviours discussed with Francesca (Careless Expert), Mirko (Aware Expert) and Tiziana (Careful Experienced). It is a mode in which the user passes from one item to another, creating a chain of content. At the top, there is the list of the items the user passed through in this exploration session but did not choose; they can be selected again if the user changes their mind. In the middle, there is the item at stake, with a detailed explanation and a trailer, and the possibility to play it or save it for later. At the bottom, there is a selection of new titles to continue the exploration, with the relationship between the current content and the next possibilities explicitly stated. In this way, the system shows the user some of the rationales of content-based filtering. The user can communicate contextual preferences through the navigation and tell the system which rationales align better with their way of thinking. Together, they dialogue going down a road of items, like two friends discussing what to watch and refining their choice by digging into similar content, or by switching to very different items when they get stuck in a dead end.

Fig. 124: Elements of interaction used in concept 8
Fig. 125: A screenshot of concept 8
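A minimal sketch of the content-based chain that could drive such an exploration mode follows; the tag data and the similarity measure (tag overlap) are invented for illustration:

# Sketch of an exploration chain: from the current item, propose the
# next candidates by tag overlap (Jaccard similarity), stating the
# shared tags as the explicit relationship. Catalogue data is invented.

CATALOGUE = {
    "Alien":        {"sci-fi", "horror", "space"},
    "The Martian":  {"sci-fi", "space", "survival"},
    "Interstellar": {"sci-fi", "space", "drama"},
    "The Thing":    {"horror", "survival", "isolation"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def next_steps(current, k=2):
    """Return the k most similar items with the shared tags that justify them."""
    tags = CATALOGUE[current]
    candidates = [
        (title, jaccard(tags, other), tags & other)
        for title, other in CATALOGUE.items() if title != current
    ]
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:k]

for title, score, shared in next_steps("Alien"):
    print(f"{title}: because it shares {sorted(shared)}")

Stating the shared tags at each step is what turns a plain "more like this" list into the explicit, dialogic relationship between items described above.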

First iteration: feedback and insights

This first experiment aimed to try out as many ideas as possible, analyse the impact of these small selections of elements of interaction and choose the most successful ones to merge into a more sophisticated prototype for the next phase. The concepts that attracted the most interest were:
Social (concept 1);
Profile (concept 3);
Advanced research (concept 6).
These will become the basis for the design of the next prototype, made organic and merged into a complete set of consistent features.

TV mode (concept 2) felt like a contradiction of the service itself to most of the participants. Participants unanimously disregarded the personal assistant (concept 4), because this kind of technology is still experienced as stupid, its conversation skills inadequate and the interaction uncomfortable in general. Contextual dialogues (concept 5) had problems with cognitive load: most participants felt that they added too many interactions, too much distraction, or that they were redundant and unnecessary. Recommender settings (concept 7) were totally unclear to participants. Their ignorance of the functioning of a recommender system made this menu useless, as they could not grasp the effect it can have: they do not even include the presence of a recommender engine in their mental model and feel they do not know what they are controlling. Exploration mode (concept 8) raised curiosity, although participants partially misunderstood the feature, which lost the interest of many.

Overall, it seems that participants preferred the concepts most familiar to them, based on patterns shared by other services like social networks or search engines, whereas the ones with the highest degree of novelty achieved less consensus because of the difficulty of grasping their possibilities.

Second iteration: prototyping

In the second phase, instead of testing the efficacy of small sets of elements, the aim is to test the impact of a single organic prototype, capable of bringing together the elements of the three concepts selected from the first phase, together with some single dubious elements saved from the other concepts and some new elements deliverable only with a dynamic prototype like this.

The prototype explores the transformation of the home page due to social features and the editable profile, the implementation of social features in the advanced research and the personal profile, and the development of the profile itself.


On the home page (Fig. 126), the traditional rows of items get a new piece of information besides the title: an image that distinguishes the rows belonging to the Netflix recommendation system (Netflix logo), the collections belonging to the user (squared profile image) and the collections belonging to friends (circled friend image). Netflix lists are the same as usual. Personal collections can be created on the profile page and, in case they have few elements, Netflix suggests relevant content to expand the collection. Friends' collections are accessible from the list of friends on the profile page and come with two options: one to make an editable personal copy of the collection, like any other personal collection, and the other to follow the friend's collection and its future development. The side menu with the items currently watched by friends is accessible through a button in the navbar.

The advanced research (Fig. 127) is also accessible from the navbar. It opens the filtering options and the results, which update in real time any time a filter is selected. Filters include the format of the content (Film, TV series, Documentary, and others), the category (Comedy, Sci-fi, Romantic, and others) and metadata (Funny, Award-winning, and others). Then there is the possibility to select one of the personal collections or recently watched items as a starting point, which will also set the other filters accordingly. Last, there is the source of recommendation considered for the search, selecting between the Netflix recommender system and friends.
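A minimal sketch of how such combinable filters and source selection could compose a live query follows; the catalogue structure and field names are invented for illustration:

# Sketch of the advanced-research idea: the user manually assembles the
# sources and constraints the recommender would otherwise infer.
# Catalogue and field names are invented.

CATALOGUE = [
    {"title": "Film A", "format": "Film", "category": "Comedy",
     "tags": {"funny"}, "recommended_by": {"netflix"}},
    {"title": "Series B", "format": "TV series", "category": "Sci-fi",
     "tags": {"award-winning"}, "recommended_by": {"netflix", "friend:Anna"}},
    {"title": "Film C", "format": "Film", "category": "Sci-fi",
     "tags": {"award-winning"}, "recommended_by": {"friend:Anna"}},
]

def advanced_search(fmt=None, category=None, tag=None, source=None):
    """Each filter narrows the live result list; None means 'any'."""
    results = CATALOGUE
    if fmt:
        results = [i for i in results if i["format"] == fmt]
    if category:
        results = [i for i in results if i["category"] == category]
    if tag:
        results = [i for i in results if tag in i["tags"]]
    if source:
        results = [i for i in results if source in i["recommended_by"]]
    return [i["title"] for i in results]

# e.g. sci-fi films recommended by the friend Anna:
print(advanced_search(fmt="Film", category="Sci-fi", source="friend:Anna"))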

The profile page (Fig. 128) keeps the summary of the profile characteristics on top, as in the first concept; below it there is the list of friends and then the list of personal collections, with the possibility to create new ones. By clicking on a friend's image, the user can see that friend's collections and copy or follow them. On the side of the summary, a button switches the view from the list of friends and collections to a history of content viewed, liked or saved. Inside this screen, there is the opportunity to delete an item from the content list and from the interests list, shown alongside with the interests' relative weights. When revising either the titles or the interests, the summary updates in real time, to suggest the direct relationship between these elements.

The concept of dialogue boxes has been reintroduced in a less invasive way, with timed, self-closing boxes coming from the top. Specifically, for this prototype, a dialogue has been introduced that shows how the user's interests change when they save an item to a personal collection. Instead of allowing this change to be edited directly from the box, a button brings the user to the interests section of the profile page.

Fig. 126: A screenshot of the home page of phase 2
Fig. 127: A screenshot of the advanced research in phase 2
Fig. 128: A screenshot of the profile in phase 2

Second iteration: feedback and insights

The second iteration was very well received. At this point, the participants in the study began to express actual feedback of appreciation. They felt that the interface was clean, well organised and helpful, also thanks to small precautions like making the source of the recommendation explicit in the title of collections. Even some features that were dubious in their static representation made perfect sense to everybody when shown in this dynamic, interactive way. Advanced research was the most appreciated feature. The possibility to communicate the context to the system, through momentary personal preferences, and to guide its recommendations to match the needs of the moment, looked like the most natural and obvious way to get as quickly as possible to the piece of content that best suits the situation.

The dynamism of the interactive prototype also redeemed the dialogue box. In this setting, nobody even noticed it, even though the presentation stressed its presence: it just seamlessly felt like legitimate communication from the system.

The overall experience was well received, but several discussions arose, giving space for improvement.

Social features were confirmed, but the side menu felt unnecessary to participants, who also agreed about the invasiveness of friends' collections on the home page. They suggested the need for a dedicated space, or at least the possibility to opt out of or control the presence of friends' collections. The profile space was appreciated, together with the new page for interests, which made them better contextualised. Participants appreciated having control over their personal information through the profile page and all its features, but this possibility revealed the need for a higher degree of protection. Some participants discussed the need for more distinct access to their profile, to feel safer about the privacy of their personal space, while others brought up a concept that had also spread in the "control" mental space from the survey: Incognito Mode. Some of them thought that the possibility to avoid personalisation and roam a neutral space, with the catalogue offered in an unbiased fashion, would be an excellent option. Even to some of them it was clear that such a function would be entirely unreasonable for the business goals of the service; however, these experiments focus on the effects of transparency and control on trust, so business goals will be set aside for a moment. Even if counter-intuitive, incognito being a tool for control that leads to a closure of communication, it can still be an effective solution. As said by Tiziana (speech therapist, Careful Experienced), in order to build a trustful relationship "...you must be ready to respect the other even in the decision of not communicating".

Third iteration: prototyping

Considering the success of the previous iteration, the third and last one is just a refinement phase based on the existing prototype. Following the feedback, the landing screen (Fig. 129) with user selection is now part of the prototype; alongside the four users that can share a Netflix account and the traditional restricted account for kids, there is a new option to access in incognito, a part of the service where the catalogue has only categories and the only recommendations are those of popularity.
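As a minimal sketch of how such a neutral space could work (an illustration under assumed names like Interaction and top_popular, not the prototype's actual implementation), an incognito catalogue can fall back to a non-personalised popularity ranking computed from aggregate interaction counts:

```python
from collections import Counter
from typing import NamedTuple

class Interaction(NamedTuple):
    user_id: str   # present in the log, but ignored in incognito mode
    item_id: str

def top_popular(interactions: list[Interaction], k: int = 10) -> list[str]:
    """Rank items by aggregate interaction counts only.

    Because no individual profile is read, every incognito visitor
    sees the same, non-personalised list."""
    counts = Counter(i.item_id for i in interactions)
    return [item for item, _ in counts.most_common(k)]

# Example: three anonymous log records yield a popularity ranking.
log = [Interaction("u1", "A"), Interaction("u2", "A"), Interaction("u3", "B")]
print(top_popular(log, k=2))  # ['A', 'B']
```

The design choice here is that the popularity signal is the only recommendation surviving in incognito, which is exactly the closure of communication the participants asked to be able to choose.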

Back in the personal account (Fig. 130), the profile page is entirely reorganised. Personal collections, friends' collections, and history and interests are divided more clearly. Friends' collections also disappeared from the home page unless followed. The side menu with the content watched by friends is removed. In its place, following a discussion from the focus group, when many friends suggest an item and the user clicks on the friends' images, a new side menu shows the list of these friends. Other small details, like icons or interactions, are improved.

Fig. 129, 130 User selection and the new profile in phase 2


Third iteration: feedback and insights

At this point, since it was the end of the research, it was time to collect some evaluations and see the results of the design against the three key values: transparency, control, and trust.

Putting it in grades and taking an average, participants graded transparency 4.3/5, control 4.7/5 and trust 4.2/5. Values are quite high overall, so the experiment is a success. However, this evaluation alone would be sterile; more relevant insights come from the considerations behind the grades.


One interesting point is that more than half of the participants explicitly described a solid relationship between the three concepts, saying that they influence each other, which is proof of the value of the model hypothesised. Some of the participants were not at ease using the concept of trust and preferred to refer to it as confidence or loyalty; nevertheless, all of the participants felt that they could rely on the recommendations. Considering that the aim was to enhance trust in recommendations and not in the overall service, this is a big success and proof that, with the correct elements of interaction focused on the creation of dialogue, it is possible to improve trust in recommendations.

Transparency also brought some useful insights. The complexity of the concept is clear from the results. A few participants expressed the issue that, even if they felt an increase in transparency and a satisfying level in general, full transparency is not achievable due to their lack of understanding of technological processes. While all of them seemed satisfied, some of the participants considered the interface "transparent" only because it let them quickly understand how to control and use the features. This consideration casts doubt on whether they grasped the concept correctly, but it suggests that the influence of transparency on control is confirmed.

Control was much appreciated: all the participants were thrilled about the new features, and this side of the experiment, with its focus on self-disclosure and in particular the possibility to involve context and situational factors, was the most successful.

In the end, the experiment was successful. Some things could be further investigated, but the findings are satisfying, and the process, together with its insights, is worth translating into design guidelines to be used in future research or during the design of a recommender system.

Part 3 - Output

7 - Dialoguing systems. Design guidelines for trustworthy recommendations

7.1 - Dialogue is all about balance
7.2 - There's more than just recommendations
7.3 - To each their own asset

Dialogue revealed itself as an effective approach and met all the expectations. The following is a synthesis of all the learnings of this experience. The principles can be further explored and applied to different contexts; now that the first instance of this approach has been validated, it can be the beginning of a series of developments to give it a structure and make it a solid strategy and a tool for designing trustworthy experiences for recommender systems. This chapter sets out a list of guidelines based on the findings of the research and on the concepts and principles that guided the design, defining a starting point for the development of the aforementioned Dialogue approach.


7.1 - Dialogue is all about balance

First, a discussion of the key concepts of this approach. Dialogue is a condition of information exchange between the user and the system: it is a mechanism of reciprocal self-disclosure where the user communicates themself and their preferences through controls and inputs to the system, while the system communicates its processes and rationale with transparency. These ideas are covered in chapter 3. The most evident result from the research is that this system of practices rests its efficacy on a very delicate balance of factors.

Transparency vs information overload

Transparency is the first thing to balance. Showing too much information can be annoying for the user or even lead to information overload. This overload is particularly bad, considering that the very purpose of recommender systems is to tackle the problem of information overload. Being transparent is essential, but it must be balanced to avoid the risk of overload.

Guideline 1 Balance transparency and information overload. Do not give too much information.

One of the most interesting takeaways from the experimentation for pursuing this balance is contextualising information as much as possible: be precise about the moment and space in which the system communicates itself. Users are there for the content, not for learning how the system works. Nearly 60% of the participants in the survey said that they guess the functioning of technology from its use, against 26% who actively inform themselves about it. Not too much information, in the right place at the right time. A good example is to leverage a user action or an error to inform them about the parts of the system involved in that interaction.

Guideline 2 Be transparent when it matters; be contextual. Information about the system should arrive in the right place at the right time.


Another useful practice for delivering information about the system is to give some value to the user while informing them. For example, giving them insights into their personal information can reveal to users which data the system is collecting and using, while creating a valuable moment of introspection and getting their attention.

Guideline 3 Attach some value to information. Being transparent is very important, but it must be meaningful for the user in order to get their attention.

Control vs cognitive overload

Guideline 4 Balance control and cognitive overload; make sure everything is as easy to use as possible.

Control is a powerful tool: it allows users to express themselves in many different ways. But trusting someone or something means being able to delegate a task to them. Users engage with a service because it relieves them in some way and helps them do something. Control has its counterpart to balance: too much control leads to cognitive overload. Digital services are getting very proactive at doing their job. Interfaces are optimised to be smooth, easy and clean, to reduce any kind of load, from visual to cognitive, and to create habits, making them as simple to use as possible. Adding any degree of control shifts some effort from the system towards the user, who needs to make decisions and take actions; over a certain level, the effort can become overwhelming.

Of course, the limit varies between different users, based on their skills, their experience with the service and their understanding (for which transparency can do a lot). Even the same user can have a different threshold in different situations. Adding features and control leaves users free to interact and guide their experience, but taking action and making an effort should always be an option, never mandatory. As already said, trust is about delegating. Users will never even need to trust a service or recommender system if they are required to do everything by themselves.

Guideline 5 If the degree of control increases, it must be optional for the user to exploit it.

Consider that user controls affect how users can communicate their intentions and preferences, and that the system must use this information. Be sure to design controls in such a way that they collect information and signals that are relevant for the system and not ambiguous. Making them an effective means of communication makes the user feel that they are being heard and that their effort is meaningful.

Guideline 6 Controls mediate information coming from the user. Design them to make this information relevant and unambiguous.

Dialogue is systemic

Dialogue is systemic: it is about systems. It is a concept introduced to design better recommender systems, based on a system of transparency and control, and it describes a functional interaction between two or more entities, a system where elements influence each other. For this reason, dialogue must be reciprocal: the actors that take part in this interaction must have equal possibilities to express themselves, and the quality of information should be balanced and fair for both participants.

Guideline 7 Dialogue is reciprocal. The quality of the information shared should be comparable between the participants.

The system established by transparency and control should be balanced as well. Communication should be fair, and every participant should have their space for self-disclosure guaranteed. However, dialogue is dynamic and variable: naturally, in certain situations one of the two sides of the communication can retreat or intrude with respect to the other. In order to keep the conversation alive and restore the balance, the levels of control and transparency should be flexible enough to react to changes in communication from the other side and guarantee a sufficient exchange of information.

Guideline 8 Dialogue is dynamic. Levels of transparency and control should be flexible enough to react to changes and still maintain a balance between the two.

Tiziana, one of the participants, said during the interview that there is nothing to do when a person does not want to communicate, apart from respecting them even in this extreme decision. Keeping the communication open while guaranteeing the space for the other to start participating whenever they feel ready, with maximum respect, is the secret to building a trustful relationship even when dialogue is scarce.

Guideline 9 Design dialogue to allow respectful communication. Make the system work even in extreme situations of “silence” or resistance while guaranteeing the freedom to join the conversation again.

Dialogue is a system of transparency and control. However, to achieve dialogue it is not enough to design good transparency and reasonable control into a recommender system. The two concepts must cooperate to generate small spaces, or a complex system, of conversation: a system where the system's transparency and the user's control coexist and are compatible with each other to establish a dialogue, a system where the two concepts become different tools, belonging to one or the other participant, that allow them to interact with each other contextually. In the presence of an input from one side, the other always has a way to interact, starting a conversation and sharing its opinion about whatever is happening.

Guideline 10 Transparency and control must create a system and collaborate to create a space for conversation. Dialogue is more than the sum of its parts.

7.2 - There's more than just recommendations

Recommender systems are products created for information retrieval and personalisation. The output of their process is, in fact, recommendations. However, as their name says, they are systems, made up of different actors that interact and contribute to a common goal; recommendations are only one component of the mechanism. If we consider the framework from He et al. (see 6.3), the entirety of the system includes the user and their data, the context, the engine (algorithm) that processes all the information, the medium used to create recommendations and, finally, the recommendations themselves. All of these components are responsible for delivering the best recommendations possible. Most of all, they interact with each other in a complex structure of interconnections. Focusing all the effort on one node of the system only, without considering the impact it can have on the system as a whole, can quickly end up in "Frankenstein" products or bottlenecks that suffocate the experience, nullifying all the excellent work done on that specific node.

Guideline 11 Design for recommender systems holistically, focusing on all the components and their interactions.
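To make the component framework concrete, here is a minimal sketch (hypothetical names and fields, not the notation of He et al.) that models the five nodes as separate but interacting parts of one pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:                    # the user and their data
    user_id: str
    interests: dict[str, float] = field(default_factory=dict)

@dataclass
class Context:                        # short-term, situational signals
    time_of_day: str
    mood: str | None = None

@dataclass
class Medium:                         # intermediate information, e.g. neighbours
    similar_users: list[str]

@dataclass
class Recommendation:                 # the final output shown to the user
    item_id: str
    score: float
    explanation: str                  # transparency hook: why this item

def engine(profile: UserProfile, context: Context,
           medium: Medium) -> list[Recommendation]:
    """The engine node combines every other node; this placeholder only
    illustrates the interconnections, not a real scoring algorithm."""
    return [Recommendation(
        item_id="item-42",
        score=0.9,
        explanation=(f"Suggested because users like {medium.similar_users[0]} "
                     f"watch it in the {context.time_of_day}"))]

recs = engine(UserProfile("u1", {"drama": 0.7}), Context("evening"), Medium(["u2"]))
```

Designing holistically then means asking, for each node, how a change propagates to the others before shipping it.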


User data

User personal data is the strongest point of contact between the user and the system. The data collected from the user and the information associated with them are the way they are represented inside the system. If the user understands the relationship between this representation and the recommendations, it is more likely that they will give more precise information, spend effort revising it, and feel that they are not giving this information away for free. This space can become a crucial tool for understanding user needs and expectations. Sharing their data with users can reinforce their vision of themselves and let them feel heard, while giving new insights on their profile can even let them feel understood, returning them some of the value of that information. Allowing them to control and revise this data opens up the possibility to disclose more of themselves and create more precise profiles; most of all, they can express their uniqueness and avoid being affected by biased evaluations from the system (see 2.5).

Guideline 12 Establish a dialogue about user data. Make sure users receive some value from their personal information, and understand the importance of sharing their data.
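As an illustration of this guideline (a sketch under assumed data structures, not the code of the prototype), a profile page could derive interest weights from the items a user saves and show them back, letting the user revise what the system inferred:

```python
from collections import defaultdict

def derive_interests(saved_items: list[dict]) -> dict[str, float]:
    """Build interest weights from the genres of saved items, normalised
    so the profile page can display them as shares of the whole."""
    weights: dict[str, float] = defaultdict(float)
    for item in saved_items:
        for genre in item["genres"]:
            weights[genre] += 1.0
    total = sum(weights.values()) or 1.0
    return {genre: w / total for genre, w in weights.items()}

saved = [{"title": "Film A", "genres": ["thriller", "drama"]},
         {"title": "Film B", "genres": ["drama"]}]
interests = derive_interests(saved)
print(interests)  # {'thriller': 0.333..., 'drama': 0.666...}

# Showing `interests` back to the user is the transparency half of the
# dialogue; letting them edit it is the control half.
interests["thriller"] = 0.0  # the user opts out of an inferred interest
```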

Personal data is something a user can be very protective of. Privacy is essential for everybody (see 2.5). While being transparent about the collection and use of personal data can reassure a user that their data is used honestly, letting them have control over it makes them feel sure that they are still in possession and control of what belongs to them, increasing trust and the future intention of giving more data.

Guideline 13 User data involves privacy. Establishing a dialogue about it can make the user feel safer and foster trust.


Context

Context is the hardest component to manage. It involves short-term considerations in a system of values, like trust, that is based on long-term goals. Context is dynamic, changes fast, and its variability makes it hard to design ways to collect meaningful information about it. By opening a dialogue with the user, it is possible to make them evaluate the situation and optimise the opportunities offered by this kind of information. Make sure to always leave a channel open for the user to communicate their context.

Guideline 14 Context is inconsistent and hard to evaluate due to the qualitative nature of the information. Involve the user in this evaluation with dialogue.

Context is hard to handle, but it can give the most value to the user when leveraged in the correct way. Being able to understand the context of the user and involve it in the process of generating recommendations can give an incredible boost to their relevance. The variability introduced by the context can increase diversity and generate serendipity, breaking filter bubbles (see 2.5).

Guideline 15 Context can deliver hyper-relevant recommendations. Use it to deliver diversity with efficacy and break filter bubbles.

Medium

Medium is all the information about the content generated by the recommender engine as an intermediate level for calculating recommendations. Medium information can be a list of similar users in the case of collaborative filtering, a list of possible user interests, or a structured system of metadata for content-based filtering, among others. Being transparent about this kind of information, while allowing users to tinker with it, is the best way for a user to guess the rationale that runs the recommender system.
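For instance, the "similar users" medium of user-based collaborative filtering can be surfaced in the interface after being computed from rating vectors. A minimal sketch (hypothetical data and function names, assuming cosine similarity as the measure):

```python
import math

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse rating vectors."""
    common = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def similar_users(target: str, ratings: dict[str, dict[str, float]], k: int = 3):
    """The medium of user-based collaborative filtering: the neighbours
    whose tastes drive the target user's recommendations."""
    scores = [(other, cosine(ratings[target], ratings[other]))
              for other in ratings if other != target]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

ratings = {"anna": {"A": 5, "B": 3}, "ben": {"A": 4, "B": 3}, "carl": {"C": 5}}
print(similar_users("anna", ratings))  # ben ranks first, carl last
```

Exposing this list ("recommended because you watch like these users") is one concrete way of being transparent about the medium.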

Medium information involves the qualities of the recommended content. Take into consideration the fact that the content, and not the recommendation itself, is the value the user is searching for. Focusing interaction around this node can place the user's attention on the content and strengthen the perception of that value.

Guideline 16 Medium information is a reflection of the system's processes. Establishing a dialogue about this information can teach the user how the system works.

Guideline 17 Medium is where value is. Establish a dialogue to show this value to the user.

Recommendation

Recommendations are the output of the system, the result of all these processes. Making this as explicit as possible can make the user aware of the effort spent on the process and be a cause for appreciation. Letting the user have some degree of control over this output can be the best way to collect feedback and evaluate the efficacy of the whole system.

Guideline 18 Establish a dialogue to evaluate recommendations. Their quality also reflects the efficacy of the system as a whole.

Recommendations are a personalisation technology and should reflect the needs and preferences of the user. However, the way recommendations are displayed, their integration with the service and the attention given to them can tell a lot about the vision and intentions of the service. Make sure business goals do not sacrifice or compromise personalised recommendations. With transparency, recommendations can become a powerful tool for communicating the character of the brand, the intentions of the service and the requirements of the business, making them the ultimate device for communicating with users.

Guideline 19 Recommendations can be the ultimate tool for dialogue, reconciling user needs with business goals.

7.3 - To each their own asset

Guideline 20 Dialogue, as well as recommendations, is personal. Designing for flexibility is an effective way to deliver experiences that fit every unique situation.

The first intention with the elements of interaction collected during the research (see 6.3) was to understand which of them were adequate for dialogue and trust and which were not. However, during the research this changed a lot. Approaching the next phases, exploiting both the Mental Model Diagram and those elements, and designing, prototyping and evaluating them with the users left a valuable lesson: there are no elements of interaction that are better than others, nor anything like "elements of interaction for trust". There are instead design patterns, good design approaches (summarised in this chapter in the form of guidelines), that are able to drive the design of recommender systems towards trust by using dialogue. The elements of interaction, instead, are just bricks. It does not make sense to make a selection of them based only on the experiment of this thesis, or on any other. The index can only be extended with ever-new solutions coming from designers or from the development of technologies. The only selection that matters is the one made to fit a specific situation. The Mental Model Diagram alignment allows selecting elements to meet users' mental models or their needs and expectations. Selecting the right set of elements of interaction allows empowering the content we are dealing with, supporting the technology and algorithms in use, and communicating the values of the brand and the service. Each situation will benefit from a particular design. In the end, the best value of the elements collected is their flexibility, while the best design patterns are not based on a selection of them, but on the principles that guided the research from the beginning. Flexibility is the best tool for good design, and users are different: they have different needs, different experiences and different contexts. Basilico (2019) included "user experience personalisation" among the future trends for this technology, with the idea of making unique and personalised not only the content delivered by a recommender system but also the experience of recommendation. Having a flexible set of components for design, and reliable principles to select them, seems a valuable way to continue experimenting and developing, making this a structured approach for trustworthy experiences in the future of recommender systems.


Conclusions


Sum up

The experimentation was a complete success, at least based on the results achieved with the users. User participation was very satisfactory and generated a great deal of qualitative information, full of insights, probably many more than the ones highlighted during this research. The analysis of elements of interaction produced a reliable index of references that can always be extended with new solutions or opportunities brought by upcoming technologies, becoming a valid resource for designing such systems. CoDesign, involving different users in envisioning new solutions for something that was simultaneously familiar and unknown to them, was the hardest part, but giving shape to these ideas was also one of the most exciting, thanks to the users' enthusiasm in their feedback.

In the end, all the evaluation parameters received good feedback, and the research process is loaded with insights. The most relevant for the conclusions are synthesised in chapter 7.

The potential of dialogue

The concept of dialogue was based on solid ground in the literature, but even with a designerly approach it revealed itself as a very interesting concept to explore, and this little proof of its potential could be a reason for further investigations. The context seems mature and the topic very relevant to the ongoing discussion. In fact, during the very final stages of this one-year research, many of the solutions, ideas and concepts discussed or developed started to appear in renowned digital services and products from Netflix, Google, Facebook, Spotify and others. Big international businesses are revealing an interest in developing features and interactions that align with the concepts explored during this research, demonstrating once again the importance of investigating them and of developing solid and reliable tools to design and develop experiences of this kind. The research exposed several topics about the relationship with technology. The concept of dialogue could apply to several domains and be the key to tackling issues of distrust and biases towards technological progress, helping people accept new technologies and never miss opportunities. This research proves that design as a discipline does have the ability and the tools to investigate and contribute to this achievement, even if some tools and practices could improve. With the progress of information technology, ever more "black-box" technologies will develop, and the issue of trust will be crucial. Designing interactions based on dialogue could be a path to follow to establish relationships of trustful collaboration between humans and the fast, ever-changing ecosystem of technology.

Further developments

Many things can be further investigated. First, considering the strong focus on user experience and the interest received by similar solutions from digital services, the relationships and compromises between the main concepts and the intricate system of stakeholders and business goals of the industry could be further explored. The influence of value-aware and multi-stakeholder recommender systems (see 2.5) could have a huge impact on the application of dialogue and should be further explored: for example, even if trust fosters loyalty and has a very positive impact on the business side, factors like transparency could clash with the preservation of important trade secrets that have immeasurable value for companies competing with others.

Another interesting factor to introduce would be the upper, technical part of the model for trust drawn at the beginning of chapter 3. Accuracy and other qualities of the recommender system's algorithm affect the quality of the user experience as well as the design of the user interface and interactions. What is more, as shown in the ResQue evaluation model, some of these algorithmic qualities directly affect trust. For this reason, the experimentation could be run on a fully functioning recommender system, testing the user experience by observing the use of the product, involving personalised content that is relevant for the user, rather than the conceptual prototypes discussed by the focus group in this research. This practice would introduce a more complete environment and investigate the user experience of dialogue considering a greater complexity of influences.

At this point, to widen the application of the dialogue approach, the experimentation can be transposed to other fields of application of recommender systems, investigating contexts that involve high-risk choices and understanding how this can affect the achievement of trust in the system. And last, once dialogue is structured as a valid approach for recommender systems, it can be researched how to transpose this same approach to other technologies, to pursue the collaborative relationship between humans and technology envisioned in the introduction.

The design methodology followed in this experimentation achieved great results, and some of the tools used or developed, like the Mental Model Diagram and the index of elements of interaction, revealed unexpected flexibility and reliability, becoming more than tools for analysis through the guidance they provided even during the design phase. As the concept is delineated by further investigating the topics in the aforementioned ways, it would be very useful to develop a structured design tool to guide the design of dialoguing recommender systems (or technologies in general) through a valid approach and methodology whenever similar goals are expected. Tools like the Mental Model Diagram are certainly a powerful and flexible way to organise and store data, but they are very personal and relate to the mental model and interpretation of the designer who creates them. Adding several layers of information over the standard layout of the Mental Model Diagram transformed it into a system of information, improved navigation of the map and exponentially increased the opportunities for insight. It revealed itself as a very successful approach for managing data and information and for driving design decisions and research. In order to disclose more of the potential of this tool, and of this way of using information in a data-driven design fashion, a further formal and structural redesign of the tool itself is necessary. Mental Model Diagrams are very complex maps, with a very high cost in terms of the effort needed to compile them, and they are very hard to use as collaboration and/or communication tools. At the current stage, they always need the designer to extract valuable information and insights, and/or to synthesise knowledge to be reused by collaborators or shown to other people, who would not be able to navigate or understand the information of the frequently enormous Mental Model Diagrams.

Personal considerations

Carrying out this research has been a huge personal accomplishment, paired with great satisfaction. These topics had been in my mind since the time of my bachelor's degree, and I could not face them in the bachelor thesis mainly because I felt too immature at the time, and this research would never have fit a less substantial work of only four months. Being able to bring these topics off the shelf again and face them has been a hard challenge anyway, but it is the best accomplishment: not only did I get this "itch" out of my system, but I was also able to demonstrate to myself my personal and professional growth during this academic path. It really meant a lot to me to achieve this result, beyond the pragmatic goal of graduating.

This work, over the last year, also taught me much more than the things written in this document. First of all, how to write and structure academic research. This may seem easy to take for granted at the beginning; however, after six years spent doing design jobs and following only design processes, sticking with academic methodologies, studying stacks of academic papers, and learning how to structure research and how to document it was one of the hardest tasks of the process. It needed a completely different mindset, the training of long-forgotten skills, and the learning of completely new procedures, which placed several obstacles along the way, both practical and motivational. Second, going through the process alone, without sharing parts of it with others, showed me my professional flaws and strengths, forcing me to tackle or work around the former and leverage the latter in order to proceed and achieve the best results possible. Third, it demonstrated that I can manage the design process by myself even in a very theoretical and complex environment, by applying the tools and methodologies of design acquired during my education and experience, providing some confidence in approaching the coming professional challenges. Last, as with every challenge in life, being able to carry out this work by myself, with full responsibility, overcoming difficulties as they arose, made me grow as a person beyond growing as a designer, making it part of my unique life experience.

The end


Bibliography


Abdollahpouri, H., Burke, R., 2019. Multi-stakeholder Recommendation and its Connection to Multi-sided Fairness. arXiv:1907.13158 [cs].

Adamczak, J., Leyson, G.-P., Knees, P., Deldjoo, Y., Moghaddam, F.B., Neidhardt, J., Wörndl, W., Monreal, P., 2019. Session-Based Hotel Recommendations: Challenges and Future Directions. arXiv:1908.00071 [cs].

Adomavicius, G., Bockstedt, J., Curley, S., Zhang, J., 2019. Reducing Recommender Systems Biases: An Investigation of Rating Display Designs (SSRN Scholarly Paper No. ID 3346686). Social Science Research Network, Rochester, NY.

Afify, Y.M., Moawad, I.F., Badr, N.L., Tolba, M.F., 2017. A personalized recommender system for SaaS services. Concurrency and Computation: Practice and Experience 29, e3877. https://doi.org/10.1002/cpe.3877

Afolabi, A.O., Toivanen, P., 2019. Improving the design of a recommendation system using evaluation criteria and metrics as a guide. Journal of Systems and Information Technology 21, 304–324. https://doi.org/10.1108/JSIT-01-2019-0019

Afridi, A.H., 2019. Transparency for Beyond-Accuracy Experiences: A Novel User Interface for Recommender Systems. Procedia Computer Science, The 10th International Conference on Ambient Systems, Networks and Technologies (ANT 2019) / The 2nd International Conference on Emerging Data and Industry 4.0 (EDI40 2019) / Affiliated Workshops 151, 335–344. https://doi.org/10.1016/j.procs.2019.04.047

Alphabet, 2020. Google AdSettings [WWW Document]. URL https://adssettings.google.com/ (accessed 1.23.20).

Alyari, F., Jafari Navimipour, N., 2018. Recommender systems: A systematic review of the state of the art literature and suggestions for future research. Kybernetes 47, 985–1017. https://doi.org/10.1108/K-06-2017-0196

Åman, P., Liikkanen, L.A., 2010. A Survey of Music Recommendation Aids 4.

Amer, K., Noujaim, J., 2019. The Great Hack. Netflix.

Arcand, M., Nantel, J., Arles-Dufour, M., Vincent, A., 2007. The impact of reading a web site's privacy statement on perceived control over privacy and perceived trust. Online Information Review 31, 661–681. https://doi.org/10.1108/14684520710832342

Aupers, S., 2012. 'Trust no one': Modernization, paranoia and conspiracy culture. European Journal of Communication 27, 22–34. https://doi.org/10.1177/0267323111433566

Awad, N.F., Krishnan, M.S., 2006. The Personalization Privacy Paradox: An Empirical Evaluation of Information Transparency and the Willingness to be Profiled Online for Personalization. MIS Quarterly 30, 13–28. https://doi.org/10.2307/25148715

Basilico, J., 2019. Recent Trends in Personalization: A Netflix Perspective.

Basilico, J., 2018. Artwork Personalization at Netflix.

Basilico, J., 2016. Past, Present & Future of Recommender Systems: An Industry Perspective.

Beckman, F., 2019. Dark New World. American Book Review 40, 6–7. https://doi.org/10.1353/abr.2019.0104

Beleffi, C., 2019. Disfluency by design in the infosphere era: encouraging careful decisions (Laurea Magistrale / Specialistica). Politecnico di Milano.

Bendet, N., 2020. Is Spotify's random play button really random? [WWW Document]. Medium. URL https://uxdesign.cc/randomly-not-random-2fd53536513c (accessed 2.10.20).

Bennett, J., Lanning, S., 2007. The netflix prize, in: Proceedings of KDD Cup and Workshop. New York, NY, USA, p. 35.

Berdichevsky, D., Neuenschwander, E., 1999. Toward an ethics of persuasive technology. Commun. ACM 42, 51–58. https://doi.org/10.1145/301353.301410

Berman, A.E., 2016. Bridging the Mental Healthcare Gap With Artificial Intelligence. Singularity Hub. URL https://singularityhub.com/2016/10/10/bridging-the-mental-healthcare-gap-with-artificial-intelligence/ (accessed 11.26.19).

Bertolo, M., Mariani, I., 2014. Game design: gioco e giocare tra teoria e progetto. Pearson, Milano; Torino.

Borchers, A., Herlocker, J., Konstan, J., Reidl, J., 1998. Ganging up on information overload. Computer 31, 106–108. https://doi.org/10.1109/2.666847

Bostandjiev, S., O'Donovan, J., Höllerer, T., 2013. LinkedVis: exploring social and semantic career recommendations, in: Proceedings of the 2013 International Conference on Intelligent User Interfaces - IUI '13. Presented at the 2013 international conference, ACM Press, Santa Monica, California, USA, p. 107. https://doi.org/10.1145/2449396.2449412

Bostandjiev, S., O'Donovan, J., Höllerer, T., 2012. TasteWeights: A Visual Interactive Hybrid Recommender System, in: Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys '12. ACM, New York, NY, USA, pp. 35–42. https://doi.org/10.1145/2365952.2365964

Bozdag, V.E., 2015. Bursting the Filter Bubble: Democracy, Design, and Ethics (Doctoral Thesis).

Bridge, D., 2002. Towards Conversational Recommender Systems: A Dialogue Grammar Approach 14.

Brokerhof, I.M., Bal, P.M., Jansen, P.G.W., Solinger, O.N., 2018. Fictional Narratives and Identity Change: Three Pathways Through Which Stories Influence the Dialogical Self 24.

Bruns, S., Valdez, A.C., Greven, C., Ziefle, M., Schroeder, U., 2015. What Should I Read Next? A Personalized Visual Publication Recommender System, in: Yamamoto, S. (Ed.), Human Interface and the Management of Information. Information and Knowledge in Context, Lecture Notes in Computer Science. Springer International Publishing, Cham, pp. 89–100. https://doi.org/10.1007/978-3-319-20618-9_9

Buchanan, R., 1992. Wicked Problems in Design Thinking. Design Issues 8, 5. https://doi.org/10.2307/1511637

Buckland, M.K., 2017. Information and society, The MIT Press essential knowledge series. The MIT Press, Cambridge, Massachusetts.

Budiu, R., 2018. Can Users Control and Understand a UI Driven by Machine Learning? [WWW Document]. Nielsen Norman Group. URL https://www.nngroup.com/articles/machine-learning-ux/ (accessed 2.28.20).

Burke, R., 2017. Multisided Fairness for Recommendation. arXiv:1707.00093 [cs].

Burke, R., 2002. Hybrid Recommender Systems: Survey and Experiments 40.

Burke, R., Abdollahpouri, H., Malthouse, E.C., Thai, K.P., Zhang, Y., 2019. Recommendation in multistakeholder environments. Presented at the RecSys 2019 - 13th ACM Conference on Recommender Systems, pp. 566–567. https://doi.org/10.1145/3298689.3346973

Burke, R.D., Abdollahpouri, H., Mobasher, B., Gupta, T., 2016. Towards Multi-Stakeholder Utility Evaluation of Recommender Systems, in: UMAP.

Burke, R.D., Hammond, K.J., Yound, B.C., 1997. The FindMe approach to assisted browsing. IEEE Expert 12, 32–40. https://doi.org/10.1109/64.608186

Burr, C., Cristianini, N., Ladyman, J., 2018. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds & Machines 28, 735–774. https://doi.org/10.1007/s11023-018-9479-0

Calero Valdez, A., Ziefle, M., 2019. The users' perspective on the privacy-utility trade-offs in health recommender systems. International Journal of Human-Computer Studies, Advances in Computer-Human Interaction for Recommender Systems 121, 108–121. https://doi.org/10.1016/j.ijhcs.2018.04.003

Calero Valdez, A., Ziefle, M., Verbert, K., 2016. HCI for Recommender Systems: the Past, the Present and the Future, in: Proceedings of the 10th ACM Conference on Recommender Systems - RecSys '16. Presented at the 10th ACM Conference, ACM Press, Boston, Massachusetts, USA, pp. 123–126. https://doi.org/10.1145/2959100.2959158

Çano, E., Morisio, M., 2017. Hybrid Recommender Systems: A Systematic Literature Review. IDA 21, 1487–1524. https://doi.org/10.3233/IDA-163209

Cantador, I., Fernández-Tobías, I., Bellogín, A., 2013. Relating personality types with user preferences in multiple entertainment domains.

Carroll, D., 2015. You say you ignore the banners but they never ignore you. Digital Content Next. URL https://digitalcontentnext.org/blog/2015/09/28/you-say-you-ignore-the-banners-but-they-never-ignore-you/ (accessed 12.3.19).

Chelliah, M., Sarkar, S., Zheng, Y., Kakkar, V., 2019. Recommendation for multi-stakeholders and through neural review mining. Presented at the International Conference on Information and Knowledge Management, Proceedings, pp. 2979–2981. https://doi.org/10.1145/3357384.3360321

Chen, J., 2019. Analysis Paralysis [WWW Document]. Investopedia. URL https://www.investopedia.com/terms/a/analysisparalysis.asp (accessed 11.28.19).

Chen, L., Pu, P., 2012. Critiquing-based recommenders: survey and emerging trends. User Model User-Adap Inter 22, 125–150. https://doi.org/10.1007/s11257-011-9108-6

Chen, W., Quan-Haase, A., 2020. Big Data Ethics and Politics: Toward New Understandings. Social Science Computer Review 38, 3–9. https://doi.org/10.1177/0894439318810734

Chen, Y., Ma, X., Cerezo, A., Pu, P., 2014. Empatheticons: Designing Emotion Awareness Tools for Group Recommenders, in: Proceedings of the XV International Conference on Human Computer Interaction - Interacción '14. Presented at the XV International Conference, ACM Press, Puerto de la Cruz, Tenerife, Spain, pp. 1–8. https://doi.org/10.1145/2662253.2662269

Chen, Y., Pu, P., 2012. CoFeel: Using Emotions for Social Interaction in Group Recommender Systems 8.

Chi, E.H., 2009. Information Seeking Can Be Social. Computer 42, 42–46. https://doi.org/10.1109/MC.2009.87

Christensen, I.A., Schiaffino, S., 2011. Entertainment recommender systems for group of users. Expert Systems with Applications S0957417411007482. https://doi.org/10.1016/j.eswa.2011.04.221

Chung, S., n.d. info - Sougwen Chung. URL https://sougwen.com/info (accessed 4.19.20).

Cialdini, R.B., 2009. Influence: science and practice. HarperCollins ebooks, Pymble, NSW; New York, NY.

Cianciutti, J., 2011. John Ciancutti's answer to Is there a better alternative to the 5-star rating system? - Quora [WWW Document]. URL https://www.quora.com/Is-there-a-better-alternative-to-the-5-star-rating-system/answer/John-Ciancutti (accessed 1.15.20).

Cisco, 2019. Cisco Visual Networking Index: Forecast and Trends, 2017–2022 White Paper [WWW Document]. cisco.com. URL https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html (accessed 11.26.19).

Covington, P., Adams, J., Sargin, E., 2016. Deep Neural Networks for YouTube Recommendations, in: Proceedings of the 10th ACM Conference on Recommender Systems - RecSys '16. Presented at the 10th ACM Conference, ACM Press, Boston, Massachusetts, USA, pp. 191–198. https://doi.org/10.1145/2959100.2959190

Cramer, H., Evers, V., Ramlal, S., van Someren, M., Rutledge, L., Stash, N., Aroyo, L., Wielinga, B., 2008. The effects of transparency on trust in and acceptance of a content-based art recommender. User Model User-Adap Inter 18, 455–496. https://doi.org/10.1007/s11257-008-9051-3

Cromwell, H.C., Mears, R.P., Wan, L., Boutros, N.N., 2008. Sensory gating: a translational effort from basic to clinical science. Clin EEG Neurosci 39, 69–72. https://doi.org/10.1177/155005940803900209

Daniel, A., Flew, T., 2010. The Guardian reportage of the UK MP expenses scandal: A case study of computational journalism.

Dara, S., Chowdary, C.R., Kumar, C., 2019. A survey on group recommender systems. J Intell Inf Syst. https://doi.org/10.1007/s10844-018-0542-3

Davidson, J., Livingston, B., Sampath, D., Liebald, B., Liu, J., Nandy, P., Van Vleet, T., Gargi, U., Gupta, S., He, Y., Lambert, M., 2010. The YouTube video recommendation system, in: Proceedings of the Fourth ACM Conference on Recommender Systems - RecSys '10. Presented at the fourth ACM conference, ACM Press, Barcelona, Spain, p. 293. https://doi.org/10.1145/1864708.1864770

De Mauro, A., Greco, M., Grimaldi, M., 2016. A formal definition of Big Data based on its essential features. Library Review 65, 122–135. https://doi.org/10.1108/LR-06-2015-0061

De Vries, K., 2010. Identity, profiling algorithms and a world of ambient intelligence. Ethics Inf Technol 12, 71–85. https://doi.org/10.1007/s10676-009-9215-9

Deci, E.L., 2004. Intrinsic Motivation and Self-Determination, in: Encyclopedia of Applied Psychology. Elsevier, pp. 437–448. https://doi.org/10.1016/B0-12-657410-3/00689-9

Design for Trust [WWW Document], 2020. Design for Trust. URL https://dft.sri.com/ (accessed 2.17.20).

Dinev, T., Hart, P., 2005. Internet Privacy Concerns and Social Awareness as Determinants of Intention to Transact. International Journal of Electronic Commerce 10, 7–29. https://doi.org/10.2753/JEC1086-4415100201

Donkers, T., Loepp, B., Ziegler, J., 2016. Tag-Enhanced Collaborative Filtering for Increasing Transparency and Interactive Control, in: Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization - UMAP '16. Presented at the 2016 Conference, ACM Press, Halifax, Nova Scotia, Canada, pp. 169–173. https://doi.org/10.1145/2930238.2930287

dschool_bootleg_deck_2018_final_sm+(2).pdf, n.d.

Dubberly, H., Pangaro, P., Haque, U., 2009. What is interaction? are there different types? interactions 16, 69. https://doi.org/10.1145/1456202.1456220

Dutton, W.H., Shepherd, A., 2006. Trust in the Internet as an experience technology. Information, Communication & Society 9, 433–451. https://doi.org/10.1080/13691180600858606

Eppler, M.J., Mengis, J., 2004. The Concept of Information Overload: A Review of Literature from Organization Science, Accounting, Marketing, MIS, and Related Disciplines. The Information Society 20, 325–344. https://doi.org/10.1080/01972240490507974

Farrell, R.G., Danis, C., Ramakrishnan, S., Amini, R., 2012. Technologies for Lifestyle Change (LIFESTYLE 2012) and First International Workshop on Interfaces for Recommender.

Ferwerda, B., 2016. Improving the User Experience of Music Recommender Systems Through Personality and Cultural Information. Johannes Kepler University, Linz (AT).

Fisher, A., Margolis, J., 2003. Unlocking the clubhouse: women in computing. SIGCSE Bull. 35, 23. https://doi.org/10.1145/792548.611896

Frayling, C., 1993. Research in Art and Design. Royal College of Art Research Papers 1, 1–5.

Freedy, A., DeVisser, E., Weltman, G., Coeyman, N., 2007. Measurement of trust in human-robot collaboration, in: 2007 International Symposium on Collaborative Technologies and Systems. Presented at the 2007 International Symposium on Collaborative Technologies and Systems, pp. 106–114. https://doi.org/10.1109/CTS.2007.4621745

Friedman, K., 2008. Research into, by and for design. Journal of Visual Art Practice 7, 153–160. https://doi.org/10.1386/jvap.7.2.153_1

Friedrich, G., Zanker, M., 2011. A Taxonomy for Generating Explanations in Recommender Systems. AI Magazine 32, 90–98. https://doi.org/10.1609/aimag.v32i3.2365

Friis Dam, R., Siang Teo, Y., 2020a. 5 Stages in the Design Thinking Process [WWW Document]. The Interaction Design Foundation. URL https://www.interaction-design.org/literature/article/5-stages-in-the-design-thinking-process (accessed 3.5.20).

Friis Dam, R., Siang Teo, Y., 2020b. Personas – A Simple Introduction [WWW Document]. The Interaction Design Foundation. URL https://www.interaction-design.org/literature/article/personas-why-and-how-you-should-use-them (accessed 3.10.20).

Friis Dam, R., Siang Teo, Y., 2020c. What is Design Thinking and Why Is It So Popular? [WWW Document]. The Interaction Design Foundation. URL https://www.interaction-design.org/literature/article/what-is-design-thinking-and-why-is-it-so-popular (accessed 3.5.20).

Gardikiotis, A., Baltzis, A., 2012. 'Rock music for myself and justice to the world!': Musical identity, values, and music preferences. Psychology of Music 40, 143–163. https://doi.org/10.1177/0305735610386836

Gauch, S., Speretta, M., Chandramouli, A., Micarelli, A., 2007. User Profiles for Personalized Information Access, in: Brusilovsky, P., Kobsa, A., Nejdl, W. (Eds.), The Adaptive Web: Methods and Strategies of Web Personalization, Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, pp. 54–89. https://doi.org/10.1007/978-3-540-72079-9_2

Gebbia, J., 2016. How Airbnb designs for trust.

Giddens, A., 1997. Modernity and self-identity: self and society in the Late Modern Age, 1. publ. in the U.S.A. ed. Stanford Univ. Press, Stanford, Calif.

Goker, M.H., Langley, P., Thompson, C.A., 2004. A Personalized System for Conversational Recommendations. jair 21, 393–428. https://doi.org/10.1613/jair.1318

Goldberg, D., Nichols, D., Oki, B.M., Terry, D., 1992. Using collaborative filtering to weave an information tapestry. Commun. ACM 35, 61–70. https://doi.org/10.1145/138859.138867

Goldberg, R.A., 2012. Enemies within: the culture of conspiracy in modern America. Yale University Press, Place of publication not identified.

Gou, L., You, F., Guo, J., Wu, L., Zhang, X. (Luke), 2011. SFViz: interest-based friends exploration and recommendation in social networks, in: Proceedings of the 2011 Visual Information Communication - International Symposium on - VINCI '11. Presented at the 2011 Visual Information Communication - International Symposium, ACM Press, Hong Kong, China, pp. 1–10. https://doi.org/10.1145/2016656.2016671

Grabner-Kräuter, S., Kaluscha, E.A., 2003. Empirical research in on-line trust: a review and critical assessment. International Journal of Human-Computer Studies 58, 783–812. https://doi.org/10.1016/S1071-5819(03)00043-0

Grasch, P., Felfernig, A., Reinfrank, F., 2013. ReComment: towards critiquing-based recommendation with speech interaction, in: Proceedings of the 7th ACM Conference on Recommender Systems - RecSys '13. Presented at the 7th ACM conference, ACM Press, Hong Kong, China, pp. 157–164. https://doi.org/10.1145/2507157.2507161

Gurung, A., Luo, X., Raja, M.K., 2008. An Empirical Investigation on Customer's Privacy Perceptions, Trust and Security Awareness in E-commerce Environment. Journal of Information Privacy and Security 4, 42–60. https://doi.org/10.1080/2333696X.2008.10855833

Harambam, J., Bountouridis, D., Makhortykh, M., van Hoboken, J., 2019. Designing for the better by taking users into account: a qualitative evaluation of user control mechanisms in (news) recommender systems, in: Proceedings of the 13th ACM Conference on Recommender Systems - RecSys '19. Presented at the 13th ACM Conference, ACM Press, Copenhagen, Denmark, pp. 69–77. https://doi.org/10.1145/3298689.3347014

Harley, A., 2018a. Individualized Recommendations: Users' Expectations & Assumptions [WWW Document]. Nielsen Norman Group. URL https://www.nngroup.com/articles/recommendation-expectations/ (accessed 12.28.19).

Harley, A., 2018b. UX Guidelines for Recommended Content [WWW Document]. Nielsen Norman Group. URL https://www.nngroup.com/articles/recommendation-guidelines/ (accessed 12.4.19).

Har-Paz, M.M., 2019. Make Me Think: Friction as a Function in User Experience [WWW Document]. Medium. URL https://modus.medium.com/friction-as-a-function-in-user-experience-make-me-think-390ee17c6cf5 (accessed 1.29.20).

He, C., Parra, D., Verbert, K., 2016. Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities. Expert Systems with Applications 56, 9–27. https://doi.org/10.1016/j.eswa.2016.02.013

Heller, S., Vienne, V., 2015. Becoming a graphic and digital designer: a guide to careers in design, Fifth edition. ed. John Wiley & Sons, Inc, Hoboken, New Jersey.

Hemp, P., 2009. Death by information overload. Harv Bus Rev 87, 82–9, 121.

Herlocker, J.L., Konstan, J.A., Riedl, J., 2000. Explaining collaborative filtering recommendations, in: Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work - CSCW '00. Presented at the 2000 ACM conference, ACM Press, Philadelphia, Pennsylvania, United States, pp. 241–250. https://doi.org/10.1145/358916.358995

Hervas-Drane, A., 2008. Word of Mouth and Recommender Systems: A Theory of the Long Tail.

Hick, W.E., 1952. On the Rate of Gain of Information. Quarterly Journal of Experimental Psychology 4, 11–26. https://doi.org/10.1080/17470215208416600

Hilbert, M., 2012. Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making. Psychological Bulletin 138, 211–237. https://doi.org/10.1037/a0025940

Hill, C., Corbett, C., St. Rose, A., 2010. Why so few? women in science, technology, engineering, and mathematics. AAUW, Washington, D.C.

Hodgson, J., 2019. How does 'Black Mirror' represent contemporary aspects of surveillance and the dystopian outcomes it can produce? Critical Reflections: A Student Journal on Contemporary Sociological Issues.

Hoppin, A., 2020. Trust, Not Data, as the New Oil: Designing for Data Trusts. [WWW Document]. Medium. URL https://medium.com/@ahoppin/trust-not-data-as-the-new-oil-designing-for-data-trusts-2ff128a85528 (accessed 2.7.20).

Hu, R., 2010. Design and user issues in personality-based recommender systems, in: Proceedings of the Fourth ACM Conference on Recommender Systems - RecSys '10. Presented at the fourth ACM conference, ACM Press, Barcelona, Spain, p. 357. https://doi.org/10.1145/1864708.1864790

Huang, W., Liu, B., Tang, H., 2019. Privacy Protection for Recommendation System: A Survey. Presented at the Journal of Physics: Conference Series. https://doi.org/10.1088/1742-6596/1325/1/012087

Human factors in computing systems (Ed.), 1994. Human factors in computing systems: CHI Conference proceedings: Celebrating interdependence; 24-28 Apr 1994. New York: ACM.

IBM, 2011. Bringing smarter computer to big data.

IDEO, n.d. Design Thinking: History [WWW Document]. IDEO | Design Thinking. URL https://designthinking.ideo.com/history (accessed 3.6.20).

Isinkaye, F.O., Folajimi, Y.O., Ojokoh, B.A., 2015. Recommendation systems: Principles, methods and evaluation. Egyptian Informatics Journal 16, 261–273. https://doi.org/10.1016/j.eij.2015.06.005

Jameson, F., 2007. Postmodernism, or, The cultural logic of late capitalism, 2. print. in pbk. ed. Verso, London.

Jannach, D., Adomavicius, G., 2017. Price and Profit Awareness in Recommender Systems 6.

Jannach, D., Naveed, S., Jugovac, M., 2017. User Control in Recommender Systems: Overview and Interaction Challenges, in: Bridge, D., Stuckenschmidt, H. (Eds.), E-Commerce and Web Technologies, Lecture Notes in Business Information Processing. Springer International Publishing, Cham, pp. 21–33. https://doi.org/10.1007/978-3-319-53676-7_2

Jin, Y., Tintarev, N., Verbert, K., 2018. Effects of personal characteristics on music recommender systems with different levels of controllability, in: Proceedings of the 12th ACM Conference on Recommender Systems - RecSys '18. Presented at the 12th ACM Conference, ACM Press, Vancouver, British Columbia, Canada, pp. 13–21. https://doi.org/10.1145/3240323.3240358

Johnson, S., 2011. Where good ideas come from: the natural history of innovation, 1. paperback ed. Riverhead Books, New York.

Joinson, A., Reips, U.-D., Buchanan, T., Schofield, C.B.P., 2010. Privacy, Trust, and Self-Disclosure Online. Human-Comp. Interaction 25, 1–24. https://doi.org/10.1080/07370020903586662

Page 101: Do you trust me? - POLITesi

187186

Bibliography

Joyce, A., 2018. Microinteractions in User Experience [WWW Document]. Nielsen Norman Group. URL https://www.nngroup.com/articles/microinteractions/ (accessed 3.19.20).
Juan, W., Yue-Xin, L., Chun-Ying, W., 2019. Survey of Recommendation Based on Collaborative Filtering. Journal of Physics: Conference Series. https://doi.org/10.1088/1742-6596/1314/1/012078
Jugovac, M., Jannach, D., 2017. Interacting with Recommenders—Overview and Research Directions. ACM Trans. Interact. Intell. Syst. 7, 1–46. https://doi.org/10.1145/3001837
Kalbach, J., 2016. Mapping experiences: a guide to creating value through journeys, blueprints, and diagrams. O’Reilly, Beijing; Boston.
Kani-Zabihi, E., Helmhout, M., 2012. Increasing Service Users’ Privacy Awareness by Introducing On-Line Interactive Privacy Features, in: Laud, P. (Ed.), Information Security Technology for Applications, Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, pp. 131–148. https://doi.org/10.1007/978-3-642-29615-4_10
Kelly, K., 2014. What technology wants. Penguin Books, New York.
Khasawneh, O.Y., 2018a. Technophobia without boarders: The influence of technophobia and emotional intelligence on technology acceptance and the moderating influence of organizational climate. Computers in Human Behavior 88, 210–218. https://doi.org/10.1016/j.chb.2018.07.007
Khasawneh, O.Y., 2018b. Technophobia: Examining its hidden factors and defining it. Technology in Society 54, 93–100. https://doi.org/10.1016/j.techsoc.2018.03.008
Kiang, M.Y., 2000. Optimizing Human-Computer Interaction for the Electronic Commerce Environment 1, 22.
Kinch, N., 2018. Data Trust, by Design: Principles, patterns and best practices (Part 1) [WWW Document]. Medium. URL https://medium.com/greater-than-experience-design/data-trust-by-design-principles-patterns-and-best-practices-part-1-defffaac014b (accessed 2.17.20).
King, R., Churchill, E.F., Tan, C., 2017. Designing with Data: Improving the User Experience with A/B Testing 369.
Kizilcec, R.F., 2016. How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface, in: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16. ACM Press, Santa Clara, California, USA, pp. 2390–2395. https://doi.org/10.1145/2858036.2858402
Knight, P. (Ed.), 2002. Conspiracy nation: the politics of paranoia in postwar America. New York University Press, New York.
Knight, W., 2017. Here’s how you might help Siri get smarter [WWW Document]. MIT Technology Review. URL https://www.technologyreview.com/s/603613/siri-may-get-smarter-by-learning-from-its-mistakes/ (accessed 11.26.19).
Knijnenburg, B.P., Willemsen, M.C., Gantner, Z., Soncu, H., Newell, C., 2012. Explaining the user experience of recommender systems. User Model User-Adap Inter 22, 441–504. https://doi.org/10.1007/s11257-011-9118-4
Koene, A., Perez, E., Carter, C.J., Statache, R., Adolphs, S., O’Malley, C., Rodden, T., McAuley, D., 2015. Ethics of Personalized Information Filtering, in: Tiropanis, T., Vakali, A., Sartori, L., Burnap, P. (Eds.), Internet Science. Springer International Publishing, Cham, pp. 123–132. https://doi.org/10.1007/978-3-319-18609-2_10
Komiak, Wang, Benbasat, 2004. Trust Building in Virtual Salespersons Versus in Human Salespersons: Similarities and Differences. e-Service Journal 3, 49. https://doi.org/10.2979/esj.2004.3.3.49
Konstan, J.A., Riedl, J., 2012. Recommender systems: from algorithms to user experience. User Model User-Adap Inter 22, 101–123. https://doi.org/10.1007/s11257-011-9112-x
Koskinen, I., Zimmerman, J., Binder, T., Redstrom, J., Wensveen, S., 2013. Design Research Through Practice: From the Lab, Field, and Showroom. IEEE Trans. Profess. Commun. 56, 262–263. https://doi.org/10.1109/TPC.2013.2274109
Krisjack, 2015. Recommender Systems in Netflix. A Practical Guide to Building Recommender Systems. URL https://buildingrecommenders.wordpress.com/2015/11/18/recommender-systems-in-netflix/ (accessed 1.22.20).
Kulesza, T., Stumpf, S., Burnett, M., Kwan, I., 2012. Tell me more?: the effects of mental model soundness on personalizing an intelligent agent, in: Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems - CHI ’12. ACM Press, Austin, Texas, USA, p. 1. https://doi.org/10.1145/2207676.2207678
Kulesza, T., Stumpf, S., Burnett, M., Wong, W.-K., Riche, Y., Moore, T., Oberst, I., Shinsel, A., McIntosh, K., 2010. Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs, in: 2010 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, Leganes, Madrid, Spain, pp. 41–48. https://doi.org/10.1109/VLHCC.2010.15
Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I., Wong, W.-K., 2013. Too much, too little, or just right? Ways explanations impact end users’ mental models, in: 2013 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, San Jose, CA, USA, pp. 3–10. https://doi.org/10.1109/VLHCC.2013.6645235


Kumar, A., Singh, Y., 2019. A Survey on Popular Recommender Systems 6, 6.
Kumar, J., 2018. Breaking the filter-bubble: Using visualizations to encourage blind-spots exploration (Master Thesis).
Lanier, J., 2013. Who owns the future?, First Simon & Schuster hardcover edition. Simon & Schuster, New York.
Larsen, G., Lawson, R., Todd, S., 2009. The consumption of music as self-representation in social interaction. Australasian Marketing Journal (AMJ) 17, 16–26. https://doi.org/10.1016/j.ausmj.2009.01.006
LeCun, Y., Bengio, Y., Hinton, G., 2015. Deep learning. Nature 521, 436–444. https://doi.org/10.1038/nature14539
Lee, S., Choi, J., 2017. Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity. International Journal of Human-Computer Studies 103, 95–105. https://doi.org/10.1016/j.ijhcs.2017.02.005
Littman, M.L., 1994. Markov games as a framework for multi-agent reinforcement learning, in: Cohen, W.W., Hirsh, H. (Eds.), Machine Learning Proceedings 1994. Morgan Kaufmann, San Francisco (CA), pp. 157–163. https://doi.org/10.1016/B978-1-55860-335-6.50027-1
Lu, T., Pal, D., Pal, M., 2010. Contextual Multi-Armed Bandits 8.
Manouselis, N., Costopoulou, C., 2007. Analysis and Classification of Multi-Criteria Recommender Systems. World Wide Web 10, 415–441. https://doi.org/10.1007/s11280-007-0019-8
Manyika, J., Chui, M., Bughin, J., Dobbs, R., Roxburgh, C., Hung Byers, A., 2011. Big data: The next frontier for innovation, competition, and productivity | McKinsey [WWW Document]. mckinsey.com. URL https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/big-data-the-next-frontier-for-innovation (accessed 11.26.19).
Marti, P., Megens, C., Hummels, C., 2016. Data-Enabled Design for Social Change: Two Case Studies. Future Internet 8, 46. https://doi.org/10.3390/fi8040046
Martínez-Córcoles, M., Teichmann, M., Murdvee, M., 2017. Assessing technophobia and technophilia: Development and validation of a questionnaire. Technology in Society 51, 183–188. https://doi.org/10.1016/j.techsoc.2017.09.007
Mason, O.J., Stevenson, C., Freedman, F., 2014. Ever-present threats from information technology: the Cyber-Paranoia and Fear Scale. Front. Psychol. 5. https://doi.org/10.3389/fpsyg.2014.01298
Massa, P., Avesani, P., 2007. Trust-aware Recommender Systems, in: Proceedings of the 2007 ACM Conference on Recommender Systems, RecSys ’07. ACM, New York, NY, USA, pp. 17–24. https://doi.org/10.1145/1297231.1297235
Mayer, R.C., Davis, J.H., Schoorman, F.D., 1995. An Integrative Model of Organizational Trust. The Academy of Management Review 20, 709. https://doi.org/10.2307/258792
McInerney, J., Lacker, B., Hansen, S., Higley, K., Bouchard, H., Gruson, A., Mehrotra, R., 2018a. Explore, exploit, and explain: personalizing explainable recommendations with bandits, in: Proceedings of the 12th ACM Conference on Recommender Systems - RecSys ’18. ACM Press, Vancouver, British Columbia, Canada, pp. 31–39. https://doi.org/10.1145/3240323.3240354
McInerney, J., Lacker, B., Hansen, S., Higley, K., Bouchard, H., Gruson, A., Mehrotra, R., 2018b. Explore, exploit, and explain: personalizing explainable recommendations with bandits, in: Proceedings of the 12th ACM Conference on Recommender Systems, RecSys ’18. Association for Computing Machinery, Vancouver, British Columbia, Canada, pp. 31–39. https://doi.org/10.1145/3240323.3240354
McNee, S.M., Lam, S.K., Konstan, J.A., Riedl, J., 2003. Interfaces for Eliciting New User Preferences in Recommender Systems, in: Brusilovsky, P., Corbett, A., de Rosis, F. (Eds.), User Modeling 2003, Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, pp. 178–187. https://doi.org/10.1007/3-540-44963-9_24
McNee, S.M., Riedl, J., Konstan, J.A., 2006a. Being accurate is not enough: how accuracy metrics have hurt recommender systems, in: CHI ’06 Extended Abstracts on Human Factors in Computing Systems - CHI EA ’06. ACM Press, Montréal, Québec, Canada, p. 1097. https://doi.org/10.1145/1125451.1125659
McNee, S.M., Riedl, J., Konstan, J.A., 2006b. Making recommendations better: an analytic model for human-recommender interaction, in: CHI ’06 Extended Abstracts on Human Factors in Computing Systems - CHI EA ’06. ACM Press, Montréal, Québec, Canada, p. 1103. https://doi.org/10.1145/1125451.1125660
Mesko, G., Bernik, I., 2011. Cybercrime: Awareness and Fear: Slovenian Perspectives, in: 2011 European Intelligence and Security Informatics Conference. pp. 28–33. https://doi.org/10.1109/EISIC.2011.12
Milano, S., Taddeo, M., Floridi, L., 2019. Recommender Systems and their Ethical Challenges. SSRN Journal. https://doi.org/10.2139/ssrn.3378581
Mohamed, M.H., Khafagy, M.H., Ibrahim, M.H., 2019. Recommender Systems Challenges and Solutions Survey, in: 2019 International Conference on Innovative Trends in Computer Engineering (ITCE). IEEE, Aswan, Egypt, pp. 149–155. https://doi.org/10.1109/ITCE.2019.8646645

Mohammadi, V., Rahmani, A.M., Darwesh, A.M., Sahafi, A., 2019. Trust-based recommendation systems in Internet of Things: a systematic literature review. Human-centric Computing and Information Sciences 9. https://doi.org/10.1186/s13673-019-0183-8
Montaner, M., López, B., de la Rosa, J.L., 2003. A Taxonomy of Recommender Agents on the Internet. Artificial Intelligence Review 19, 285–330. https://doi.org/10.1023/A:1022850703159
Morales, J., 2020. Remote Usability Testing 101 & Getting Started | Adobe XD Ideas. Ideas. URL https://xd.adobe.com/ideas/process/user-testing/remote-usability-testing/ (accessed 3.25.20).
Mordini, E., 2007. Technology and fear: is wonder the key? Trends in Biotechnology 25, 544–546. https://doi.org/10.1016/j.tibtech.2007.08.012
Mucko, P., Kokoszka, A., Skłodowska, Z., 2005. The comparison of coping styles, occurrence of depressive and anxiety symptoms, and locus of control among patients with diabetes type 1 and type 2. Clinical Diabetology 6, 240–249.
Najafi, I., 2012. The Role of e-Commerce Awareness on Increasing Electronic Trust. Life Science Journal.
Nestik, T., Zhuravlev, A., Eduard, P., Marianna, S.C., Lioudmila, B., Piurcosky, F.P., Ferreira, J.V., 2018. Technophobia as a Cultural and Psychological Phenomenon: Theoretical Analysis. Interação - Revista de Ensino, Pesquisa e Extensão 20, 266–281. https://doi.org/10.33836/interacao.v20i1.191
Netflix, Inc, 2020. Netflix [WWW Document]. URL https://www.netflix.com/browse (accessed 1.22.20).
Nielsen, J., 1994a. 10 Heuristics for User Interface Design: Article by Jakob Nielsen [WWW Document]. Nielsen Norman Group. URL https://www.nngroup.com/articles/ten-usability-heuristics/ (accessed 12.11.19).
Nielsen, J., 1994b. Enhancing the Explanatory Power of Usability Heuristics, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’94. ACM, New York, NY, USA, pp. 152–158. https://doi.org/10.1145/191666.191729
Nilashi, M., Jannach, D., Ibrahim, O. bin, Esfahani, M.D., Ahmadi, H., 2016. Recommendation quality, transparency, and website quality for trust-building in recommendation agents. Electronic Commerce Research and Applications 19, 70–84. https://doi.org/10.1016/j.elerap.2016.09.003
Norman, D.A., 2013. The design of everyday things, Revised and expanded edition. Basic Books, New York, New York.
North, A.C., Hargreaves, D.J., 1999. Music and Adolescent Identity. Music Education Research 1, 75–92. https://doi.org/10.1080/1461380990010107
Overbeeke, C.J., Hummels, C.C.M., Soegaard, M., Dam, R.F., 2013. Industrial design.
Paraschakis, D., 2018. Algorithmic and Ethical Aspects of Recommender Systems in e-Commerce (Licentiate Thesis). Malmö university, Faculty of Technology and Society. https://doi.org/10.24834/2043/24268
Paraschakis, D., 2017. Towards an ethical recommendation framework, in: 2017 11th International Conference on Research Challenges in Information Science (RCIS). IEEE, Brighton, United Kingdom, pp. 211–220. https://doi.org/10.1109/RCIS.2017.7956539
Paraschakis, D., 2016. Recommender Systems from an Industrial and Ethical Perspective, in: Proceedings of the 10th ACM Conference on Recommender Systems - RecSys ’16. ACM Press, Boston, Massachusetts, USA, pp. 463–466. https://doi.org/10.1145/2959100.2959101
Parasuraman, R., Sheridan, T.B., Wickens, C.D., 2008. Situation Awareness, Mental Workload, and Trust in Automation: Viable, Empirically Supported Cognitive Engineering Constructs. Journal of Cognitive Engineering and Decision Making 2, 140–160. https://doi.org/10.1518/155534308X284417
Park, D.H., Kim, H.K., Choi, I.Y., Kim, J.K., 2012. A literature review and classification of recommender systems research. Expert Systems with Applications 39, 10059–10072. https://doi.org/10.1016/j.eswa.2012.02.038
Parra, D., Brusilovsky, P., 2015. User-controllable personalization: A case study with SetFusion. International Journal of Human-Computer Studies 78, 43–67. https://doi.org/10.1016/j.ijhcs.2015.01.007
Paul, D., Kundu, S., 2020. A Survey of Music Recommendation Systems with a Proposed Music Recommendation System. Advances in Intelligent Systems and Computing 937, 279–285. https://doi.org/10.1007/978-981-13-7403-6_26
Pessemier, T.D., Dhondt, J., 2015. TravelWithFriends: a Hybrid Group Recommender System for Travel Destinations 10.
Polácek, L., 2014. How to shuffle songs? Labs. URL https://labs.spotify.com/2014/02/28/how-to-shuffle-songs/ (accessed 2.10.20).
Prasad, R., Kumari, V.V., 2012. A Categorical Review of Recommender Systems.
Pu, P., Chen, L., 2006. Trust building with explanation interfaces, in: Proceedings of the 11th International Conference on Intelligent User Interfaces - IUI ’06. ACM Press, Sydney, Australia, p. 93. https://doi.org/10.1145/1111449.1111475

Pu, P., Chen, L., Hu, R., 2012. Evaluating recommender systems from the user’s perspective: survey of the state of the art. User Model User-Adap Inter 22, 317–355. https://doi.org/10.1007/s11257-011-9115-7
Pu, P., Chen, L., Hu, R., 2011. A user-centric evaluation framework for recommender systems, in: Proceedings of the Fifth ACM Conference on Recommender Systems - RecSys ’11. ACM Press, Chicago, Illinois, USA, p. 157. https://doi.org/10.1145/2043932.2043962
Rentfrow, P.J., Goldberg, L.R., Zilca, R., 2011. Listening, Watching, and Reading: The Structure and Correlates of Entertainment Preferences. Journal of Personality 79, 223–258. https://doi.org/10.1111/j.1467-6494.2010.00662.x
Reppel, A., Szmigin, I., 2011. Data Doppelgänger: Addressing the Darker Side of Digital Identity. ACR European Advances E-09.
Resnick, P., Varian, H.R., 1997. Recommender systems. Communications of the ACM 40, 56–58. https://doi.org/10.1145/245108.245121
Ricci, F., 2015. Recommender systems handbook. Springer Science+Business Media, New York, NY.
Rowe, P.G., 1987. Design thinking. MIT Press, Cambridge, Mass.
Sacharin, V., Schlegel, K., Scherer, K.R., 2012. Geneva Emotion Wheel Rating Study.
Saffer, D., 2010. Designing for interaction: creating innovative applications and devices, 2nd ed., Voices that matter. New Riders, Berkeley, CA.
Sahu, H., Sharma, N., Gupta, U., 2019. A New Framework for Collecting Implicit User Feedback for Movie and Video Recommender System, in: Khare, A., Tiwary, U.S., Sethi, I.K., Singh, N. (Eds.), Recent Trends in Communication, Computing, and Electronics, Lecture Notes in Electrical Engineering. Springer, Singapore, pp. 399–408. https://doi.org/10.1007/978-981-13-2685-1_38
Salovey, P., Mayer, J.D., 1990. Emotional Intelligence. Imagination, Cognition and Personality 9, 185–211. https://doi.org/10.2190/DUGG-P24E-52WK-6CDG
Santos, L.R., Montagna, G., 2018. Digital Ergonomics: Understanding the Bridges to the Digital World, in: International Conference on Applied Human Factors and Ergonomics. Springer, pp. 116–126.
Schafer, J.B., 2005. DynamicLens: A Dynamic User-Interface for a Meta-Recommendation System 6.
Scherer, K.R., 2005. What are emotions? And how can they be measured? Social science information 44, 695–729.
Schrier, K., 2016. Knowledge games: how playing games can solve problems, create insight, and make change, Tech.edu. Johns Hopkins University Press, Baltimore.
Sharma, A., 2016. Designing Interfaces for Recommender Systems [WWW Document]. Medium. URL https://medium.com/the-graph/designing-uis-for-recommender-systems-f7ffa2ca234f (accessed 1.15.20).
Sheehan, K.B., 2002. Toward a Typology of Internet Users and Online Privacy Concerns. The Information Society 18, 21–32. https://doi.org/10.1080/01972240252818207
Shenk, D., 1999. Data Smog: Surviving the Information Glut. HarperCollins Publishers, New York, NY, USA.
Shneiderman, B., 2002. Leonardo’s laptop: human needs and the new computing technologies. MIT Press, Cambridge, Mass.
Shullenberger, G., 2019. We All Wear Tinfoil Hats Now. The New Atlantis 87–98. https://doi.org/10.2307/26828529
Signorelli, A.D., 2020. È YouTube ad aver generato estremisti e complottisti? Wired. URL https://www.wired.it/attualita/politica/2020/02/10/youtube-radicalizzazione-estremismo-studi/ (accessed 2.10.20).
Simes, A., 2016. Bursting Filter Bubbles With Serendipity (Master Thesis).
Simon, H.A., 2008. The sciences of the artificial, 3rd ed. MIT Press, Cambridge, Mass.
Sinha, B.B., Dhanalakshmi, R., 2019. Evolution of recommender paradigm optimization over time. Journal of King Saud University - Computer and Information Sciences. https://doi.org/10.1016/j.jksuci.2019.06.008
Slater, M.D., 2007. Reinforcing Spirals: The Mutual Influence of Media Selectivity and Media Effects and Their Impact on Individual Behavior and Social Identity. Commun Theory 17, 281–303. https://doi.org/10.1111/j.1468-2885.2007.00296.x
Soegaard, M., 2019. Hick’s Law: Making the choice easier for users [WWW Document]. The Interaction Design Foundation. URL https://www.interaction-design.org/literature/article/hick-s-law-making-the-choice-easier-for-users (accessed 11.28.19).
Solanki, V.K., Díaz, V.G., Davim, J.P., 2019. Handbook of IoT and Big Data. CRC Press.
Spotify AB, 2020. Spotify [WWW Document]. Spotify. URL https://open.spotify.com/browse (accessed 1.23.20).
Stappers, P., Giaccardi, E., 2014. Research through Design, in: The Encyclopedia of Human-Computer Interaction, 2nd Ed. [WWW Document]. URL https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/research-through-design (accessed 11.25.19).
Stappers, P.J., 2014. Prototypes as a central vein for knowledge development, in: Valentine, L. (Ed.), Prototype: Design and Craft in the 21st Century. Bloomsbury Academic, pp. 85–97.
Steck, H., 2018. Calibrated recommendations, in: Proceedings of the 12th ACM Conference on Recommender Systems - RecSys ’18. ACM Press, Vancouver, British Columbia, Canada, pp. 154–162. https://doi.org/10.1145/3240323.3240372

Steck, H., van Zwol, R., Johnson, C., 2015. Interactive Recommender Systems 2.
Sun, Z., Guo, Q., Yang, J., Fang, H., Guo, G., Zhang, J., Burke, R., 2019. Research Commentary on Recommendations with Side Information: A Survey and Research Directions. arXiv:1909.12807 [cs].
Swar, B., Hameed, T., Reychav, I., 2017. Information overload, psychological ill-being, and behavioral intention to continue online healthcare information search. Computers in Human Behavior 70, 416–425. https://doi.org/10.1016/j.chb.2016.12.068
Swearingen, K., Sinha, R., 2002. Interaction Design for Recommender Systems 10.
Swearingen, K., Sinha, R., 2001. Beyond Algorithms: An HCI Perspective on Recommender Systems 11.
Symeonidis, P., Nanopoulos, A., Manolopoulos, Y., 2009. MoviExplain: a recommender system with explanations, in: Proceedings of the Third ACM Conference on Recommender Systems - RecSys ’09. ACM Press, New York, New York, USA, p. 317. https://doi.org/10.1145/1639714.1639777
Taghavi, M., Bentahar, J., Bakhtiyari, K., Hanachi, C., 2018. New Insights Towards Developing Recommender Systems. Comput J 61, 319–348. https://doi.org/10.1093/comjnl/bxx056
Tang, T.Y., Winoto, P., 2016. I should not recommend it to you even if you will like it: the ethics of recommender systems. New Review of Hypermedia and Multimedia 22, 111–138. https://doi.org/10.1080/13614568.2015.1052099
Te’Neil Lloyd, B., 2002. A Conceptual Framework for Examining Adolescent Identity, Media Influence, and Social Development. Review of General Psychology 6, 73–91. https://doi.org/10.1037/1089-2680.6.1.73
The Interaction Design Foundation, 2017. Design Thinking, Essential Problem Solving 101 - It’s More Than Scientific [WWW Document]. The Interaction Design Foundation. URL https://www.interaction-design.org/literature/article/design-thinking-essential-problem-solving-101-it-s-more-than-scientific (accessed 3.6.20).
Theonlyandy, 2020. Human-centered design. Wikipedia.
Tibaldeo, R.F., 2015. The Heuristics of Fear: Can the Ambivalence of Fear Teach Us Anything in the Technological Age? eip 6, 225–238. https://doi.org/10.14746/eip.2015.1.9
Tintarev, N., Masthoff, J., 2007. Effective Explanations of Recommendations: User-Centered Design 4.
Tiropanis, T., INSCI (Eds.), 2015. Internet science: second International Conference, INSCI 2015, Brussels, Belgium, May 27–29, 2015; proceedings, Lecture notes in computer science. Springer, Cham.
Tsai, C.-H., Brusilovsky, P., 2019. Exploring social recommendations with visual diversity-promoting interfaces. ACM Transactions on Interactive Intelligent Systems 10. https://doi.org/10.1145/3231465
Tsai, C.-H., Brusilovsky, P., 2017. Enhancing Recommendation Diversity Through a Dual Recommendation Interface 7.
Tschimmel, K., 2012. Design Thinking as an effective Toolkit for Innovation 20.
Valentine, L. (Ed.), 2013. Prototype: design and craft in the 21st century. Bloomsbury, London.
Valve Corporation, 2020. Steam Labs - Interactive Recommender [WWW Document]. URL https://store.steampowered.com/recommender/76561198136410519?snr=1_2500_4_ (accessed 1.23.20).
Varisco, L., 2019. Personal interaction design: introducing in the design process the discussion on the consequences of the use of personal information (Doctoral thesis). Politecnico di Milano, Milano.
Verbert, K., Parra, D., Brusilovsky, P., Duval, E., 2013. Visualizing recommendations to support exploration, transparency and controllability, in: Proceedings of the 2013 International Conference on Intelligent User Interfaces - IUI ’13. ACM Press, Santa Monica, California, USA, p. 351. https://doi.org/10.1145/2449396.2449442
Vieler-Porter, A., 2019. MAB optimization makes testing faster and smarter with machine learning. Frosmo. URL https://frosmo.com/multi-armed-bandit-optimization-makes-testing-faster-and-smarter-with-machine-learning/ (accessed 1.5.20).
Vig, J., Sen, S., Riedl, J., 2009. Tagsplanations: Explaining Recommendations Using Tags 10.
Wachowski, Lana (as The Wachowski brothers), Wachowski, Lilly (as The Wachowski brothers), 1999. The Matrix.
Wang, W., 2005. Design of trustworthy online recommendation agents: Explanation facilities and decision strategy support. University of British Columbia, Vancouver.
Warnesta, P., 2005. Modeling a Dialogue Strategy for Personalized Movie Recommendations 6.
West, P.M., Ariely, D., Bellman, S., Bradlow, E., Huber, J., Johnson, E., Kahn, B., Little, J., Schkade, D., 1999. Agents to the Rescue? Marketing Letters 10, 285–300. https://doi.org/10.1023/A:1008127022539
What is Design Thinking? [WWW Document], n.d. The Interaction Design Foundation. URL https://www.interaction-design.org/literature/topics/design-thinking (accessed 3.5.20).
What is User Centered Design? [WWW Document], n.d. The Interaction Design Foundation. URL https://www.interaction-design.org/literature/topics/user-centered-design (accessed 3.5.20).

Wills, C.E., Zeljkovic, M., 2011. A personalized approach to web privacy: awareness, attitudes and actions. Information Management & Computer Security 19, 53–73. https://doi.org/10.1108/09685221111115863
Wilson, M.C., 2019. I Read About “Design For Trust” So You Don’t Have To – Simply Secure [WWW Document]. URL https://simplysecure.org/blog/design-trust (accessed 2.6.20).
Witlox, F., 2015. Beyond the Data Smog? Transport Reviews 35, 245–249. https://doi.org/10.1080/01441647.2015.1036505
Wolfson, S., 2018. Amazon’s Alexa recorded private conversation and sent it to random contact. The Guardian.
Wong, D., Faridani, S., Bitton, E., Hartmann, B., Goldberg, K., 2011. The diversity donut: enabling participant control over the diversity of recommended responses, in: CHI ’11 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’11. Association for Computing Machinery, Vancouver, BC, Canada, pp. 1471–1476. https://doi.org/10.1145/1979742.1979793
Wu, C.-Y., Alvino, C.V., Smola, A.J., Basilico, J., 2016. Using Navigation to Improve Recommendations in Real-Time, in: Proceedings of the 10th ACM Conference on Recommender Systems - RecSys ’16. ACM Press, Boston, Massachusetts, USA, pp. 341–348. https://doi.org/10.1145/2959100.2959174
Youn, S., 2009. Determinants of Online Privacy Concern and Its Influence on Privacy Protection Behaviors Among Young Adolescents. Journal of Consumer Affairs 43, 389–418. https://doi.org/10.1111/j.1745-6606.2009.01146.x
Yumansky, S., 2008. Virtual Identity: Applying Narrative Theory to Online Character Development. Stream: Inspiring Critical Thought 1, 40–52.
Zabaleta Etxebarria, N., Igartua López, J.I., Errasti Lozares, N., Markuerkiaga Arritola, L., Mondragon Goi Eskola Politeknikoa, 2012. Project Management in the wave of Innovation, exploring the links.
Zhang, S., Yao, L., Sun, A., Tay, Y., 2019. Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys 52. https://doi.org/10.1145/3285029
Zhu, H., Xiong, H., Ge, Y., Chen, E., 2014. Mobile app recommendations with security and privacy awareness, in: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14. Association for Computing Machinery, New York, New York, USA, pp. 951–960. https://doi.org/10.1145/2623330.2623705
Zimmerman, J., Forlizzi, J., Evenson, S., 2007. Research Through Design As a Method for Interaction Design Research in HCI, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’07. ACM, New York, NY, USA, pp. 493–502. https://doi.org/10.1145/1240624.1240704
Zimmerman, J., Stolterman, E., Forlizzi, J., 2010. An Analysis and Critique of Research Through Design: Towards a Formalization of a Research Approach, in: Proceedings of the 8th ACM Conference on Designing Interactive Systems, DIS ’10. ACM, New York, NY, USA, pp. 310–319. https://doi.org/10.1145/1858171.1858228


Acknowledgments

I would like to thank...

My parents, for investing so much in my future, and my whole family, for always being a great support.

Anna, for what her presence has meant to me along the way, for what having her beside me at this milestone means, and for what she will mean in my future.

My cousin Simone, for being like a brother and for everything he has shared with me, in life and over the last three years.

My brother Alessio, hoping to be the best model I can for him and his future, despite everything that divides us.

Alessia, Andrea, Andrea, Cristina, Emma, Giacomo and Marco, friends of a lifetime, for always being there and reminding me where I come from.

Adele, Anna, Elena, Emilia, Federica, Iacopo, Laura, Luna, Marta, Pierstefano, Roberto, Sandra, Simone and Vanni, the friends of “Misandria”, who shared this whole journey with me.

Caterina, for proving to be an invaluable friend and offering me one of the biggest opportunities of my life: to grow both as a professional and as a person.

Marco, for his candour, his honesty and the intimacy of our friendship, and for being an endless source of inspiration in all the discussions and experiences we drew each other into.

The guys from via Alserio 1, Alessandro, Luca, Marco, Simone and Tommaso, for welcoming me to this city and making me grow fond of it.

The participants in the final part of this research, Sandra, Marco, Francesca, Paola, Cristina, Mirko, Riccardo, Tiziana and Simone, for their kindness and essential contribution.

My tutor, Laura, for guiding me through this long work without ever abandoning me, while always leaving me enough space to grow and fend for myself.

All the teachers of my life, those I admired and those I despised, for contributing in the most varied ways to bringing me where I am today, step by step, always placing the right obstacles for my development in front of me.

Here ends my educational journey, with the hope and the intention never to stop learning and improving. Here begins a new stage of life, in which I hope to contribute to and leave a mark on the reality we live in, living up to expectations and returning at least in part the immeasurable value of the experience received from every person who has been, is and will be part of my life.




Recommender systems have received a great deal of attention on the technical side, in the optimisation of algorithms, while in recent years little has been done on the side of human-computer interaction. For this reason, it is crucial to explore what user experience design can contribute to the development of this kind of technology.

Starting from an established user experience evaluation framework for recommender systems, this thesis introduces the concept of Dialogue, built on the acknowledged concepts of Transparency and Control. The research aims at experimenting with this new concept and exploring its effects on Trust towards recommender systems, in order to demonstrate its efficacy and legitimate its application. The research is conducted through the means of design and its processes. Its goal is to understand how to evaluate the quality of the key concepts in existing recommender systems, or during the design process, and to identify a set of good design patterns for implementing Dialogue in interactive recommender systems. Based on the results of this experimentation, the output of the thesis is a set of guidelines for the design of trustworthy recommender systems founded on the concept of Dialogue.