POLITECNICO DI MILANO
Master in Management Engineering
Implementing a Technology Steering Strategy:
Analysis of a General Purpose Technology
Thesis supervisor
Professor Claudio Dell’Era
Co-Supervisor
Stefano Magistretti
Master student
Alessandra Bosetti (838765)
Academic Year 2016/2017
“Every time we’ve moved ahead in
IBM, it was because someone was
willing to take a chance, put his head
on the block, and try something new”
Thomas J. Watson
Abstract
This research focuses on uncovering the main dynamics driving
the technology steering process in the case of General Purpose
Technologies. Technology steering, described for the first time
by the Design-Driven Innovation theory, is defined as the
process through which a company identifies the different
applications a technology can address. Technology innovation
has often been merely applied to support technology
substitution, resulting in a profit loss for the innovator, who
fails to seize the technology's full potential, and for the entire
economy, which suffers from an innovation slowdown. The
existing literature focuses more on interpreting the
applications than on the technology's development and
integration. The objective of this thesis is to investigate the
integration. The objective of this thesis is to investigate the
latter, taking advantage of an inspiring case study, the IBM
cognitive technology named Watson. It is a General Purpose
Technology and thus finds application in many different
fields. Technology steering emerges as an excellent
commercialisation strategy for General Purpose Technologies.
Moreover, this thesis points out the existence of a divergent
phase, starting with the launch of the first commercial product
on the market, which sees the exploration of the technology
potential applications through a series of activities organised by
the company.
Executive Summary
Introduction
In 2001, Leedy defined what research is, adopting a utilitarian point of view:
“Research is a procedure by which we attempt to find systematically, and with the support of
demonstrable fact, the answer to a question or the resolution of a problem.”
Given that the aim of research is to find an answer to a question, the first phase of a research process
is, naturally, the problem setting, which is, in other words, the identification of the research question.
The problem setting phase implies that a certain area of research has already been identified and that
the topic is of interest for the research community. The objective of this first phase is to analyse the
present situation, that is, understanding which literature fields have already explored the
selected topic, identifying which case studies have been considered to draw the present conclusions,
selecting the target case study, or case studies, that might be investigated to carry on the analysis in
the chosen research area, and, finally, sketching out the guidelines to perform the research.
People are continuously surrounded by unanswered questions, unresolved problems,
conjectures and unproven beliefs. Researchers aim at answering some of these questions,
increasing understanding by interpreting facts or ideas and reaching some conclusions about their
meaning. Given the number of possible questions and the continuous emergence of new ones,
research is iterative, beginning with the questions whose answers might have the highest impact
on their field. Advances in knowledge and interpretations of facts build on previous knowledge,
which, in turn, is expanded by those advances. It is also important to highlight that the resolution
of research problems often gives rise to further problems that need resolving.
This chapter presents the process and the background that led to the birth of this research
paper. In particular, it describes the identification of a relevant research question, which happened
by starting from a broad research area and narrowing the problem until it was possible to find a
valuable answer, given the time constraints and the limited access to information that this
research work had to respect.
Problem Setting
The first task when starting a research work is the analysis of the present body of knowledge
around the topic of interest, which means reviewing the literature, from both traditional academic
sources and the latest business review articles. The objective of the initial review of the literature is
to discover relevant material published about the chosen field of study and to search for a suitable
problem area. These articles should be rather general because, initially, it is better
to define no more than a problem area, rather than a specific research problem, within the general
body of knowledge of interest. For this paper, the research area identified in the first instance
was the management of the innovation process, defined in general terms.
After having identified the broad literature field the research is addressing, the aim is to narrow
down the scope of the idea or problem, step by step, until it becomes a highly specific research
problem. This narrowing process requires a lot of background reading in order to discover what
has already been written about the subject, what research has been carried out, where further work
needs to be done and where controversial issues still remain. Considering this research, the path
that led to the definition of the specific problem followed these steps: from general innovation
management to technology innovation management. The area was then further narrowed by
selecting a specific type of innovation, the one described by the Design-Driven Innovation theory,
and finally the type of technology was narrowed twice over, focusing not only on General Purpose
Technologies, but specifically on those with a digital nature.
Moving from an interest in a wider issue to defining the research problem more closely, so that it
becomes a specific research problem, requires an enquiring mind, an eye for inconsistencies and
inadequacies in current theory, and a measure of imagination. In fact, every step in the
definition of the specific research problem of this thesis has been the result of the scrutiny of a
number of publications on the topic, ranging from general argumentations to very detailed ones.
The narrowing process has the aim of identifying a research question with well-defined
boundaries that can be answered within a single paper. This is because every piece of
research contributes only to a small part of a greater body of knowledge or understanding. In fact,
every new research should collect all the available pieces of information and try to figure out how
to build on them to increase the whole body of knowledge.
In the same way as it is useful to delineate a specific research area, it is often useful to pose a
simple question. It was with this objective in mind that the research question of this work, which
has evolved many times with the narrowing of the scope, was finally conceived as follows:
“How can companies steer a General Purpose Technology
in order to integrate it into meaningful application fields?”
Figure 1 – Narrowing the Research Area
What follows is a description of the process that led to an understanding of the relevance of the
defined research question.
Literature regarding innovation processes and dynamics has evolved over time. Since the 1980s, the
literature has distinguished two different approaches: the first pointing to market forces as the main
determinants of technical change (the so-called market-pull) and the second defining technology as
an autonomous or quasi-autonomous factor, at least in the short run (technology-push) (Dosi, 1982).
Recent studies in the innovation field have brought a third focal dimension to the traditional
innovation literature: design. This new dimension moves from the
intuition that sometimes companies fail in fully exploiting the opportunities provided by the
emerging technologies because they see them myopically, only as substitutes for previous
technologies, without exploring the entire range of applications the technology might enable. When
technology innovation has the sole objective of finding a substitute for the old technology, the result
is an opportunity loss. The substitution perspective implies that other interesting technological
applications, sometimes arising in completely different application fields, are found only long after
the technology becomes available on the market, and this delay results in a loss of potential profit
for the innovator. In turn, the fact that the innovator captures only a portion of the profits she could
actually make with the technology innovation stifles investment in innovation. The smaller the
forecasted revenues from an innovation, the weaker the incentives to invest in it. The ultimate
consequence of this myopic behaviour is that it causes a slowdown in the
innovation pace of the entire economy, which, of course, should be avoided. To avoid this loss, new
theories and managerial models that help companies integrate the design perspective into their
strategy have recently appeared in the innovation research landscape. One of the biggest
contributions to this literature field lies in the design-driven innovation theory (Verganti, 2008). Design-
driven innovation is based on the idea that each product holds a particular meaning for consumers.
Usually, theories of the management of innovation assume that design becomes relevant in the mature
stages of industries (if ever). However, recent evidence shows that the radical innovation of product
meanings is a key factor in the beginning stages of an industry’s development, when technology is
still fluid. Companies that interpret technology as an enabler of new product meanings mix research
activities related to new technologies and studies about emerging lifestyles and societal values in
order to introduce radical design-driven innovations. Verganti has named these radical design-driven
innovations Technology Epiphanies, meaning that each technology embeds a set of disruptive new
meanings waiting to be uncovered. Only by revealing those quiescent meanings can a company
seize the technology's full value.
The body of literature just described has represented the baseline of the entire research work, from
the interest in exploring all the details connected with this newborn literature stream, to the
identification of the current knowledge gaps and the definition of the research process to address
them.
Gaps in the Current Knowledge
One of the first tasks on the way to deciding on the detailed topic of research is to find a question, an
unresolved controversy, a gap in knowledge or an unrequited need within the chosen subject. This
search requires an awareness of current issues in the subject and an inquisitive and questioning mind.
The problem should be significant, as it is not worth time and effort investigating a trivial problem or
repeating work which has already been done elsewhere.
As discussed in the previous paragraph, design-driven innovation has addressed the fact that
technologies embed many different meanings and that, from the unveiling of those meanings, it is
possible to identify many different applications enabled by a single technology. The process through
which a company can identify all the different applications a technology can enable is called
technology steering. Design-driven companies usually transfer applications across industries rather
than introduce new-to-the-world technologies. In fact, a peculiar aspect of applying the
technology to develop many different marketable solutions is that research and development
departments aim at discovering existing technologies adopted in other industries, as new meanings
are often found outside the company's typical ecosystem. As already said, design-driven
innovation is built around the concept of a product's “meaning”, which is intended as the way the
user perceives it. Each user has a different perception of a certain product or service; for this reason,
the literature stresses the importance of shifting attention from the “what” (product or service features)
or the “how” (product or service interface) to the “why” (the purpose for which it is used), in order
to identify a technology epiphany. An important contribution to the development of the design-driven
innovation theory comes from hermeneutics, derived from the Greek word
ἑρμηνεύω (hermeneuō), “to translate, interpret”. Interpretation is a necessary ingredient when dealing
with meanings, which, by definition, are the results of an interpretative process. The solution comes as
a natural consequence once a new interpretative paradigm is generated. One of the biggest differences
between technologies and meanings is that meanings are significantly context-dependent. What is
meaningful for users depends on the socio-cultural context in which a product is used, something that
may vary considerably over time and space. Because of their marked social implications, a radical
change of meaning is often co-generated, meaning that meanings cannot be defined by
businesses but are given by users immersed in a socio-cultural context. The exploration of radically
new meanings is a process of generative interpretation involving many different actors; in fact,
interpretations of the meaning of a product occur through continuous interactions among firms,
designers, users, and several stakeholders, both inside and outside a corporation.
Knowing the importance of the social dimension in shaping new
meanings, hermeneutics focuses on the dynamics through which new meanings arise. A central role
is given to the interpreter and to an active search for a diversity of interpretations, stemming both
from the interpreter herself but also from the external world.
Summarizing what the literature has said until now, an important acknowledgement has been that in
markets where everyone can easily gain access to new technologies, the big winners often are not the
companies that obtain them first and use them to enhance existing products. They are the companies
that understand how those technologies can be used to create better customer experiences than
existing applications do. And the biggest winners will be companies that learn to systematically
produce one technology epiphany after another. This leads to the process through which technology
epiphanies can be generated. To this end, the literature gives extreme importance to the role of the
interpreters and to the internal and external network a company should build. In fact, it is through the
understanding of the context variables and the identification of the interpreters’ values and
perspectives, that new meanings, the seeds of any technology epiphany, can be unveiled. Even though
this theory represents a fundamental step in the investigation of technology steering, the path to
a complete understanding of its different phases, development dynamics and managerial
implications is still far from complete. In fact, although the technology epiphany phenomenon was
defined by Verganti more than eight years ago, no procedures, methodologies or guidelines have
been identified by recent research. Indeed, considering that from Verganti (2009) to Buganza
et al. (2015) the unit of analysis was usually the application and not the technology itself, the gap
is evident. As previously reported, the existing literature has focused more on interpreting the
applications than on the technology's development and integration per se. The fact that the theory
does not explain, in a systematic way, how companies should act in order to successfully unveil the
different meanings, how the strategy should change according to the different phases of the
innovation process, and what the roles of executives, researchers and a company's employees are,
offers fertile ground on which new theories can be born and flourish. The identified literature gap
is as interesting to investigate as it is too broad to be solved in a
single research work. For this reason, the scope of the thesis has been further narrowed, identifying a
research structure that can lead to a contribution to the advancement of the design-driven
innovation literature stream.
Research Objectives
In order to carry out the research, considering time and resource constraints, it has been necessary to
divide the principal question, or problem, into more practicable sub-questions or sub-problems. The
identified gap, presented in the previous paragraph, is, as often happens in research projects, too
large and abstract to be examined as a whole. By dividing it into component parts (sub-problems), a
practical investigation becomes feasible. At this stage, the nature of the question was still very broad,
but it gave some indication of the type of research approach that could be appropriate. Since
technology steering is a topic whose dynamics have only been roughly sketched, the best way to start
investigating them is through case studies. Case study analysis offers a direct path to unveiling the
dynamics a company has drawn up and followed in order to succeed in applying a certain strategy. Even
though this method has evident limitations caused by the specificity of the selected case, it is possible,
by performing a series of standardisation steps, to gain some general knowledge about the topic of
interest.
Case study selection is not an easy task and implies an in-depth analysis of the different available
options. One of the variables that should be taken into consideration when choosing a case study is
the actual possibility of obtaining the required information. It is not possible to carry out research
when failing to collect the relevant information needed to tackle the problem, whether because of a
lack of access to documents or other sources, or because of a failure to obtain the co-operation of
individuals or organizations essential to the research. Moreover, it is necessary to decide whether to
consider a single case or many. The selection of a single case study is also an immediate way to
narrow the research area down. In fact, by selecting a unique case, the multitude of possible
variables that must be taken into account during the research shrinks to a manageable
number. For this reason, and because of the availability of primary and secondary sources, this
research work has been based on a single case study: the Watson technology, implemented by
IBM.
The selected case study had the effect of further narrowing the research scope, as after the initial
phase of analysis it was clear that the type of technology Watson represents has many interesting
peculiarities. In fact, it is classifiable in the group of General Purpose Technologies (GPTs), which are
technologies characterised by substantial and pervasive societal and economic effects. GPTs
can be applied to different markets, improve rapidly, and form the basis for a wave of
complementary innovations in a number of diverse existing industries, hence sustaining and
enhancing economic growth (Bresnahan and Gambardella, 1998; Gambardella and Giarratana, 2015).
These characteristics are due to their high level of technological generality, often referred to as
generality of purpose: the fact that they perform some generic function that lies at the heart of
very many actual or potential products and production systems. Most GPTs play the role of “enabling
technologies”, opening up new opportunities rather than offering complete final solutions. The idea
of a technology solution that, by definition, can be applied across multiple domains made it the right
case study to analyse. Recent literature describes commercialisation strategies to launch and diffuse
GPTs in the market. The dominant business model in markets for technologies is based on the idea of
developing a technology for licensing to downstream specialists. This business model is becoming
popular also for commercialising GPTs, because the fact that they are constructed so that they can
be employed by different potential downstream licensees makes licensing particularly profitable.
However, a final detail that made the case study even more specific is the fact that the technology
under analysis is a digital technology. This aspect introduces a series of variables to take into account
that differentiate it from traditional GPTs and that ended up requiring a different commercialisation
strategy.
Case Study Analysis and Results
The body of this research is articulated around a case study which, through its mechanisms, can help
unveil the intricate dynamics and rules that companies nowadays have to follow when launching a new,
disruptive General Purpose Technology. As anticipated earlier in the paper, the chosen case study is
IBM Watson. The analysis and the data collection have been ongoing for the whole duration of the
research, because IBM Watson is continuously growing, updating, expanding, changing and
improving. IBM Watson is an efficient analytical engine that pulls many sources of data together in
real time, discovers an insight, and deciphers a degree of confidence. These characteristics are
broadly identified with the name of cognitive computing, and IBM Watson is considered to be the
enabling technology for building a cognitive business. Over the last 20 years, IBM has worked to advance
the field of AI, and in 2006 it took on the challenge of creating an intelligent system able to compete
in the famous quiz show Jeopardy!. It took years of intense research and development, by a core
team of about 20 researchers, to implement IBM Watson. In the early stages of the research, the
team's efforts failed to produce promising results and, consequently, to have a significant
impact on Jeopardy!. To make progress, the research team ended up overhauling nearly all
of its dynamics, including both internal and external activities. Finally, in February 2011, the
supercomputer Watson came away victorious on Jeopardy!, winning with a commanding lead of
$77,147 after three days of play.
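The idea of an engine that pulls evidence from many sources and "deciphers a degree of confidence" can be illustrated with a toy sketch. The function, the candidate answers and the scores below are purely hypothetical and do not reflect IBM's actual DeepQA mechanics; they only show the general idea of merging per-source candidate scores into a single confidence ranking.

```python
# Minimal sketch of confidence scoring in a cognitive QA engine:
# several evidence sources each propose candidate answers with a
# score; the engine merges them and reports the top answer with an
# overall degree of confidence. All names and numbers are illustrative.

from collections import defaultdict

def merge_candidates(evidence_sources):
    """Combine per-source scores into one confidence per candidate."""
    totals = defaultdict(float)
    for source in evidence_sources:
        for answer, score in source.items():
            totals[answer] += score
    grand_total = sum(totals.values())
    # Normalise so the confidences sum to 1.0 across all candidates
    return {a: s / grand_total for a, s in totals.items()}

# Three hypothetical evidence sources scoring candidate answers
sources = [
    {"Toronto": 0.2, "Chicago": 0.7},
    {"Chicago": 0.6, "Boston": 0.1},
    {"Chicago": 0.4},
]
confidences = merge_candidates(sources)
best = max(confidences, key=confidences.get)
print(best, round(confidences[best], 2))  # prints: Chicago 0.85
```

The normalised score plays the role of the "degree of confidence" the engine attaches to its answer; a real system would weight sources and learn those weights rather than simply summing.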
At the beginning of the research, Watson was still a small project, and thoughts of commercialisation
were not uppermost in anyone's mind: the Grand Challenge, as IBMers used to call it, was a
demonstration project whose return for the company lay more in the buzz it created than in a
contribution to the bottom line. Commercialisation happened somewhat unexpectedly, as Nicola
Palazzo, the Italian Watson Leader, stated during the interview.
Watson started out as a single natural language Question Answering API (Application Programming
Interface); today, it consists of more than 50 technologies. It runs on Bluemix, a cloud platform that
gives easy access to everybody interested in the technology. Without Bluemix, Watson would
have been a very expensive piece of software that only wealthy companies could have accessed. Another
fundamental aspect in the development of the Watson offering has been the creation of an ecosystem.
The IBM Watson Ecosystem launched in November 2013 as a partner program for companies and
start-ups to leverage IBM Watson services. To date, more than 1,500 individuals and organizations
have contacted IBM to share their ideas for creating cognitive computing applications that redefine
how businesses and consumers make decisions.
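A hedged sketch of what calling a Watson-style question-answering service exposed as a cloud API might look like. The endpoint URL, credential scheme and JSON field names below are placeholders invented for illustration, not the real Bluemix/Watson API contract; the snippet only assembles the request rather than sending it.

```python
# Illustrative only: assembling a request to a hypothetical
# cloud-hosted QA service. Endpoint, auth scheme and field names
# are placeholders, not IBM's actual API.

import json

SERVICE_URL = "https://example-cloud.invalid/qa/v1/ask"  # placeholder

def build_request(question, api_key):
    """Assemble the headers and JSON body for a QA request."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"question": question, "max_answers": 3})
    return SERVICE_URL, headers, body

url, headers, body = build_request("Who founded IBM?", "demo-key")
print(json.loads(body)["question"])  # prints: Who founded IBM?
```

The point of exposing the technology this way is precisely the accessibility argument made above: any developer with a key and an HTTP client can consume the service, instead of only firms able to afford an on-premise installation.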
The commercialisation of a new technology is a tricky phase for any company that wants to innovate,
whether it is a start-up or a large corporation. Probably because of its awareness of the
complexity of these mechanisms, and the lessons learnt from its long history of innovation, the launch
of Watson has been scrupulously planned. A number of initiatives, both internal and external to the
company, have been implemented, and the process is still ongoing. What IBM is doing through its
commercialisation strategy is twofold: on one hand, it is working to build trust and confidence
around Watson through big, widely advertised conferences; on the other hand, it keeps
exploring the technology's possibilities and enlarging its reach through a series of internal and external
activities. Watson is classifiable in the group of General Purpose Technologies as it perfectly meets the
GPT definition given by Bresnahan and Trajtenberg in 1996. A General Purpose Technology should
possess the following characteristics:
1. Pervasiveness: the GPT should spread to most sectors, and Watson has already proved
its applicability to a high number of sectors;
2. Improvement: the GPT should get better over time and, hence, should keep lowering the costs
for its users. Watson's capabilities are improving over time, and low-cost applications have
already been developed;
3. Innovation spawning: the GPT should make it easier to invent and produce new products or
processes. The Watson technology is providing enormous possibilities for innovation in every
sector, granting companies access to improved performance and processes. The use of the
ecosystem and the open Bluemix platform is making it easier to invent and produce
new products or processes. Watson has, in addition, the peculiarity of being a digital
technology, a characteristic that opens up a vast series of managerial and strategic
implications.
As discussed in the literature on GPTs, the preferred commercialisation method has lately been
licensing. However, the case marks an evident change in General Purpose Technology
commercialisation strategy, bringing important implications for the current literature. The study of
Watson paves the way to a new possible strategy: technology steering, defined as the
process through which a company can identify all the different applications a technology can enable.
At a macro level, it is possible to highlight differences regarding the phases of the commercialisation
process and the company's involvement in these phases. As already said, with licensing,
the company does not get involved in the integration of the technology to create commercially
viable solutions: its involvement ends with the development. During steering, instead, the company
maintains ownership of the technology also during commercialisation.
The different involvement levels presented above lead to a fundamental consequence, which is
one of the main differences between the two business models. Compared to technology licensing,
the alternative path of technology steering that emerged through the case study implies an extended
ownership of the technology. In fact, by following the technology's evolution until it lands on the
market and is integrated into meaningful applications, the company can maintain
a deep ownership of it, not only at a formal level but by actually maintaining the knowledge about
its dynamics and functioning, and about the use that is made of it in the market. The main consequence
of keeping ownership of the technology is that the company has the possibility to understand how to
improve and upgrade it in order to match growing and changing market demand. IBM has identified
the use of APIs as a suitable method to constantly update the technology. By dividing the entire
technology into individual building blocks, it is possible to easily integrate and enlarge its features.
Figure 2 – Different Involvement of a Company During Technology Licensing or Steering
(Verganti, 2009)
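The API "building blocks" idea described above can be sketched as a small service registry: independent capability blocks are composed into pipelines, and new blocks can be added without touching existing ones. The registry class and the service names below are illustrative inventions, not IBM's actual architecture.

```python
# Sketch of modular API building blocks: the technology is split
# into small independent services that can be composed into larger
# applications, and new blocks can be registered without changing
# the rest. Names and services here are illustrative only.

class ServiceRegistry:
    """Holds independently deployable capability blocks by name."""

    def __init__(self):
        self._services = {}

    def register(self, name, func):
        self._services[name] = func

    def pipeline(self, names):
        """Compose registered blocks into a single callable."""
        def run(data):
            for name in names:
                data = self._services[name](data)
            return data
        return run

registry = ServiceRegistry()
registry.register("tokenize", lambda text: text.split())
registry.register("count", lambda tokens: len(tokens))

# Extending the offering is just registering a new block;
# existing pipelines are unaffected.
analyse = registry.pipeline(["tokenize", "count"])
print(analyse("cognitive computing for business"))  # prints: 4
```

This is the design rationale the case study attributes to the API approach: each block can be improved or replaced in isolation, which is what makes constant upgrading of the overall technology feasible.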
Another element uncovered through the study of Watson is that, when a company
pursues a technology steering strategy with a GPT, it has to undergo a divergent phase, starting from
the market launch of the technology, Watson in this case, which leads to the uncovering of many
possible applications.
The study allowed the identification of the mechanisms needed to obtain the divergent effect. A
company that wants to steer its GPT should implement the following two sets of activities, coherently
with the life-cycle phase of the technology in each application field:
1. Activities aimed at identifying new market opportunities (unveiling new meanings):
a. Research: mainly in the first phases of the commercialisation, or, in projects aiming
to upgrade and improve the technology;
b. Partnership: in the first phases of the commercialisation, in secure markets and to
consolidate the technology market presence;
c. University Program: during the exploration phase, after the first applications have
been launched, to engage fresh talent with the technology for a short-to-medium
period (months) in order to unveil development possibilities in complex fields;
d. Hackathons: during the exploration phase, after the first applications have been
launched, to crowdsource many ideas, mostly from non-traditional backgrounds, at a
low cost.
Figure 3 – Technology Steering (adapted from Philips J., 2011)
2. Support activities:
a. Sales strategy: unlike licensing, which is extremely costly for licensees
and thus limited to a small number of large, wealthy companies, technology steering
should pursue sales strategies with a restrained cost, affordable by anyone;
b. Product improvement: technology steering requires a constant update and
modification of the basic technology to adapt it to the most disparate sectors, and
companies should find an easy and cheap method to obtain the required advancement;
c. Promotion strategies: companies that want to steer a technology should find a way to
obtain a huge echo, spanning many different application fields, and going beyond the
traditional industries, in order to catch and unveil new technology meanings;
d. Customers’ education: when a completely new technology is developed, potential
customers must learn how to use it. For this reason, a company trying to steer a
technology should provide easy access to support and learning methods;
e. Internal transformation: the entire company should follow and sustain the
technology during the steering process.
Through the careful management of these activities, a company should be able to support the
commercialisation of its newly developed General Purpose Technology by playing a central role
throughout the entire process. As already noted concerning Design-Driven Innovation, identifying
meaningful applications requires identifying the new meanings the technology can embed. To this
end, it is particularly relevant to build a network of interpreters. Looking at the activities IBM is
implementing to discover marketable applications, the role they play in building a network around
the technology is evident. Partners, start-up founders, students and the other actors involved in the
activities come from very different backgrounds, and each brings a different point of view, or
interpretation, of the technology that can help it grow and flourish.
The study of Watson made it possible to adapt the main market and technology characteristics
required to successfully implement technology licensing (market fragmentation, company size and
technology applicability) to the technology steering case. The result is a list of five characteristics
that are fundamental to successfully applying a technology steering strategy:
1. Market fragmentation: the higher the fragmentation (geographic, industrial, etc.), the higher the
innovator's profit appropriability;
2. Technology accessibility: the more accessible the technology, the higher the number of
users and, consequently, the higher the profit;
3. Technology applicability: the wider the range of applications, the higher the
profitability (overcoming the profit stifling caused by spillovers);
4. Technology scalability: the technology's ability to change its scale in order to meet growing
demand;
5. Technology adaptability: the technology's ability to adapt to match specific clients'
needs, being easily adjustable or extensible by adding features.
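The five characteristics listed above can be read as a qualitative checklist for assessing whether a technology is a steering candidate. The sketch below is a minimal illustration: the characteristic names come from the list above, but the scoring scheme (a plain count of satisfied criteria) and the example assessment are assumptions made here, not a validated model from the thesis.

```python
# Illustrative checklist: the characteristic names come from the thesis;
# the simple fractional score and the example assessment are assumptions.

STEERING_CHARACTERISTICS = [
    "market_fragmentation",
    "technology_accessibility",
    "technology_applicability",
    "technology_scalability",
    "technology_adaptability",
]

def steering_readiness(assessment: dict) -> float:
    """Fraction of the five characteristics a technology satisfies."""
    satisfied = sum(bool(assessment.get(c)) for c in STEERING_CHARACTERISTICS)
    return satisfied / len(STEERING_CHARACTERISTICS)

# Hypothetical assessment of a GPT candidate:
candidate = {
    "market_fragmentation": True,
    "technology_accessibility": True,
    "technology_applicability": True,
    "technology_scalability": True,
    "technology_adaptability": False,
}
print(steering_readiness(candidate))  # 0.8
```

A real assessment would of course weigh the characteristics qualitatively rather than count them, but the checklist form makes the five criteria easy to apply side by side.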
In conclusion, this study offers an alternative to licensing for companies dealing with General
Purpose Technologies. From the point of view of the Design-Driven Innovation literature, the research
constitutes an initial investigation of the dynamics and methods companies should adopt while steering
a technology. Further research should aim to generalise the above results to any
type of technology.
Table of Contents
1. Introduction 7
1.1. Problem Setting 8
1.2. The Importance of General Purpose Technologies 8
1.3. Technology Commercialisation 9
1.4. Research Structure 11
2. Introduction to Technology Management 13
2.1. Introduction 14
2.2. Technology and Society Relationship in the Innovation Process 14
2.2.1. The Social Construction of Technology (SCOT) 15
2.2.2. Socio-Technical System (STS) 18
2.2.2.1. Innovation Systems: Rules and Dynamics 19
2.2.2.2. Co-Evolution of Technology and Society 24
2.3. The Design Driven Innovation: a Different Perspective 27
2.4. Interpreting and Envisioning New Meanings 37
2.5. Driving a Culture of Innovation Within the Companies’ Boundaries 42
3. Technology Innovation 45
3.1. Introduction 46
3.2. Understanding and Imagining the Future Context 46
3.2.1. Technology Future Analysis (TFA) 47
3.2.1.1. Scenario Planning, Backcasting and Roadmapping 48
3.3. The Problem-Solving Approach 54
3.4. Innovation Strategies 55
3.4.1. The Continuous Product Innovation 56
3.4.2. The Organizational Adaptability 60
3.4.3. The Collaborative Community Model 64
3.4.4. Industry Platforms and Ecosystems 68
3.5. From the Company Laboratories to the World Stage 71
4. Technology Integration 75
4.1. Introduction 76
4.2. Technology Integration 76
4.3. Developing Future Products and Services 79
4.3.1. Introduction to New Product Development 79
4.3.2. The Stage-Gate 82
4.3.3. Human-Centered Design 85
4.3.4. Design Thinking 88
4.3.5. Lean Start-up 93
4.3.6. Design Sprint 96
4.4. General Purpose Technology 99
4.4.1. General Purpose Technology Licensing 104
5. Research Methodology 109
5.1. Introduction 110
5.2. Reviewing the Literature 111
5.3. GAP and Research Question 113
5.4. Case Study Selection 115
5.5. Data Gathering 118
5.6. Data Analysis 122
6. Empirical Setting – Watson overview: the Concept and Functionalities 125
6.1. Introduction 126
6.2. IBM Watson Concept 127
6.2.1. Company Transformation 133
6.3. Watson Functioning 135
6.3.1. Learning Watson 140
7. Empirical Setting – Watson Journey: From the Laboratories to the Market 143
7.1. Introduction 144
7.2. Watson Commercialization 144
7.2.1. Watson Conferences 145
7.3. Watson Activities 147
7.3.1. Research 149
7.3.2. Partnership 152
7.3.3. University Program 156
7.3.4. Hackathon 158
8. Case Study Analysis – Details and Dynamics of Watson’s Innovation Process 167
8.1. Introduction 168
8.2. Technology Steering 169
8.2.1. Dynamics of the Diverging Phase 173
8.2.2. Characteristics for Implementing Technology Steering 180
8.3. Enabling Factors 182
8.4. Watson Network 187
8.4.1. Partnerships’ Evolution 189
9. Conclusions 193
9.1. Research Objective and Question 194
9.2. Main Research Outcomes 196
9.3. Limits and Follow Up 201
Table of Figures
Figure 2.1. – SCOT Framework 17
Figure 2.2. – Socio-Technical Systems’ Dimensions 21
Figure 2.3. – Cycle of Adaptation 26
Figure 2.4. – Radical and Incremental Innovations 30
Figure 2.5. – Innovation Typologies 32
Figure 2.6. – Design-Driven Innovation 32
Figure 2.7. – Technology Epiphany 33
Figure 2.8. – Different Paths Can Unveil Quiescent Meanings 36
Figure 2.9. – Philips AEH 39
Figure 2.10. – Steps of the Technology Epiphany Strategy 41
Figure 2.11 – Design-Driven Network 43
Figure 3.1. – Scenario Cone Showing Multiple Possibilities 49
Figure 3.2. – Examples of Roadmap with Future Technology Alternatives 53
Figure 3.3. – Thomke Experimentation’s Framework 55
Figure 3.4. – The Continuous Innovation Process 58
Figure 3.5. – The Adaptive Cycle Challenges 62
Figure 3.6. – The Community Model 67
Figure 4.1. – Seven Steps of BAH Model 81
Figure 4.2. – The Stage-Gate Model 83
Figure 4.3. – Design-Thinking Spaces 89
Figure 4.4. – The Five Stages of the Design Thinking Process 92
Figure 4.5. – The Lean Start-Up Methodology 95
Figure 4.6. – The Design Sprint Process 98
Figure 4.7. – Historical GPTs 103
Figure 5.1. – Methodological Framework 110
Figure 5.2. – Literature Background 113
Figure 5.3. – Case Selection Strategies 116
Figure 5.4. – Sources for Watson Chronicle Evolution 120
Figure 5.5. – Data Analysis Framework for This Research 123
Figure 6.1. – Watson Logo 127
Figure 6.2. – Technology Evolution 128
Figure 6.3. – Interactions Between Bluemix Architecture, Clients and Developers 136
Figure 6.4. – API Development Cycle 139
Figure 7.1. – World of Watson 2016 146
Figure 7.2. – Timeline of the Diffusion Activities Supporting Watson’s Commercialisation 148
Figure 7.3. – IBM Watson Partnership Network in 2015 156
Figure 7.4. – Cognitive Build Phases 165
Figure 8.1. – Company’s Involvement in the Technology Licensing and Steering Process 170
Figure 8.2. – Innovation Funnel 171
Figure 8.3. – Convergent and Divergent Phases 173
Figure 8.4. – Sources for Watson Chronicle Evolution 174
Figure 8.5. – Watson Steering: Application Development in Many Different Fields 175
Figure 8.6. – Contribution of Different Activities to the Development of Application per Field 180
Figure 8.7. – Timeline of the Watson’s Expansion in Different Application Fields 183
Figure 8.8. – Phases of the IBM’s Involvement During the Innovation Process 189
Figure 8.9. – Watson Network Evolution 192
Figure 9.1. – Watson Path to Discover Quiescent Meanings 194
Figure 9.2. – Company’s Involvement in the Technology Licensing and Steering Process 196
Figure 9.3. – Technology Steering 197
1. Introduction
1.1. Problem Setting
This introductory chapter, would like to state the reasons that determined the birth of this research.
The germ of the research was the questioning of how a company can define if an innovative
technology bears more than a single application, and, in the affirmative case, how to find and
commercialise all the possible applications. During the search through the technology innovation
literature, and thanks to a case study analysis, two main themes emerged as worth investigating: the
technology steering, which is the process through which a company finds all the different applications
a technology can power, and the General Purpose Technologies, which are a specific class of
technologies, whose diffusion, is characterised by a disruptive effect on the entire economy, due to
the almost unlimited number of applications they can be integrated into. These two literature streams,
that happen to be, somehow, intertwined, have traced the guidelines of the thesis.
1.2. The Importance of General Purpose Technologies
General Purpose Technologies (GPTs) are technological solutions that are applicable to many
different industries, improve rapidly, and form the basis for a wave of complementary innovations in
a number of diverse existing markets, generating overall economic growth (Bresnahan and
Gambardella, 1998). The reason why GPTs can easily span many different industries is their
characteristic of being highly general from a technological point of view and, therefore,
adaptable to a wide range of industrial sectors and commercial applications (Gambardella and
Giarratana, 2013). Another important characteristic of GPTs is their relationship with complementary
products. On the one hand, their pervasiveness allows them to be combined with technical solutions
from different sectors to create new solutions that act as a platform for subsequent
complementary technological developments (Bresnahan and Trajtenberg, 1995). This complementary
effect can magnify the impact of a GPT so that it drives global economic growth. On the other hand,
until a certain level of complementary products has been developed, the launch of a General
Purpose Technology on the market generates an economic slowdown, with the workforce and
companies struggling to cope with the disruption.
The presence of General Purpose Technologies can be traced back to early history; indeed,
discoveries that characterised humankind's progress, such as agriculture and writing, can be
classified as GPTs. Some relevant examples of General Purpose Technologies that emerged over the
last two centuries include the steam engine, nanotechnology, and the information and communication
technologies. A good example of this GPT growth process, subject as it is to
episodes of sharp acceleration and deceleration, is the hiatus of steam technology in the later
nineteenth century (Lipsey, 1998). The accumulation of capital in the form of steam engines was rather
slow. James Watt's improved engine was patented in 1769; however, it took more than 60 years for
steam to reach parity with water as a source of power. After parity was reached, steam and
water both remained cost-effective in many activities for at least another 50 years. Steam power, at that
time, was used in mining and cotton textiles, while other important sectors of the economy still
relied on water power. One of the reasons for the delay in steam power's takeover of the market
was the perfecting of the technology, which took a long time to
accomplish in every sector. Therefore, the growth of the GPT was quite intermittent, being
relatively rapid when Watt's engine first appeared on the market and again when it became possible to
switch to higher-pressure steam engines in the mid-nineteenth century. In this historical period, the
spill-over level was high, and it played a relevant role in the GPT's diffusion (Rosenberg and
Trajtenberg, 2001). These two aspects have important consequences, as they imply that:
1. Improvement of the technology is fundamental to a GPT's spread and fast diffusion;
2. Information and knowledge about the technology heavily impact the degree and speed of
diffusion of a General Purpose Technology.
Eventually, the strongest impact of steam power on productivity growth was felt in the second half
of the nineteenth century, rather than earlier (Crafts, 2003). What the literature on General Purpose
Technologies states is that their impact on economic growth is often very long-delayed (David, 1991).
The delay is caused mainly by the time taken to understand the true potential of the technology.
This example highlights the need to identify a different managerial approach for GPTs, in order
to accelerate their diffusion process by acting on the weaknesses of current solutions.
1.3. Technology Commercialisation
The commercialisation and diffusion of complex technologies like GPTs is difficult (Ardito, 2015),
as it is primarily limited by the effort required to adapt them to different industries. Recent literature
describes the most successful commercialisation strategy for GPTs: to launch and diffuse an
emergent GPT in the market, the literature identifies licensing as the most suitable business model.
Historically, technology licensing has tended to occur across national boundaries and reflected the
geographic limits of the licensor's market reach. Companies licensed their technologies in markets
they did not intend to enter. During the '80s and '90s, licensing tended to occur
between small technology specialists and large operating companies. What pushed small
market actors to license instead of commercialising the technology directly was the toughness of the
commercialisation phase: established companies have deeper experience in managing design,
distribution, after-sale services, marketing, and so on. However, this licensing strategy was
strongly limited by the fact that it could involve only a small number of downstream
manufacturers, limiting the profitability of the innovator. To react to this vulnerability, many
technology-based firms started to modify their business model, especially from the '90s, pursuing
the development of technologies with more general applicability. This generality
aimed at avoiding the problem of being able to license only to a few market specialists,
by providing products that could span multiple markets. The turn towards General Purpose
Technologies allowed innovators to overcome many of the licensing drawbacks. For example, the
licensor is less vulnerable in one-to-one negotiations with downstream companies; the overall
profit of the licensor can be increased by expanding the number of applications the general technology
can enable; and, since licensees have to invest in order to modify the general technology to fit
their needs, they are consequently more committed to the partnership. However, by analysing the
licensing business model, it is evident that some downsides still need to be managed. First of all, the
profit of the innovator is still constrained by the success of the downstream licensee, who is
committed only to paying a fixed fee. Second, the business model is tightly bound to the companies'
relative size, which determines their bargaining power: the bigger the company, the higher its
bargaining power. The profit and commercialisation success of small firms relying on licensing their
innovative technologies are still stifled by these variables.
In addition to the difficulties presented above concerning the successful commercialisation of
General Purpose Technologies, the Design-Driven Innovation literature has highlighted a further area
of commercialisation strategy where innovative companies often tend to fail. Incremental
innovation is not a great problem for established companies; however, managers' attention to
incremental innovation comes at a price. In fact, looking at a technology in terms of existing features
and performances will inevitably lead to a simple technology substitution, or to screening it off as not
useful. However, technologies have many hidden potential opportunities within them, and the first
step is to make them appear (Proni, 2007). If companies continue to look at technology innovation
only through a substitutional perspective, they fail to seize the full technology potential. Verganti,
in 2009, proposed that to move from an incremental, substitutional innovation to a radical one,
companies should focus on understanding the meanings the technologies bear in different contexts,
to different users. The process of unveiling a quiescent technology meaning is called Technology
Epiphany, and when it targets the entire range of applications the technology can enable, it is defined
as Technology Steering. By steering a developed technology, a company is able to launch a higher
number of applications on the market, increasing the innovation's profitability. The need for a
technology steering strategy is even more compelling when considering the commercialisation of
General Purpose Technologies, which, by definition, can foster a high number of applications.
Through the analysis of a case study, this thesis aims to identify a strategy that can jointly
answer these two emerging needs: the definition of a business model for General Purpose
Technologies that, unlike licensing, allows innovators to overcome the stifling of
their profits; and the delineation of a strategy that enables companies to seize the technology's full
potential, by steering it to uncover all its hidden meanings and increase its commercial
applications.
1.4. Research Structure
The thesis is structured as follows:
The first three chapters provide an overview of the most relevant literature contributions regarding
technology innovation management, further divided into the phases of the innovation process
(selection, development and integration), and an analysis of the specific features of General Purpose
Technologies. Following the literature review, the research methodology is presented.
The case study is presented in two chapters: the first offers a general overview of the
technology, such as how it was conceived and what its main functionalities are; in the
second, the case study is deepened and specific features of the
commercialisation strategy emerge. There follows an in-depth analysis of the mechanisms highlighted
in the description of the case and, whenever possible, a comparison with current literature theories.
The conclusions of the analysis and the eventual answer to the research question are
then summarised in the last chapter.
2. Introduction to Technology Management
2.1. Introduction
Technology is a Greek word derived from the synthesis of two words: techne (meaning art) and logos
(meaning logic or science). So loosely interpreted, technology means the art of logic or the art of
scientific discipline. Formally, it has been defined by Everett M. Rogers as "a design for instrumental
action that reduces the uncertainty in the cause-effect relationships involved in achieving a desired
outcome". That is, technology encompasses both tangible products, such as the computer, and
knowledge about processes and methods, such as the technology of mass production introduced by
Henry Ford and others. Professor Michael Porter of Harvard Business School is one of many business
analysts who believe that technology is one of the most significant forces affecting business
competition. In his book Competitive Advantage (1985), Porter noted that technology has the
potential to change the structure of existing industries and to create new industries. It is also a great
equalizer, undermining the competitive advantages of market leaders and enabling new companies to
take leadership away from existing firms. Since technology is such a vital force, the field of
technology management has emerged to address the particular ways in which companies should
approach the use of technology in business strategies and operations. Technology is inherently
difficult to manage because it is constantly changing, often in ways that cannot be predicted.
Technology management is the set of policies and practices that leverage technologies to build,
maintain, and enhance the competitive advantage of the firm on the basis of proprietary knowledge
and know-how.
This chapter presents important concepts of technology management, in particular regarding the
management of technology innovation and its relationship with the social context in which
innovation takes place. The second part of the chapter describes some recent developments related
to the innovation process, namely the design-driven innovation theory, and the role external actors play in it.
2.2. Technology and Society Relationship in the Innovation Process
Schumpeter said: “It is not enough to produce satisfactory soap, it was also necessary to induce people
to wash”. This metaphor still applies to the present day since it raises the issue of the social
construction of usage of the invention, which is the specific feature of innovation.
The concept of innovation is usually restricted to the technology or technical field. Until the 1990s,
scarcely anybody talked about social innovation except, in certain cases, to refer to the likely effect
of society on the emergence of technical innovation.
This approach to innovation is at the centre of the technological determinism theory which assumes
that a society's technology determines the development of its social structure and cultural values. The
term is believed to have originated from Thorstein Veblen (1857–1929), an American sociologist and
economist.
Technological determinism seeks to show technical developments, media, or technology as a whole,
as the key mover in history and social change. It is, '... the belief that social progress is driven by
technological innovation, which in turn follows an "inevitable" course.' (Michael L. Smith).
However, innovation of every kind is strongly characterized by a social dimension.
Scepticism about technological determinism emerged alongside increased pessimism about techno-
science in the mid-20th century, in particular around the use of nuclear energy in the production of
nuclear weapons and the problems of economic development in the third world. As a direct
consequence, desire for greater control of the course of development of technology gave rise to
disenchantment with the model of technological determinism in academia.
Modern theorists of technology and society no longer consider technological determinism to be a very
accurate view of the way in which we interact with technology. Prominent opposition to
technologically determinist thinking has emerged within work on the social construction of
technology (SCOT). SCOT research, such as that of Mackenzie and Wajcman (1997) argues that the
path of innovation and its social consequences are strongly, if not entirely shaped by society itself
through the influence of culture, politics, economic arrangements, regulatory mechanisms and the
like.
2.2.1. The Social Construction of Technology (SCOT)
The SCOT method was introduced in 1984 by Bijker and Pinch. It has grown out of the tenets of
social constructivism and sociology of scientific knowledge. Social constructivism holds that
knowledge is a social construction, not an ultimate truth. As such, knowledge can be interpreted in
different ways. Starting from these assumptions, Bijker and Pinch argue that technology does not
determine human action but rather that human action shapes technology. They also argue that the
ways a technology is used cannot be understood without understanding how that technology is
embedded in its social context.
SCOT pioneered a new way to examine the social context of technological innovation. In contrast to
the linear model of technological innovation, which imagines a mythical, linear succession of basic
science, applied science, development, and commercialization (Madhjoudi, 1997), SCOT sees a
variety of groups (called relevant social groups) competing to control a design, which at this point is
far from preordained (SCOT calls this the phase of interpretive flexibility). Each group has its own
idea of the problem that the new artefact is supposed to solve and, in consequence, favours a
distinctive technological design, including components and operational principles that may not be
favoured by competing groups. This concept is grounded in the social constructivism on which the
theory is based, which holds that people attach meanings or interpretations to artefacts and use these
meanings to direct technological development. In a process called stabilization, one social group
prevails over the others, so that group's design prevails and the others are forgotten (Pinch and Bijker,
1984), or two or more groups negotiate a compromise (Bijker, 1996).
SCOT holds that those who seek to understand the reasons for the acceptance or rejection of a
technology should look to the social world. It is not enough, according to SCOT, to explain a
technology's success by saying that it is "the best"; researchers should look at how the criterion of
being "the best" is defined and which groups and stakeholders participate in defining it.
SCOT is not only a theory, but also a methodology: it formalizes the steps and principles to follow
when one wants to analyse the causes of technological failures or successes.
Core concepts of the SCOT methodology are:
1. Interpretative flexibility: it means that each technological artefact has different meanings and
interpretations for various groups. Technology design is seen as an open process that can
produce different outcomes depending on the social circumstances of development;
2. Relevant social groups: the most basic relevant groups are the users and the producers of the
technological artefact, but most often many subgroups can be delineated (e.g. users with
different socioeconomic status, competing producers, etc.). Sometimes there are relevant
groups who are neither users, nor producers of the technology, for example, journalists,
politicians, and civil organizations. Relevant social groups are the embodiments of particular
interpretations: “all members of a certain social group share the same set of meanings,
attached to a specific artefact” (Pinch and Bijker 1987);
3. Design flexibility: as technologies have different meanings in different social groups, there are
always multiple ways of constructing technologies. A design is only a single point in the large
field of technical possibilities, reflecting the interpretations of certain relevant groups;
4. Problems and conflict: different interpretations often give rise to conflicts between criteria
that are hard to resolve technologically, or conflicts between the relevant groups. Different
groups in different societies construct different problems, leading to different designs. The
first stage of the SCOT research methodology is to reconstruct the alternative interpretations
of the technology, analyse the problems and conflicts these interpretations give rise to, and
connect them to the design features of the technological artefacts. The relations between
groups, problems, and designs can be visualized in diagrams.
5. Closure and stabilization: after a period of time, as technologies are developed, the
interpretative and design flexibility collapse through closure mechanisms. Examples of
closure mechanisms are:
a. Rhetorical closure: when social groups see the problem as being solved, the need for
alternative designs diminishes. This is often the result of advertising;
b. Redefinition of the problem: a design standing in the focus of conflicts can be
stabilized by inventing a new problem, which is solved by this very design.
Closure is not permanent. New social groups may form and reintroduce interpretative
cycles. In the 1890s automobiles were seen as the "green" alternative, a cleaner
environmentally-friendly technology, to horse-powered vehicles but by the 1960s, new
social groups had introduced new interpretations about the environmental effects of the
automobile, eliciting the opposite conclusion.
Figure 2.1 – SCOT Framework (Pinch, Bijker, 1987)
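The group–problem–design relations mentioned in point 4 above, which SCOT visualizes in diagrams, can be thought of as a small graph connecting social groups to the problems they perceive and the designs that address them. The sketch below is only an informal illustration of that structure; SCOT itself prescribes no data representation, and the bicycle entities are simplified examples inspired by Pinch and Bijker's well-known case.

```python
# Minimal sketch of the group–problem–design diagram described in point 4.
# SCOT prescribes no particular data structure; this is one possible
# illustration, with simplified example entities.

from collections import defaultdict

class ScotDiagram:
    """Connect relevant social groups to the problems they perceive
    and the designs that address those problems."""

    def __init__(self):
        self.problems_of = defaultdict(set)   # group   -> problems
        self.designs_for = defaultdict(set)   # problem -> designs

    def link(self, group, problem, design):
        self.problems_of[group].add(problem)
        self.designs_for[problem].add(design)

    def designs_relevant_to(self, group):
        """All designs reachable from a group through its problems."""
        return {d for p in self.problems_of[group] for d in self.designs_for[p]}

diagram = ScotDiagram()
diagram.link("ordinary cyclists", "unstable ride", "low-wheel safety bicycle")
diagram.link("sport riders", "low speed", "high-wheel bicycle")
print(diagram.designs_relevant_to("ordinary cyclists"))
# {'low-wheel safety bicycle'}
```

Reading the diagram this way makes the interpretive flexibility concrete: each group reaches a different set of designs through its own problems.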
During the development of the literature on the relationship between technological innovation
and society, the SCOT methodology has been considered incomplete and overly narrow by many
researchers (Russell, 1986; Winner, 1993). The criticisms are built around the methodology's
structure as elaborated by Bijker (1995). It consists of two main parts, or methodological rules:
1. To identify the set of relevant social groups, one should “roll a snowball”: the researcher
interviews a few actors at the start, asking them to identify relevant groups, and in this way
eventually builds up the set of all groups. This snowball method is no guarantee of accuracy
or comprehensiveness and may introduce its own distortions. Another problem is that some
relevant social groups may be excluded from participation and their absence may go
unnoticed;
2. Researchers must “follow the actors” in order to learn with them how they establish new
associations (Latour, 1987). Central to this technique is the idea that the only categories and
lines of social demarcation of importance are those consciously recognized by the actors
(Bijker 1995). It focuses on how the immediate needs, interests, problems and solutions of
chosen social groups influence technological choice, but disregards any possible deeper
cultural, intellectual or economic origins of social choices concerning technology.
Despite these criticisms, the SCOT model marks a break with the past conception of science and
technology development. For the first time, the technology trajectory was not seen as an exogenous
factor depending only on its own nature but, instead, as a social construct depending on many social
factors and relevant social groups. The idea that the social shapes science had been completely
unexplored until then, and recognising the role of society in determining the success or failure of an
innovation brought a new and important perspective.
2.2.2. Socio-Technical Systems (STS)
Socio-Technical Systems (STS) theory is rooted in principles that have their origin in action research
field projects undertaken by the Tavistock Institute of Human Relations in British coal mines during
the post-war reconstruction of industry (Trist, 1950). The general aim was to investigate the organisation
of work and see whether it could be made more humanistic. In other words, the intention was to
move away from the mechanistic view of work encompassed by Taylor's (1911) principles of
scientific management, which largely relied on the specialisation of work and the division of labour.
At the time the study took place, mine productivity was failing to increase despite major
investments in increased mechanization, while labour turnover and absenteeism were rising rapidly.
In the course of this research, Tavistock social scientists discovered work practice and organization
innovations made by local coal mine management and workers who had evolved a way of working
at a high level of mechanization, which recovered the work group cohesion and self-regulation that
had existed in the pre-mechanized era (Trist et al., 1963).
Separate approaches to the social and technical dimensions of an organization were not seen as
sufficient to determine innovation.
The term “socio-technical systems” was originally coined by Emery and Trist (1960) to describe
systems that involve a complex interaction between humans, machines and the environmental aspects
of the work system. In fact, the theory has at its core the idea that the design and performance of any
organisational system can only be understood and improved if both ‘social’ and ‘technical’ aspects,
the substantive factors, are brought together and treated as interdependent parts of a complex system.
The economic performance and human outcomes depend upon the “goodness of fit” between these
factors within a work organization as an open “socio-technical system” (Emery, 1959; 1972).
Nowadays, this interaction among the three factors mentioned above characterises most systems in the workplace. The consequence of this definition is that all of these factors (people, machines and context) need to be considered together when developing such systems using socio-technical system design (STSD) methods. In fact, organisational change programmes often fail because they are too focused on one aspect of the system, commonly the technology, and fail to analyse and understand the complex interdependencies that exist.
Even though early work in STSD focused mostly on manufacturing and production industries such as coal, textiles, and petrochemicals (Mumford, 2006), the methodology has also proven relevant to the challenges of the 21st century, in which all businesses operate within powerful economic climates that greatly affect the way they work and carry strong cultures, developed over the years, that are difficult to change.
2.2.2.1. Innovation Systems: Rules and Dynamics
The concept of the innovation system stresses that the flow of technology and information among
people, enterprises, and institutions is key to an innovative process. It contains the interactions
between the actors needed in order to turn an idea into a process, product, or service on the market.
A technological innovation system can be defined as ‘a dynamic network of agents interacting in a
specific economic/industrial area under a particular institutional infrastructure and involved in the
generation, diffusion, and utilization of technology’ (Carlsson and Stankiewicz).
From the socio-technical systems perspective, such systems are, on the one hand, maintained and changed by the activities of actors and, on the other, form a context for those actions. It is possible to interpret these actions as moves in a game whose rules change somewhat while the game is being played: the different social groups have their own perceptions, preferences, aims, strategies, resources, etc. Actors within these groups act to achieve their aims, increase their resource positions, and so on. Their actions and interactions can be seen as an ongoing game in which they react to each other. These
actions maintain or change aspects of socio-technical systems. The dynamic is game-like because
actors react to each other’s moves. These games may be within groups, e.g. firms who play strategic
games between each other to gain competitive advantage. There may also be games between groups,
e.g. between an industry and public authorities. The ongoing games within and between groups lead
to changes in socio-technical systems, because the moves actors make have effects. Moves may lead
to improvements of existing technologies or introduction of new technologies. The consequence of
these multiple games is that elements of socio-technical systems co-evolve. There is not just one kind
of dynamic in socio-technical systems, but multiple dynamics which interact with each other.
One of the key tenets of STSD is its focus on participatory methods, in which end users are involved throughout the design process, and on ethnographic approaches to design.
In understanding how transitions from one system to another happen, it is useful to adopt the multi-
level perspective (MLP), developed by Rip and Kemp (1998) and theoretically elaborated by Geels
and others. The multi-level perspective was originally developed to understand transitions and regime
shifts. The basic ontology behind the multi-level perspective stems from the sociology of technology
where three interrelated dimensions are important:
1. socio-technical systems, the tangible elements needed to fulfil societal functions;
2. social groups who maintain and refine the elements of socio-technical systems;
3. rules (understood as regimes) that guide and orient activities of social groups.
Stability of existing socio-technical systems is based on the relationship and the interactions among
these three dimensions:
1. Socio-technical systems, in particular the artefacts and material networks, have a certain
‘hardness’, which makes them difficult to change. Once certain material structures or technical
systems are in place, they are not easily abandoned, and they almost acquire a logic of their own (Walker,
2000). Complementarities between components and sub-systems are an important source of
inertia in complex technologies and systems (Rycroft and Kash, 2002; Arthur, 1988). These
components and sub-systems depend on each other for their functioning. This system
interdependence is a powerful obstacle for the emergence and incorporation of radical
innovations;
2. Rules and regimes provide stability, as they tend to be reproduced. As long as actors (e.g.
firms) expect that certain problems can be solved within the existing regime, they will not
invest in radical innovations and will continue along existing paths and ‘technological trajectories’
(Dosi, 1982). As long as firms think that they meet user preferences well, they will continue
to produce similar products (Christensen, 1997);
3. Actors and organisations are embedded in interdependent networks and mutual dependencies
which contribute to stability. In organisation studies, it has been found that organisations (e.g.
firms) are resistant to major changes, because they develop “webs of interdependent
relationships with buyers, suppliers, and financial backers and patterns of culture, norms and
ideology” (Tushman and Romanelli, 1985).
Because of path dependence and stability, it is difficult to create radical innovations within socio-
technical systems. The multilevel perspective explains transitions from one system to another
distinguishing three conceptual levels: a “niche” level in which radical novelties emerge, a “regime”
level that refers to cognitive rules shared in social networks related to the existing system, and a
“landscape” level that refers to exogenous developments. The main point is that transitions come
about through alignments between processes at these different levels.
To understand transitions from one system to another the notions of tensions and mis-alignment are
useful. The different regimes have internal dynamics, which generate fluctuations and variations.
Figure 2.2 – Socio-Technical Systems’ Dimensions (Geels, Kemp, 2007)
When the activities of different social groups and the resulting trajectories go in different directions,
this leads to ‘mis-alignment’ and instability of socio-technical systems.
The tensions and mis-matches of activities are mirrored in socio-technical regimes, in the form of
tensions or mis-matches between certain rules, creating more space for interpretative flexibility for
actors. As long as socio-technical regimes are stable and aligned, radical novelties have few chances
and remain stuck in particular niches. If tensions and mis-matches occur, however, in the activities of
social groups and in ST-regimes, this creates ‘windows of opportunity’ for the breakthrough of radical
novelties.
The perspective enables a systematic distinction between three kinds of change processes:
reproduction, transformation and transition:
1. Reproduction: in this change process, there are only dynamics at the regime level, not at the
landscape and niche level. The existing socio-technical system and regime form a stable
context for action of social groups. There is an incremental and cumulative change along
trajectories. This is the normal situation at the regime level. In reproduction, there is no
feedback from outcomes to regime or environment. Rules of the existing regime are
reproduced precisely, and innovation is absent. This process is hardly relevant for
technological change. In cumulative innovation, outcomes are fed back to the system of
interaction, leading to minor rules changes in the regime and incremental innovation;
2. Transformation: in this change process, there are interacting dynamics at the regime and
landscape level, but little influence from niches. The basic mechanism is that changes at the
landscape level create pressure on the regime, leading to a re-orientation of the direction of
innovative activities. The adjustment and re-orientation to external landscape pressure happen
through negotiations, power struggles and shifting coalitions of actors. Because incumbent
regime actors initially tend to downplay the need for transformation, the role of outsiders can
be fundamental. Transformation means that the new grows out of the old. The existing
technical system is not replaced but changed from within through cumulative adjustments in
new directions. To further conceptualize this transformation process, Van de Poel’s (2000)
sociological framework is used. This framework comprises three elements:
a) the system of interaction, which refers to the social network and the rules that guide
activities of regime insiders;
b) outcomes produced by regime actors, for example, technical artefacts and environmental
effects;
c) the environment, which is made up of outsiders, that is, actors excluded from the regime.
Different feedbacks between these elements result in three types of change processes:
reproduction, cumulative innovation, and transformation.
In transformation, outcomes are fed back to the system of interaction and to outsiders. When
outsiders are concerned about negative outcomes, they voice criticism. This feedback from
outsiders may lead to substantial changes in regime rules and a reorientation of the innovation
trajectory, with major changes emerging as the outcome of alignments between processes at
the three different levels (niche, regime, landscape).
3. Transition: a transition refers to a shift from one socio-technical system to another. It is not
about the re-orientation of an existing trajectory, but about a shift to a new trajectory. In a
transition process, there are interactions between dynamics at the landscape, regime and niche
levels. Landscape developments create pressure on the regime, leading to major problems.
Regime actors react with adjustments in the system, but they are not able to solve the
problems. This creates a window of opportunity for new innovations, developed in niches and
carried by a new network of social groups. Transitions are understood as changes from one
socio-technical system to another, involving co-evolution of technology and society.
Literature on strategic niche management (Schot, Hoogma, Kemp, 2008) distinguishes three
important niche-internal processes:
1. Learning and co-construction processes are important on several dimensions, they help to
create a working configuration;
2. Building of social networks and constituencies that support the new innovation and invest in
its further development;
3. Articulation of visions and expectations to provide an orientation towards the future and give
direction to learning processes.
It is often noted in case studies (e.g. the transition from surface water to piped water in the Netherlands,
by F. Geels, 2008), that the main difficulty is not the development of the technology but its
introduction in a large technical system.
Very often, due to the interdependency between the evolution of technology and society, the transition
from one socio-technical system to another does not happen through a rational, goal-oriented process
following three phases (problem articulation, search for solutions, implementation of a solution), but
rather through a non-linear process that depends on external developments and on changing linkages
between problems and solutions.
Some scholars in sociology of technology and evolutionary economics have highlighted the
importance of niches as the locus of radical innovations. As the performance of radical novelties is
initially low, they emerge in ‘protected spaces’ to shield them from mainstream market selection (e.g.
subsidies from government). Niches act as ‘incubation rooms’ for radical novelties.
In niches, not all rules have yet crystallised. There may be substantial uncertainty about the best
design heuristics, user preferences, behavioural patterns, public policies, etc.
As the rules are less clear, there is less structuration of activities, there is more space to go in different
directions and try out variety. The work in niches is often geared to the problems of existing regimes.
Radical novelties may have a ‘mis-match’ with the existing regime (Freeman and Perez, 1988) and
do not easily break through. Nevertheless, niches are crucial for system innovations, because they
provide the seeds for change.
2.2.2.2. Co-Evolution of Technology and Society
Evolutionary economics, business studies and innovation studies tend to focus mainly on the
production-side and the creation of knowledge and innovation (e.g. learning within firms,
organisational routines, knowledge management), while the user side has received less attention.
Recently, there has been somewhat more attention in innovation studies for the co-evolution of
technologies and markets (Green, 1992; Coombs et al., 2001).
Studying the co-evolution is useful in understanding innovations at broader aggregation levels and
longer time-scales.
Supporters of co-evolution argue that users have to integrate new technologies into their practices,
organisations and routines, which involves learning and adjustment. New technologies have
to be ‘tamed’ to fit into concrete routines and application contexts (including existing artefacts). Such
domestication involves symbolic work, practical work in which users integrate the artefact into their
practices, and cognitive work, which includes learning about the artefact (Lie and Sørensen,
1996). Domestication studies open up the ‘black box’ of adoption. Adoption is not a passive act, but
requires adaptations and innovations in the user context.
The advantage of looking explicitly at socio-technical systems is that the co-evolution of technology
and society, of form and function becomes the focus of attention. Dynamics in socio-technical
systems involve a dynamic process of mutual adaptations and feedbacks between technology and user
environment.
Mutual adaptation of technology and organisation.
New production technologies are widely recognised as competitive weapons. However, technology
implementation and transfer are as challenging a managerial problem as invention itself. The initial
implementation stage, that is, the period during which the technology is first removed from its
laboratory setting and introduced into the user environment, is especially critical. Success requires
continuous, ongoing dedication to the process of change and the conscious management of mutual
adaptation, because the technology will never exactly fit the user environment: there is always a need
for a carefully managed, experimental introduction into the user environment with the intent to learn.
Scholars of innovation diffusion have noted the phenomenon of “re-invention”: the alteration of the original innovation as users change it to suit their needs or use it in ways unforeseen by developers.
Whereas Evan noted the general tendency for organizational adjustment to lag behind technological
change, Ettlie found that better performing organizations synchronize the adaptation of administrative
policies with the introduction of the technology.
Poor fit of the new technology with the user environment takes the form of misalignments between
the technology and (a) technical requirements, (b) the system through which the technology is
delivered to users, or (c) the user organization's performance criteria. These misalignments must be
addressed if the implementation is to succeed, and they can be corrected by altering the technology,
changing the environment, or both. These alterations (termed “cycles of
adaptation”) vary in magnitude, both for the technology and the user environment, and elicit different
levels of effort and resources.
The adaptation process is one of circling back to revisit a decision point: reopening issues of technical
design that the developers assumed were resolved, redesigning delivery systems in the user
environment or unfreezing organizational routine to re-examine the goals implied by current
performance criteria. These adaptive cycles vary in magnitude, depending upon how fundamental
the change to be made is. In the case of technology adaptation, a large cycle would mean that the
developers return to the drawing board, whereas a small cycle would entail a shift very low in the
“design hierarchy”, that is, a minor adaptation.
Developers should attempt to understand and simulate the user environment. The better the original
definition process, the less disruptive and costly the adaptation cycles (Leonard-Barton, 1988).
Cycles of adaptation:
- Large cycles: can be pathologies or opportunities. The expense of having a technology totally
rejected after reaching the point of introduction into user operations, necessitating a return to
the drawing boards, is one few organizations care to bear. The prospect of altering
performance criteria from top to bottom in an organisation, in order to exploit a new
technology is equally daunting. However, research on survival in highly competitive industries
suggests that the surviving companies are those that are open to advances in process
technology, even if the price of that openness is expensive technical experimentation and
costly organisational shifts;
- Small cycles: these too can be viewed as problems or opportunities. When technology transfer
is regarded as a sort of problem-solving process involving both the technology and the
user in mutual collaboration, then the process is one of negotiating toward mutual benefit.
Negotiation, researchers have found, is more likely to succeed if reasonable goals are set and
if the negotiators frame the issue in terms of expected gains instead of expected costs.
Figure 2.3 – Cycle of Adaptation (Leonard-Barton, 1988)
The range of managerial options for achieving successful technology transfer includes changes in the
user environment as well as in the technology itself and frequently the same misalignment can be
addressed either way. A major proposition implied by this framework is that change in both
technology and user environment is more beneficial than holding one constant and changing the other.
Thus, the success of technology transfer depends also on the degree to which both developers and
users want to make the transfer succeed. The will to make it succeed is more likely to be present if
both sides of the transfer start with the premise that they are co-creating change that will benefit both
sides.
As Van de Ven notes, “Innovations not only adapt to existing organizational and industrial
arrangements, but they also transform the structure and practice of these environments”.
2.3. Design-Driven Innovation: a Different Perspective
Literature on innovation processes and dynamics has evolved over time, integrating into the initial
perspective, which focused solely on technology, the social dimension and the influence of users on
the development and success of an innovation.
During the last decades of the twentieth century, scholars and practitioners in innovation studies
identified two different perspectives: the role of technology in developing innovations, and the
understanding of the customer needs that determine the goodness and fit of an innovation in the
market. Summarizing the literature since the 1980s, two different approaches were defined: the
first pointing to market forces as the main determinant of technical change (the so-called market-pull)
and the second defining technology as an autonomous or quasi-autonomous factor, at least in
the short run (technology-push) (Dosi, 1982). After years of debate, by the end of the 1970s researchers
had come to the conclusion that both were important for innovation and the development of
technologies (Mowery and Rosenberg, 1979). The debate gave rise to sociological and economic
approaches in which technological development was conceived as an interaction process
between societal, economic, political and technological factors.
Recent studies in the innovation field have brought a third focal dimension to the traditional innovation
literature: design. This new dimension stems from the intuition that companies sometimes fail
to fully exploit the opportunities provided by emerging technologies because they see them
myopically, only as substitutes for previous technologies, without exploring the entire range of
applications the technology might enable. A quote by G. Proni perfectly expresses this concept:
“Technologies offer opportunities which are of course not infinite, but are greater in number
than those imagined by early developers”. Back in the 1980s, marketing ‘guru’ Philip Kotler
already claimed that design's importance for a company's competitiveness was evident, and
nowadays design is increasingly considered an important strategic resource for companies that
want to innovate. New theories and managerial models that help companies integrate the
design perspective into their strategy have recently made their appearance in the innovation research
landscape.
One of the biggest contributions to this literature stream lies in the design-driven innovation theory
(Verganti, 2008), which connected the dots among the different research streams on the role of design
in the innovation process and suggested to companies a way to create a sustainable advantage through
innovation.
An important contribution to the birth of design-driven innovation can be found in the design
management literature. It is difficult to trace the history of design management precisely. Even though
the expression ‘design management’ was first mentioned in the literature in 1964, the theory has
evolved over the following decades, progressively assuming growing importance among managerial
practices.
Design management is nowadays intended as a business discipline that uses project management,
design, strategy, and supply chain techniques to control a creative process, support a culture of
creativity, and build a structure and organization for design. The objective of design management is
to develop and maintain an efficient business environment in which an organization can achieve its
strategic and mission goals (e.g. innovate) through design. In 2003, the UK Department of Trade &
Industry published the report ‘Competing in the Global Economy: The Innovation Challenge’. This
report lists three important conclusions regarding the influence of design on innovation (Borja de
Mozota, 2003):
1. Research shows that design skills are vital to innovation and can significantly enhance a
company’s financial performance;
2. Unfortunately, not enough businesses use design to connect new ideas with market
opportunities, and lack of design ingenuity usually indicates static or poor overall business
performance;
3. The most successful and imaginative companies use design to enable innovation.
Another important step in the creation of the design-driven innovation theory has been Verganti's
in-depth study of some of the most successful Italian design companies, such as Alessi, Artemide
and Kartell.
The study highlighted the unique approach with which successful Italian manufacturers involve
designers in their innovation process. As a consequence of the success of this approach, even in
industries historically far from design issues, designers are moving from their traditional roles in the
development process, in which they chiefly address issues of styling and ergonomics, to a more
creative contribution in generating new product concepts. Their input ranges from product and
process engineering to field support in understanding customer needs, and from brand design to
strategic consulting. As most designers know, the appearance of a product is just one of several ways
in which it expresses a message to the user.
Apart from styling, what matters to the user (in addition to the product’s actual functionality) is
the product’s emotional and symbolic value: its meaning. If functionality aims at satisfying the
operative needs of the customer, the product’s meaning tickles her or his affective and socio-
cultural needs.
There are two ways that companies can innovate: radically or incrementally. The major difference
captured by the labels radical and incremental is the degree of novel technological process content
embodied in the innovation and hence, the degree of new knowledge embedded in the innovation.
The distinction between radical and incremental innovations, then, is not one of hard and fast
categories. Instead, there is a continuum of innovations that range from radical to incremental (Hage
1980).
Most innovations simply build on what is already there, requiring modifications to existing functions
and practices. This kind of innovation is called incremental innovation. It implies minor
improvements or simple adjustments in current technology (Munson and Pelz 1979).
The advantages of the incremental innovation process are threefold:
1. Staying competitive: every next-generation product needs to compete; it's a must. Products
need to evolve so that competition with the previous generation can roll on.
2. Ideas are easier to sell: you are offering a recognisable product to an existing market, which
makes it much easier to communicate and sell your big idea.
3. Affordability: the incremental process allows for affordable development. Products can be
made better without breaking the bank.
Incremental innovation is well served in both literature and practice: Human-Centred Design theory
and its many variants serve this process well, as do any number of market- and technology-driven
processes.
Other types of innovation, instead, change the
entire order of things, making the old ways obsolete
(Van de Ven et al., 1999). These forms of
innovation are called radical innovations and cause
a technological discontinuity that can enable the
development of completely different applications.
Radical innovations encompass higher order
innovations that serve to create new industries,
products, or markets (Herbig, 1994; Meyer, Brooks,
Goes, 1990). They comprise technological
advances so significant that no increase in scale,
efficiency, or design can make older technologies
competitive (Tushman & Anderson, 1986). They
make obsolete the old, and permit entire industries and markets to emerge, transform, or disappear
(Kaplan, 1999). Scholars have studied in depth technology cycles, defining technological
discontinuities as the trigger period of technological and competitive ferment that can change the
innovation path proposing a new dominant design (Tushman & Anderson, 1990). Radical innovations
are often characterized as disruptive or competence destroying, or as breakthrough, with all these
labels sharing the same concept: radical innovation implies a discontinuity with the past, a clear
departure from existing practice (Duchesneau, Cohn, and Dutton 1979; Ettlie 1983).
Figure 2.4 – Radical and Incremental Innovations
(Orcik, 2013)
What’s the reward of radical innovation?
1. Bigger wins: the chance of getting a ‘bigger win’ is one of the main advantages of radical
innovation.
2. Ownability: with an entirely innovative idea comes the chance to create a whole new brand
and market, a market so untapped that a single design could gain a monopoly.
3. More open to new players: the radical model suits new players far better, as they have no
incumbent history which can restrain the breadth of their innovative design. They have a
blank, limitless canvas.
Because of the above-mentioned characteristics, much of the writing on innovation in the design and
management communities focuses on radical innovation.
Leaders of established companies acknowledge that radical innovation is critical to their long-term
growth and renewal. The general understanding to be gained from these works is that becoming lean
and mean can make you competitive, and incremental innovation can keep you competitive with
current product platforms. But only radical innovation can change the game. Unfortunately,
recognizing the importance of radical innovations and successfully developing and commercializing
them are two different things. Incremental innovation is not a great problem for established
companies; however, managers' attention to incremental innovation has come at a price. It
diminished the focus and capacity of America's largest companies to engage in truly breakthrough
innovation. The negative consequences of too much attention to incremental innovation have been
recognized by many business scholars. James Utterback and Clayton Christensen, among others, have
noted how firms that dominate one generation of technology often fail to maintain leadership in the
next. Either through hubris or a lack of inspiration or capability, industry leaders continue investing
in the technologies that made them successful, even when more effective technologies, "disruptive
technologies," as Christensen calls them, appear on the horizon. Of course, not every organization
feels compelled to pursue new markets and customers through radical innovation. Doing so, some
will say, is risky and cannot be relied on to produce results. They are right on both counts. Attempts
at radical innovation produce more failures than successes, and the magnitude and timing of results
are highly unpredictable. Though thoughtful executives recognize the importance of radical
innovation, few are familiar with the process through which it emerges.
Dahlin and Behrens suggest three criteria for identifying an innovation as radical:
1. The invention must be novel: it needs to be dissimilar from prior inventions;
2. The invention must be unique: it needs to be dissimilar from current inventions;
3. The invention must be adopted: it needs to influence the content of future inventions, this
happens only if the sociological, market and cultural forces are in alignment, otherwise, even
a brilliant idea could fail.
Considering how fundamental it is for companies to innovate, and how desirable and powerful
radical innovations are compared to the much more achievable incremental innovations, research has
explored how radical innovations are produced. The literature traditionally puts technology at the
core of radical innovation, treating a huge, disruptive technological shift as the only possible route
to a radical innovation.
However, Verganti demonstrated that radical innovation could also come about through changes in
meaning. He introduced a new way to look at innovation through two dimensions: technology and
meaning change.
Figure 2.5 – Innovation Typologies (Henderson, Clark, 1990)
Figure 2.6 – Design-Driven Innovation (Verganti, 2009)
Design-driven innovation is based on the idea that each product holds a particular meaning for
consumers, and Verganti rooted his investigation in the definition of design given by Krippendorff
and Heskett: “The etymology of design goes back to the Latin de-signare and means making something,
distinguishing it by a sign, giving it significance, designating its relation to other things, owners,
users, or gods. Based on this original meaning, one could say: Design is making sense (of things)”.
Usually, theories of the management of innovation assume that design becomes relevant in the mature
stages of industries (if ever). However, recent evidence shows that the radical innovation of product
meanings is a key factor in the beginning stages of an industry’s development, when technology is
still fluid. By acting only on the semantic dimension, it is possible to introduce incremental design-
driven innovations, while companies that interpret technology as an enabler of new product meanings
mix research activities related to new technologies with studies of emerging lifestyles and societal
values in order to introduce radical design-driven innovations.
More specifically, these companies combine the identification of innovative meanings with research
on new materials, surface treatments, engineering processes, etc., that can be embedded into new
products. Like technological research, design research involves the exploration of new languages
embedded in artefacts; consequently, it implies playing with new technologies and new materials.
The process of radically innovating in meanings begins by researching sociocultural phenomena to
go beyond dominant interpretations (Verganti, 2009) or to challenge the status quo. Analogously, new
technological paradigms result from the search and selection of new directions of technical change
(Dosi, 1985). Verganti has defined these radical design-driven innovations Technology Epiphanies,
meaning that each technology embeds a set of disruptive new meanings that are waiting to be
uncovered. Only by revealing those quiescent meanings can a company seize the technology's full
value.

Figure 2.7 – Technology Epiphany (Verganti, 2009)
Technology epiphanies highlight that the discovery of other potential applications, enabled by
technological discontinuities, requires envisioning new meanings based on completely new
performances, therefore, incomparable to the current applications.
Case study – Swatch
The technology of quartz movements for watches was introduced in the late ‘70s. When quartz
movements for watches were invented, Japanese pioneering firms substituted them for the old
mechanical movements, but it was Swatch that eventually led the competition by realizing that
cheap movements made it possible to redefine the meaning of watches: not timekeeping instruments,
but fashion accessories that could be owned in multiple units. It is interesting to note that
customers were not asking for fashion watches, yet within ten years of the launch the Swatch
Group became the world’s leading manufacturer of watches.
From a methodological point of view, to effectively identify the less obvious meanings that a new
technology can support, managers should combine research activities about product technologies with
socio-cultural analysis and lifestyle scanning.
Literature confers a major role to design in unleashing the full technology potential (technology
epiphany), in particular when technology discontinuities arise. Even though a technology
discontinuity embeds many potential meanings, short-sighted companies frequently approach
innovation with two myopic behaviours: searching for new markets for the technology, or
adopting the new technology as a mere substitute. As a consequence, it often takes years or
decades before a company eventually discovers the quiescent meanings embedded in the
technology. Radical innovation driven by meaning change can also be design-driven, through a
better understanding of potential patterns of meanings. This understanding can emerge through
research and observations rooted in broader socio-cultural changes, i.e. an understanding of how
society and culture are changing. The search for new, breakthrough meanings must avoid becoming
trapped by the prevalence of existing products and uses.
Design-driven companies transfer applications across industries rather than introduce new-to-the-
world technologies. Research and Development departments aim to discover existing technologies
adopted in other industries; in fact, new meanings are often found outside the company’s typical
ecosystem. To foster this contamination process, scientists work alongside designers, who can
support companies in capturing emerging trends in society and in integrating them with studies
of technologies that allow products to embed appropriate languages and, consequently, to convey
coherent meanings.
Even though literature focuses on the importance of radical innovation, it seldom proposes successful
methods to radically innovate, although much has been made of various means for inducing creativity.
Radical innovation is driven by two major possibilities:
1. the development of a new enabling technology;
2. a change in the meaning of the object.
Note that, by definition, a technology is not enabling until it has reached the point at which it is
available in a reliable, economical form. The technological path to radical innovation is reasonably
well understood, even if most such innovations fail at first introduction. Moreover, radical innovation
driven by technology often results from the explorations and dreams of inventors, engineers, and
others who have an inner vision, often driven by self-observation, of what might be possible.
They are not driven by formal studies or analyses.
Of these two possibilities, the second is surely the more promising, as meaning has not been
well studied as an approach to innovation. Research on this issue is still in its embryonic phase,
and design research has far more potential in the space of meaning.
It is possible to reframe the discussions of product innovation in the world of design and management
through three different conceptual tools:
1. Examine the topology of product space, envisioning each product opportunity as a hill.
Human Centered Design methods are well suited for continuous incremental improvements,
but incapable of radical innovation, which comes about only through meaning
or technology change;
2. Consider the two dimensions of meaning change and technology change and to examine how
products move through the resulting space;
3. Innovation might be viewed as lying in the space formed by the two dimensions of research:
the first one aimed at enhancing general knowledge and the second one aimed at applying
research to practice.
There are three possible paths to discover technology’s quiescent meaning:
1. Technology screening: when a firm tries to envision new possibilities derived from
technological changes and uses these breakthrough technologies to challenge the status quo
and propose new challenging paradigms;
2. Technology development: when a firm believes in the potential of a new paradigm of what a
product means and symbolizes for users, and works to bring the new meaning to reality by
heavily investing in R&D to bridge the knowledge gap. In these cases, technology is seen as
the enabler for the realisation of the new meaning;
3. Technology integration: when a firm adopts an emerging technology from a completely
unrelated industry and the technology itself becomes the enabler of a change of meaning.
Figure 2.8 – Different Paths Can Unveil Quiescent Meanings (adapted from Verganti, 2015)
2.4. Interpreting and Envisioning New Meanings
Design-driven innovation is built around the concept of a product’s “meaning”, intended
as the way the user perceives it. Each user has a different perception of a given product or service;
for this reason, literature stresses the importance of shifting the attention from the “what” (product or
service features) or the “how” (product or service interface) to the “why” (the purpose for which it is
used), in order to identify a technology epiphany.
As said above, whereas literature on the management of innovation has deeply explored the antecedents
of radical change in technologies, we still lack a deep investigation of the dynamics of radical change
in meaning. One cause of this gap is that innovation of meaning is peculiar in nature: it involves
symbolic, emotional and intangible factors, which hardly fit within the realm of existing theoretical
paradigms of innovation management.
Besides design management and studies of Italian design companies, another important contribution
to the development of the design-driven innovation theory comes from hermeneutics.
Hermeneutics is derived from the Greek word ἑρμηνεύω (hermeneuō) "translate, interpret".
In sociology, hermeneutics is the interpretation and understanding of social events through the
analysis of their meanings for the human participants in those events. Unlike classic innovation
theories, where innovation tends to be considered either as a process of problem solving or as a
process of ideation, hermeneutics provides a framework for looking at innovation as a process of
interpreting (developing meaningful scenarios rather than finding an optimal solution) and
envisioning (imagining experiences that are not yet asked for, rather than answering existing
needs). Interpretation is a necessary ingredient when dealing with meanings, which, by definition,
are the result of an interpretative process. Moreover, interpretation does not merely follow a linear
process in which opportunities and ideas are assessed in the light of the existing context; it is the
result of continuous interaction among different and evolving actors. The solution comes as a natural
consequence once a new interpretative paradigm is generated. Envisioning is needed to generate a
radical innovation of meaning. For these reasons, the theoretical lens of hermeneutics is a valuable
approach to investigating the radical innovation of product meanings. This process is also known as
generative interpretation.
The hermeneutics framework can be therefore summarized around three central concepts:
1. Interpretation and reflection;
2. Embracing new perspectives in the process of interpretation;
3. The role of the interpreter.
One of the biggest differences between technologies and meanings is that meanings are significantly
context-dependent. What is meaningful for users depends on the socio-cultural context in which a
product is used, something that may vary considerably over time and space.
In particular, radical innovations of meaning are outlandish: they are considerably different from the
dominant meaning in an industry, and sometimes they even challenge it. Incumbents can hardly
recognize the value of these outlandish meanings unless they question their own dominant
assumptions. A radical change in meaning is often coupled with a redefinition of the socio-cultural
paradigm in the market: a redefinition of the accepted interpretations of what a product is and what
it is meant for (Geels, 2004). A radical innovation of meaning is not an improvement of something
already existing, but something that does not yet exist and needs to be created. It is a vision that does
not become real until some agent proposes it to the market and users give meaning to it. Because of
its marked social implications, a radical change of meaning is often co-generated, which means that
meanings cannot be defined by businesses alone, but are given by users immersed in a socio-cultural
context.
The exploration of radically new meanings is a process of generative interpretation that leads to the
co-generation of new meanings and involves many different actors; in fact, interpretations of the
meaning of a product occur through continuous interactions among firms, designers, users, and
several stakeholders, both inside and outside a corporation. It implies developing arguments rather
than finding optimal solutions. Innovation of meaning is, in other words, a process of generative
interpretation through debates.
This perspective therefore brings into the spotlight a major factor: the role of networks
(especially of external players). Knowing the importance of the social dimension in shaping new
meanings, hermeneutics focuses on the dynamics through which new meanings arise. A central role
is given to the interpreter. A main concept within hermeneutics is that the parts of an action or
situation can only be understood if placed in a context; and vice versa, the context can only be
understood if one understands the parts. This duality is represented by the “reflective circle”,
consisting of an understanding of both the details of a situation and the overall picture. Reflection
implies moving iteratively between the two (Ricoeur, 1984). It includes an active search for a
diversity of interpretations, stemming both from the interpreter herself and from the external
world. Ricoeur wants companies to actively bring in new channels of information and take different
perspectives. He calls for a continuous “detour”, to lose oneself in an action of “distancing” from the
problem and to “rediscover oneself as another by multiple appropriations” (Kristensson Uggla, 2011).
Hermeneutics, therefore, assumes that there is no definite solution, but rather a temporary
understanding, continuously evolving and enriching, because, unlike technologies, product
meanings can hardly be optimized.
The radical innovation of meanings stems from the understanding of four characteristics:
1. Meanings are context dependent;
2. Meanings cannot be optimized;
3. Radical meanings are outlandish compared to what currently makes sense;
4. Radical change of meaning is co-generated.
The first two characteristics come from the nature of “meanings”, and innovation as a process of
interpreting; the third and fourth characteristics are a consequence of focusing on “radical” innovation
of meanings, and therefore on innovation as a process of envisioning new possibilities.
Case Study – Philips, Ambient Experience for Healthcare
The AEH project radically changed the meaning of medical imaging systems, like CT scanners.
Normal scanners required a relatively long exposure, which meant the patient had to remain still. The
technology-driven solution was to increase the power of the equipment, although at the cost of a
higher radiation dosage. Using a design-driven solution, however, Philips was able to change the
meaning of the experience from stressful, noisy and threatening to pleasant and relaxing, simply by
modifying the hospital environment. This radical reinterpretation was not the result of fast, user-led
creative processes, but rather of years of design research involving experts (interpreters) from
far-flung fields who helped the Philips design team interpret user needs and behaviours from new
perspectives.
As already said, every novel technology embeds the potential for a variety of applications. Some are
more immediate and obvious, as they answer existing market needs, such as the substitution of an
old technology to improve a product's performance. Other, more profitable applications are not
visible at first sight and are detected only by firms that are willing to question existing market needs
and redefine, through the new technology, what a product could mean for people. When a firm unveils
the most powerful quiescent meaning of a technology, it celebrates a technology epiphany and becomes
the market leader. To occur, technology epiphanies require the adaptation of the business model and
the development process to the new environment.

Figure 2.9 – Philips AEH
The success in implementing a technology epiphany strategy appears to be connected to the following
managerial steps:
1. Unveiling Opportunities Hidden in the Technology
The first step is to understand what opportunities the technology provides. Looking at the
technology in terms of existing features and performances will inevitably lead to a simple
technology substitution or to screening it off as not useful. Technologies have many hidden
potential opportunities within them, and the first step is to make them appear (Proni, 2007).
Technology epiphanies do not come at the beginning of the era of ferment. On the contrary, they
come after a while and, retrospectively, they appear to have always been clearly written in the
technology itself.
2. Translate the opportunity into a new meaning
This second step requires, first of all, extensive knowledge of the market and of the current
dominant meaning. It is not possible to assess the new meaning as better or worse than the
previous one; by elaborating the new proposal on the ‘why’ dimension instead of the ‘how’,
the new meaning defines a new strategic direction.
3. Develop new features to reveal the new meaning
The new meaning creates no value if it remains potential. To make it actual, it is necessary to
reveal it to customers through a whole set of features.
4. Adapt the business model and development process to the new environment
In an environment shaken by a technological discontinuity, the previous business models and
development process will hardly still be effective.
Figure 2.10 – Steps of The Technology Epiphany Strategy (Buganza, 2015)
In markets where everyone can easily gain access to new technologies, the big winners often are not
the companies that obtain them first and use them to enhance existing products, but the companies
that understand how those technologies can be used to create better customer experiences than
existing applications do. And the biggest winners will be the companies that learn to systematically
produce one technology epiphany after another. The process through which a company can identify
all the different applications a technology can enable is called technology steering.
However, although the technology epiphany phenomenon was defined by Verganti (2009), no
procedures, methodologies or guidelines have been identified by recent research. Indeed,
considering that from Verganti (2009) to Buganza et al. (2015) the unit of analysis was usually the
application and not the technology itself, the gap is evident. As previously reported, the existing
literature focuses more on interpreting applications than on technology development and
integration per se. As a matter of fact, a different way of managing technology integration can
foresee a more meaningful application field for a technology under prototyping.
2.5. Driving a Culture of Innovation within the Companies’ Boundaries
The research highlights the essential role of leadership in the way a new meaning is brought to
people. Once a new meaning has been envisioned, it is fundamental that someone at a sufficiently
high level within the firm identifies its potential and drives the project forward in the
organization. Having a “meaning-sponsor” who drives the process of bringing the innovative proposal
to people is thus fundamental. To succeed in spreading new meanings, the meaning-sponsor must
build a network, as firms’ innovation often relies on the ability to identify and access valuable
knowledge outside their own boundaries (Morillo, Dell’Era, and Verganti 2015).
Hermeneutics offers an important angle to investigate the role of networks in the process of making
sense of things, since external players may significantly affect the way firms reframe their
interpretation of the meaning of products and services. In particular, when it comes to the role of
external networks, the hermeneutic approach makes it possible to appreciate their value in the
process of interpretation by suggesting that new perspectives be actively brought in. The emphasis
lies in the presence and inclusion of an external and sometimes unknown network. The key role,
therefore, is played not
by scientists or creative employees, but by the top management. The leaders, together with a team of
both internal and external interpreters, need to co-create proposals of new meanings in parallel to the
strategic work of vision creation. Innovation of meaning does not come from users, but from
interpretation. The second implication (related to the role of networks) is that in this act of redefining
the framework of interpretation of a firm, external actors, especially those “outside of usual networks”
in the industry, play a major role. They bring a critical stance to what is currently assumed to be
meaningful by a company and add new perspectives in the search for new, profitable, meanings. The
radical innovation of product meanings therefore requires seeing external partners not only as
providers of knowledge and solutions, but also and especially, as providers of arguments and novel
interpretations, in a continuous iterative dialogue. In the same way that designers are able to bridge
languages within industries form a similar context of use (Dell’Era and Verganti, 2009), outsiders are
able to move technologies and specific competences across industries.
The development of a technology epiphany requires an intense dialogue between technology partners
and designers. Collaboration with a supply network with varied technical capabilities is one of the
key factors in the development of technology epiphanies: such partners provide several technological
solutions to free the creativity of designers from as many constraints as possible. To attract the most
valuable designers and new talents, companies cannot focus on only a few technologies; rather, they
must enlarge their portfolio by rotating several technologies (Dell'Era, Marchesi, and Verganti 2010).
A critical factor in generating competitive advantage through design is dynamically innovating design
capabilities, specifically those competences that allow for a fresh look at the opportunities provided
by technologies. Most case and quantitative studies reveal a positive correlation between
collaboration with outside designers/players and performance. The most important factor is to
continuously renew this network of collaborators to access new insights. The success of leading
design-driven companies does not appear to be necessarily related to the choice of a specific designer
but rather to the capability to identify and manage an articulated portfolio of designers (Dell'Era and
Verganti 2010).
To better explain how networks work and how essential they are for achieving radical innovations of
meaning, it is useful to look at the Philips case study presented above. Some of the interpreters
involved in the project that led to AEH were the kinds of people one would expect to see in an
imaging-devices initiative: doctors, hospital managers, engineers of medical equipment, and
marketing experts. Others, however, came from unusual domains.

Figure 2.11 – Design-Driven Network (Verganti, 2009)

Finding the right interpreters can
be the innovation turning point: users are often helpful in understanding existing meanings but rarely
so in envisioning new ones. Companies searching for technology epiphanies should turn to
interpreters: experts who study the same use of a product in the same context, but from different
perspectives. Interpreters may come from inside or outside the organization.
To identify unusual but appropriate domains to enlarge their network of interpreters, companies can
follow three steps:
1. Broaden the scope of the analysis to include the user’s whole experience: what is the users’
experience before, during, and after the product is engaged with?
2. Focus on the whole user experience, looking for factors related to that experience that the
organization normally wouldn’t consider during product development; look for unusual
domains that concern the users’ whole experience but that are usually neglected; and,
finally, consult experts. To identify such experts, a company should seek out people, usually
outside the company’s traditional network, who have conducted research on users’
experiences and have come up with interpretations that challenge the dominant assumptions;
An effective technique for eliciting the insights of interpreters is to observe with them as users
go through an experience; this allows the interpreters to point out behaviours that neither you
nor the users could see and articulate on your own.
3. Once an expert has proved helpful, a company can ask him or her to suggest other people or
organizations they might recruit. The experts do not have to be the most famous people in
their fields, rather, a talented team of young and forward-looking researchers can be more
effective. Indeed, eminent experts who are the source of dominant assumptions may be less
likely than up-and-comers to challenge those assumptions. In addition, if experts are well
known, your competitors are also likely to tap them. Who are the people in each domain doing
research on that experience? Who among them would your competitors overlook? Who are
the emerging researchers who are exploring new perspectives?
New meanings cannot be captured only by “thinking” creatively; they also require “interacting” with
others in society. In fact, in a paper written in 2013, Verganti and Oberg suggest opening up the
company’s doors to new avenues by listening to new, external interpreters outside the typical
dominant networks (not simply being immersed in a group of experts, as in communities of practice,
Wenger, 1999). Meanings are co-generated between different minds that interact with each other;
they come when companies interact with the surrounding world in new and unexpected ways.
3. Technology Innovation
3.1. Introduction
Technology management can be divided into three main phases: selection, development and
integration. The selection phase takes place at the very beginning of the process, when an innovator
has to choose, among different emerging technologies, which one best fits its needs and, at the same
time, has the highest probability of a large market diffusion. In the past many researchers have
discussed the selection phase, and one of the most important works on this topic was written by
Porter et al. in 2004. Their theory, known as Technology Future Analysis, is a set of models and
methodologies that can help companies select the right technology. This chapter explores some of
the main techniques of TFA: scenario planning, backcasting and roadmapping.
The second phase, technology development, is the phase of the innovation process in which
technology opportunities are explored. The last phase, technology integration, covers all the
actions companies put into practice to embed the selected technology into marketable products. The
second part of the chapter is devoted to the analysis of some of the most acknowledged theories on
product innovation, with a particular focus on the social dimension of the integration phase.
3.2. Understanding and Imagining the Future Context
Nowadays the analysis of emerging technologies and their implications is vital to companies, but
also to public corporations, which face a continual pressing need to anticipate and cope with the
direction and rate of technological change. Such analyses guide critical choices ranging from the
multinational level to the individual organization. Decisions that need to be well informed concern
setting priorities for R&D efforts, understanding and managing the risks of technological innovation,
exploiting intellectual property, and enhancing the technological competitiveness of products,
processes and services (Porter, 2004).
The first official account of a systematic outlook on the future of science and technology occurred in
1935 through the New Deal’s National Resource Commission, which started to look into the future
of 13 major inventions. The resulting report was intended to predict the economic and social impact
of these emerging technologies. Forecasting methods were increasingly adopted throughout the
Second World War and even more during the Cold War, when there was a need to cope with dramatic
developments in technology such as guided missiles, nuclear weapons and computing. System
analysis became an important tool in designing such complex systems. The military-industrial
complex needed ways of anticipating levels of performance in weapons and components, and ways
to set feasible performance goals.
The focus of the following decades was on forecasting the rate of technological change. Quantitative
exploratory methods, working from the past to the future, included trend extrapolation, leading
indicators and growth models. However, disillusionment with system analysis spread in the late ‘70s
and ‘80s as it was realised that the uncertainties of technology development defied many “system
analysis” solutions. Finally, the decade of the 1990s initiated an upsurge in technology forecasting
activities, using both old and new techniques.
From a large-company perspective, innovation increasingly depends on collaboration on aspects
ranging from research to product development to customer service. This demands more external
information than vertically integrated companies once required.
Small companies have traditionally relied on innovativeness to survive. In the early decades of the
21st century, rapid technological change implies that they also need to be technologically informed.
In the past, small firms often pleaded a lack of time and resources to invest in forecasting
technologies (V. Coates, 2001).
It is fundamental that any actor involved in the development of an innovation is informed on where
the related technologies are likely to be heading.
This information will pay off in avoiding dead-end initiatives and unexpected events, and in seizing
technological opportunities in the competitive marketplace.
3.2.1. Technology Future Analysis (TFA)
Technology future analysis, or TFA, represents any systematic process to produce judgments about
emerging technology characteristics, development pathways and potential impacts of a technology in
the future (A. Porter et al., 2004). It embraces at the same time technology foresight and technology
forecasting techniques.
Technology foresight refers to a systematic process to identify future technology developments and
their interactions with society and the environment for the purpose of guiding actions designed to
produce a more desirable future. Technology forecasting, instead, is the systematic process of
describing the emergence, performance, features or impacts of a technology at some time in the future.
Many forms of analysing future technologies and their consequences coexist; examples are
technology intelligence, forecasting, roadmapping and foresight (A. Porter, 2003). All these
techniques fit into a field called technology future analysis. These methods have matured rather
separately, with little interchange and sharing of information on theories and processes. As a
consequence of this unrelated progress, there are many overlapping forms of forecasting technology
developments and their implications. Examples of some of the most recently developed forecasting
methodologies are environmental scanning, models, scenarios, Delphi, extrapolation, probabilistic
forecasts, technology measurement and the analysis of chaos-like behaviour in technological data.
3.2.1.1. Scenario Planning, Backcasting and Roadmapping
“Scenario” was introduced into the common language as a term to describe a movie setting. However,
the most recent impetus in the popularization of scenarios is to consider them as a planning device.
Schumpeter said: “We always plan too much and always think too little”. To this regard, Godet
highlighted in 2000, that too frequently the word scenario is confused with strategy. Strategic
planning has been developed during the ‘60s to help organizations master change. Subsequently,
various methodologies and procedures have been developed in order to stimulate the imagination,
reduce collective biases and promote appropriation, and scenario building is one of them.
One of the main functions of the scenario building is to eliminate two errors usually described as the
“hammer’s risk” and “nail’s dream”: people forget what a hammer’s function is when staring at a nail
(the nail’s dream) and they know how to use a hammer and imagine that every problem is like a nail
(the hammer’s risk). The metaphor refers to the tools usually adopted to carefully build a scenario,
which are intended to be simple and inspired by intellectual rigor but often fail in producing expected
results. The reason lies in the highly subjective nature of scenario building, which relies deeply on
natural talent, common sense and intuition. Both rational and emotional thinking are necessary and
complementary attitudes in scenario building.
One of the most accepted definitions of planning was put forward by Russell Ackoff in 1969: “Planning
is to conceive a desired future as well as the practical means of achieving it”. Although the importance
of anticipation has been known to managers for decades, it has long been overlooked
and not widely practiced by decision makers. The reason is to be found in their lack of far-sightedness:
when things are going well, they can manage without any anticipation tools, and when things are
going badly, it is too late: fast action is urgently required and there is no time for planning.
This action-led behaviour, although desirable in the short term, becomes meaningless without
a goal. Only through anticipation and scenario planning are actions coherently directed, with a
clear meaning and direction.
Scenario building can be classified into two broad groups:
1. Scenarios that tell about some future state or condition in which the institution is embedded.
This typology of scenario building is used to stimulate users to develop and clarify practical
choices, policies and alternative actions that may be taken to deal with the consequences of
the scenario built. Its main purpose is to stimulate thinking.
2. Scenarios that assume that a policy has been established. The policy and its consequences are
integrated into a story about some future state. This type of scenario, rather than stimulating
the discussion of policy choices, displays the consequences of a particular choice or set of
choices. It is a tool for explaining or exploring the consequences of some policy decision,
either hypothetically or actually made.
The reason why scenarios have lately become so popular in the business community is that the world
has become more complex while presenting larger elements of ignorance and
unfamiliarity. Scenarios help in dealing with the complexity of these new factors.
Figure 3.1 – Scenario Cone Showing Multiple Possibilities (Bertoluci, 2013)
As already mentioned, creating scenarios requires individual skill, or the gift of imagining them.
However, it is possible to learn some guidelines and general rules of scenario
building, as well as to improve through guidance and feedback. Below is the list of the main
steps of scenario building proposed by Joseph Coates (2000) for building scenarios that describe
a range of alternative futures:
1. Identify and define the universe of concern that the business is dealing with.
2. Define the variables that will be important in shaping that future, usually from 6 to 20
for complex scenarios. This is one of the most intense and critical activities in creating a
scenario.
3. Identify the themes for the scenarios. This task is judgmental, creative and depends upon
experience, as there is an intrinsically unlimited number of scenarios that can be built from a
large set of variables. It is usual to work with 4 to 6 themes. The principle to identify the
themes is to illustrate the most significant kinds of potential future developments.
4. Create the scenario by exploring a theme. Each of the variables should be analysed in order
to define a plausible value, quantitative or qualitative, to assign to them.
5. Write the scenarios. If several people are involved in the task, different members of the team
undertake to write different scenarios. They can be in various formats: speech, articles, memo
etc.
6. The team comes together to read, review and evaluate the developed scenarios. The
process goes back and forth, and may be repeated two or three times until each of the scenarios
is in a satisfactory condition.
7. An optional step is to have one person go through all of the scenarios defined by the team to
give them a uniform style.
To complement the scenario review and evaluation process (step 6), I. Wilson (1998) suggested five
criteria to follow:
1. Plausibility: the selected scenario has to be capable of happening;
2. Consistency: the combination of logics in a scenario has to ensure that there is no built-in
internal inconsistency and contradiction;
3. Utility/relevance: each scenario should contribute specific insights into the future that help to
make the decision;
4. Challenge/novelty: the scenario should challenge the organization’s conventional wisdom
about the future;
5. Differentiation: they should be structurally different and not simple variations on the same
theme.
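Coates's building steps and Wilson's validation criteria can be read together as a simple checklist procedure. The sketch below is purely illustrative: the `Scenario` class, the `validate` helper and the example judgments are assumptions introduced here, not part of either author's framework.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One scenario: a theme plus plausible values assigned to the key variables."""
    theme: str
    variables: dict          # variable name -> assigned qualitative/quantitative value
    narrative: str = ""      # the written scenario (speech, article, memo, ...)

def validate(scenarios, checks):
    """Apply Wilson-style yes/no checks to each scenario and return the failures.

    `checks` maps a criterion name (plausibility, consistency, relevance,
    novelty, differentiation) to a judgment function supplied by the team.
    """
    failures = {}
    for s in scenarios:
        failed = [name for name, ok in checks.items() if not ok(s)]
        if failed:
            failures[s.theme] = failed
    return failures

# Toy usage: a team judges two scenarios against two of the five criteria.
scenarios = [
    Scenario("Rapid adoption", {"demand": "high", "regulation": "loose"}),
    Scenario("Stagnation", {"demand": "low", "regulation": "strict"}),
]
checks = {
    "plausibility": lambda s: True,                      # judged capable of happening
    "differentiation": lambda s: s.variables["demand"] != "medium",
}
print(validate(scenarios, checks))   # empty dict: every scenario passes both checks
```

In practice the judgment functions stand in for the team's qualitative discussion; the point of the sketch is only that each scenario is confronted with every criterion, and any failure sends it back into the review loop.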
The strength and, at the same time, the limitation of scenario planning is that scenarios are often
descriptions of the external world the organization must respond to. Their power can be enlarged by
building so-called normative scenarios, which report the explicit actions the company takes to
influence its position in the market.
To successfully adopt scenario building as a tool for driving a company's strategy, it has been
highlighted, particularly by Renata Karlin in 1998, that scenarios are not to be presented as forecasts
but as possible futures, with a view to stimulating internal awareness and thinking about the future.
A particular technique used for scenario building is backcasting. Backcasting is a planning method
that starts with defining a desirable future and then works backwards to identify the policies and
programs that will connect that future to the present.
The origins of backcasting date back to the 1970s, when Lovins proposed it as an alternative planning
technique for electricity supply and demand: assuming that future energy demand is mainly a function
of current policy decisions, Lovins suggested that it would be beneficial to describe a desirable future
and assess how such a future could be achieved instead of focusing only on likely futures.
From the beginning, it was stressed, in particular in Robinson’s work Future under glass: a recipe for
people who hate to predict, that the purpose of backcasting was not to produce blueprints, but to
indicate relative feasibility and implications of different futures.
After this initial stage, it was realised that backcasting could have a much wider potential for
application, due to its characteristics and its normative nature. In particular, backcasting has been
proposed and tested to deal with complex system innovations. These kinds of radical change are
complicated by the inherent uncertainty of the future and the inherent ambiguity of stakeholders
having different value sets and mental frameworks. The reason why backcasting is a suitable tool to
tackle system innovations is explained by Dreborg, who argues that traditional forecasting is based
on dominant trends and is therefore unlikely to generate solutions based on breaking trends.
Backcasting approaches, instead, due to their normative and problem-solving character, are much
better suited to long-term problems and solutions. The scenarios of a backcasting project should
broaden the scope of solutions to be considered, describing new options and different futures.
During the 1990s, in the Netherlands, backcasting started to involve a broad set of stakeholders.
Processes were characterised by iterations and continuous interactions with a wide set of actors, with
feedback between future visions and present actions. This technique is called participatory
backcasting. Though most approaches found in the literature show differences in methods applied, it
is possible to generalise and translate these into a methodological framework consisting of five stages
(J. Quist, P. Vergragt, 2006):
1. Strategy problem orientation;
2. Construction of sustainable future visions or scenarios;
3. Backcasting;
4. Elaboration, analysis and defining follow-up and agenda;
5. Embedding of results and generating follow-up and implementation.
The approach should not be understood as linear; on the contrary, iteration cycles between steps are
possible. The process has a dynamic nature, with stakeholders that might leave and others that might
join in.
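As a sketch only, the five-stage participatory backcasting framework can be pictured as a pipeline that permits loops back to earlier stages. The stage names follow Quist and Vergragt; the `run_backcasting` driver and its loop-back convention are illustrative assumptions introduced here.

```python
STAGES = [
    "problem orientation",
    "future visions",
    "backcasting",
    "elaboration and agenda",
    "embedding and follow-up",
]

def run_backcasting(evaluate, max_iterations=10):
    """Walk the five stages, allowing iteration cycles between steps.

    `evaluate(stage)` returns None to proceed linearly, or the index of an
    earlier stage to loop back to (the non-linear nature of the process).
    """
    i, history = 0, []
    while i < len(STAGES) and len(history) < max_iterations:
        stage = STAGES[i]
        history.append(stage)
        back_to = evaluate(stage)
        i = back_to if back_to is not None else i + 1
    return history

# Toy run: loop back once from "backcasting" to revise the future visions.
looped = {"done": False}
def evaluate(stage):
    if stage == "backcasting" and not looped["done"]:
        looped["done"] = True
        return 1          # revisit "future visions"
    return None

# "future visions" and "backcasting" appear twice because of the loop-back.
print(run_backcasting(evaluate))
```

The `max_iterations` guard stands in for the practical observation that stakeholders eventually leave the process: iteration is expected, but not unbounded.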
Similarly to backcasting, roadmaps are employed as decision aids to improve coordination of
activities and resources in increasingly complex and uncertain environments. Generically, a “road
map” is a layout of paths or routes that exists (or could exist) in some particular geographical space.
In everyday life, road maps are used by travellers to decide among alternative routes toward a physical
destination. Thus, a road map serves as a traveller’s tool that provides essential understanding,
proximity, direction, and some degree of certainty in travel planning. Lately, the single word
“roadmap” has surfaced as a popular metaphor for planning science and technology resources. In this
last context, a roadmap provides a consensus view, or vision, of the future science and technology
landscape available to decision makers. The roadmapping process provides a way to identify,
evaluate, and select strategic alternatives that can be used to achieve a desired technological objective.
There are certainly more future alternatives, however, the process of roadmapping helps narrow the
field of requirements and possible solutions to those most likely to be pursued. At the application
level, a product–technology roadmap is a disciplined, focused, multiyear business planning
methodology. For the product manager, the degree to which a roadmap can be implemented is as
important as its strategic value.
Figure 3.2 – Example of Roadmap with Future Technology Alternatives (adapted from Bray, 1998)
According to Radnor (1998), technology, product, and related forms of corporate/industry
roadmapping are being implemented gradually in large-scale technically centered firms. To date, the
published literature on roadmapping is sparse; however, a significant amount of industry-based
information is available. From this variety of uses, a taxonomy was established in the attempt to
classify roadmaps according to their location in applications–objectives space. These independent
roadmap applications can be classified broadly as follows:
1. Science and technology maps or roadmaps;
2. Industry technology roadmaps;
3. Corporate or product-technology roadmaps;
4. Product/portfolio management roadmaps.
Garcia and Bray (1998) underscore the major uses of, and benefits derived from, technology
roadmapping:
1. roadmaps help develop consensus among decision makers about a set of technological needs;
2. roadmapping provides a mechanism to help experts forecast technology developments in
targeted areas;
3. roadmaps present a framework to help plan and coordinate technology developments at any
level: within an organization/company, throughout an entire discipline/industry, even at cross-
industry/national or international levels.
Overall, the main benefit of roadmapping is the provision of information that helps make better
science and technology investment decisions.
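A product–technology roadmap of the kind shown in Figure 3.2 is, at its core, a time-indexed map from planning horizons to candidate technology alternatives, which the process then narrows. The `Roadmap` class below is an illustrative data model, not a published roadmapping tool; the objective, years and readiness scores are invented.

```python
from collections import defaultdict

class Roadmap:
    """Time-phased technology alternatives toward a stated objective."""
    def __init__(self, objective):
        self.objective = objective
        self.alternatives = defaultdict(list)   # horizon (year) -> candidate options

    def add(self, year, technology):
        self.alternatives[year].append(technology)

    def narrow(self, is_pursuable):
        """Roadmapping narrows the field to the options most likely to be pursued."""
        return {y: [t for t in opts if is_pursuable(t)]
                for y, opts in sorted(self.alternatives.items())}

# Toy corporate product-technology roadmap.
rm = Roadmap("next-generation storage product")
rm.add(2025, {"name": "incremental cell design", "readiness": 8})
rm.add(2027, {"name": "new substrate", "readiness": 4})
rm.add(2027, {"name": "licensed process", "readiness": 7})

selected = rm.narrow(lambda t: t["readiness"] >= 6)
print({y: [t["name"] for t in opts] for y, opts in selected.items()})
# -> {2025: ['incremental cell design'], 2027: ['licensed process']}
```

The `is_pursuable` predicate stands in for the consensus-building activity the literature describes: the structure of the roadmap survives, while the selection criterion is supplied by the decision makers.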
3.3. The Problem-Solving Approach
Strategies and modes of experimentation can be an important factor in the effectiveness of a firm’s
innovation process and its relative competitive position. The outputs of R&D, such as new research
findings and new products and services, are often generated with the aid of specialized problem-
solving processes. These processes can significantly affect the kinds of research problems that can be
addressed, the efficiency and speed with which R&D can be performed, and the competitive
positions of the firms employing them. The rapid improvement of problem-solving processes has
increased the efficiency and effectiveness with which outputs can be produced.
Research into the nature of problem-solving shows that it consists of trial and error, directed by some
amount of insight as to the direction in which a solution might lie (Barron 1988). This problem-
solving process through trial and error is usually referred to as experimentation. It begins with the
selection or creation of one or more possible solutions. These are then tested against an array of
requirements and constraints (Simon, 1969). The new information that a trial-and-error experiment
provides to the experimenter consists of those aspects of the outcome that he or she did not know or
foresee in advance. The process can be seen as cycles that repeatedly generate and test design alternatives
(Simon, 1969).
Experimentation is often carried out using simplified versions, called models, of the eventually-
intended test object or environment. The value of using models is twofold: reducing investment
in the aspects of reality that are irrelevant for the experiment, and controlling some aspects of reality
that would affect the experiment, in order to simplify analysis of the results. It is of fundamental
importance to identify the relevant variables to include in the model, as a model's incompleteness
is a source of unexpected errors. The execution of an experiment can be seen as a
four-step iterative cycle (S. Thomke, 1998). The design can be improved over many iterations, until
the marginal cost of a further improvement exceeds its marginal benefit.
Thomke coined the term “experimentation efficiency”, defined as the economic value of information
learned during an experimental cycle, divided by the cost of conducting the cycle. When an
experiment is costly, and the incremental value of information learned is small, the efficiency is
considered to be low. The effectiveness of an experiment does not depend solely on the adopted
technique, but also on the choices made by the experimenter.
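Thomke's experimentation efficiency, together with the stopping rule of iterating until the marginal cost of improvement exceeds the marginal benefit, reduces to simple arithmetic. The sketch below is a toy illustration; the per-cycle values and the fixed cycle cost are invented numbers.

```python
def experimentation_efficiency(value_of_information, cost_of_cycle):
    """Thomke: economic value of information learned per cycle / cost of the cycle."""
    return value_of_information / cost_of_cycle

def run_cycles(marginal_values, cycle_cost):
    """Iterate design-improvement cycles while marginal benefit exceeds marginal cost."""
    completed = []
    for v in marginal_values:          # value learned in each successive cycle
        if v <= cycle_cost:            # marginal cost now exceeds marginal benefit
            break
        completed.append(v)
    return completed

# Illustrative: diminishing value per cycle, fixed cost of 10 per cycle.
values = [100, 60, 30, 12, 8, 5]
done = run_cycles(values, cycle_cost=10)
print(len(done))                                              # 4 cycles are worthwhile
print(experimentation_efficiency(sum(done), 10 * len(done)))  # 202 / 40 = 5.05
```

The diminishing sequence of values captures the intuition in the text: early cycles are cheap relative to what is learned (high efficiency), while later cycles learn little and are no longer worth their cost.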
Novel experimental methods can significantly affect the relative competitive position of firms if
techniques that offer such advantages are difficult or impossible to buy or sell in the available factor
markets, and difficult to replicate as well. It is likely that the methods that are the hardest to transfer
to new users will be the ones that offer the greatest competitive advantage to method users, while
method sellers are likely to appropriate the most benefit from methods that are easily transferred.
3.4. Innovation Strategies
Broadening the perspective, the technology management process lies within the wider process of
innovation management. The process can be viewed as an evolutionary integration of organization,
technology and market. As identified by G. Pisano, despite massive investments of management time
and money, innovation remains a frustrating pursuit in many companies. According to him, the
reasons why innovation efforts fall short go much deeper than the commonly cited cause,
a failure to execute: the problem is rooted in the lack of an innovation strategy.
A strategy can be defined as the commitment to a set of coherent, mutually reinforcing policies or
behaviours aimed at achieving a specific competitive result. A good strategy can promote alignment
among diverse functions of an organization, clarify objectives and priorities, and help focus efforts
around them.

Figure 3.3 – Thomke Experimentation’s Framework (1998)

Companies regularly define their overall business strategy and specify how various
functions will support it. However, firms rarely articulate strategies to align their
innovation efforts with their business strategies. Even though a company may adopt a series of best
practices for innovation, if these are not organized into a coherent set of interdependent processes
and structures that determine how the company searches for novel solutions, synthesizes ideas into
business concepts and selects which projects get funded (the so-called innovation system), the company
will not be able to make trade-off decisions and may suffer from a misalignment of priorities.
Below are presented four innovation strategies that particularly fit technology-based companies
needing to innovate in truly dynamic and changing environments, under competitive
pressure and therefore with limited time to act: continuous product innovation,
organizational adaptability, the collaborative community model, and industry platforms and
ecosystems.
3.4.1. The Continuous Product Innovation
Many companies face strong pressure regarding the performance their products must deliver to
successfully address customer needs. In order to combine excellence in both satisfying
today's customer needs and anticipating the demand of tomorrow's customers, firms are required to
combine the capability of exploiting old certainties and exploring new possibilities, meaning that, in
order to sustain competition, firms are required to be excellent in both incremental and radical
innovation.
Tomorrow's customers ask for products with higher and higher levels of performance, or attribute
sets, that are possible through the incremental innovation of the technology embedded in the product,
but also for new products with different attribute sets from the existing products, that are the result of
radical innovation processes.
As a consequence of this double focus both on radical and incremental innovation, in recent years,
the debate regarding organizational theory and strategy has shifted from the sustainability of
competitive advantage to the capacity to manage innovation and change (Brown, 1997). Today, in a
world of constant change, constant innovation is the only possible answer. To respond to this
increasing pressure to innovate, companies, starting from the early 1980s, began to put a growing
emphasis on the role of product innovation management as a potential source of continuous change.
New products have been indicated as the most natural driving force behind change and renewal at the
corporate level. Introducing new products in the market on a regular basis has been considered the
most effective way of turning change into an endemic and continuous process. Managers understood
that many possibilities to innovate arise simply by progressively shifting attention from the single
product to families of related products, and also that relevant competitive advantages can be gained
by extending innovation to later phases of the product life-cycle that are not part of the development
process. These approaches changed the way New Product Development (NPD) projects are seen
within companies. They are no longer the isolated work of an appointed team; instead, they are steps
within a corporate-wide process of knowledge creation, embodiment and transfer, which Bartezzaghi
et al. (1998) call continuous product innovation (CPI).
In this perspective, innovation is a continuous and cross-functional process of learning and
improvement involving and integrating a growing number of competencies inside and outside the
organisational boundaries.
Summing up, two dimensions are combined in the CPI model:
1. CPI embraces not only NPD but also the subsequent phases of the product life-cycle;
2. CPI moves the traditional perspective from the single product to a family of products.
Moving from these two dimensions, to assure the fulfilment of companies’ innovation objectives, one
of the primary managerial tasks is to stimulate knowledge creation, embodiment and transfer.
In the CPI model, in fact, product innovation is strongly influenced by the company’s knowledge
management ability and attitude. Knowledge management (KM) is the process of creating, sharing,
using and managing the knowledge and information of an organization. In particular, continuous
innovation requires the simultaneous presence of three fundamental processes at the organizational
level: knowledge creation and absorption, knowledge integration and knowledge reconfiguration.
Today many streams of the literature, coming from different perspectives, theorize the necessity to
involve people in innovation activities, as all employees are reckoned to have the capacity to
solve problems creatively, thus amplifying the organizational innovative base. An important element
of the CPI process, from the KM and learning perspective, is in fact the continuous and widespread
participation in problem-solving activities. This has three major effects: the
mobilization of the organizational core knowledge base, the systematic capture and conversion of
tacit knowledge into explicit knowledge, which can be articulated in standard operating procedures,
and the possibility to activate learning processes. It is in fact recognized that, through the product
development process, organizations are able to integrate dispersed knowledge of different natures
in an innovative way and thus generate effective new knowledge.
In the KM of a company, an important role is played by the development of dynamic capabilities. In
organizational theory, dynamic capability is the capability of an organization to purposefully adapt
an organization's resource base. The concept was defined by David Teece, Gary Pisano and Amy
Shuen, in their 1997 paper Dynamic Capabilities and Strategic Management, as "the firm’s ability to
integrate, build, and reconfigure internal and external competences to address rapidly changing
environments."
Figure 3.4 – The Continuous Innovation Processes (Mohapatra, 2016)

Dynamic capabilities that a company has to develop in order to sustain and enhance continuous
innovation are:
1. Dynamic capabilities that allow the simultaneous and continuous creation, absorption and
integration of knowledge. As maintained by Cohen and Levinthal (1990), the ability to acquire
knowledge is directly related to the presence of previous related knowledge, meaning that
firms must already have invested in technical knowledge if they want to benefit from the
knowledge they absorb;
2. Dynamic capabilities that emerge from a context that spurs creativity in all parts of the
organization at any time. The theory of dynamic capabilities has already shown the relevance
of continuous learning in order to introduce new product innovations and adapt to changing
market conditions;
3. Each dynamic capability, in order to be effective, must be composed of four company
resources: those regarding human and physical capital, structures and systems, and company
culture. Each capability must also be designed to be coherent and to fit its task. These
four organizational resources therefore represent the building blocks of product innovation;
4. Dynamic capabilities that enhance the complex process of adaptation that must take place at
the organizational level whenever firms face a change in technological and market conditions.
Managing product innovation as a continuous process is a challenging task presenting many
difficulties and requiring new skills and competencies at all levels in the organisation.
At an operative level, CPI requires the integration and involvement of many actors outside the
traditional R&D boundaries. CPI also requires rethinking the traditional role of management
coherently with its intrinsically bottom-up non-hierarchical nature. Rather than supervising and
planning innovation, the prime concern of management should be fostering knowledge creation and
transfer by giving individuals and groups, at all levels, opportunities and tools to learn from their own
and others' experience, and to use this learning to innovate the product according to corporate
priorities. Effective management of CPI therefore entails stimulating and sustaining behaviours at the
organisational level concerning knowledge generation, sharing and transfer inside and outside
organisational borders. Being a cross-disciplinary process based on interactions and direct
communication among a wide set of heterogeneous competencies, CPI requires high qualifications and
a systemic view in order to assure effectiveness and mutual understanding.
3.4.2. The Organizational Adaptability
Organizational structure and its characteristics have always been interesting research objects.
In particular, many researchers have lately conducted studies on which organizational
structures better perform in sustaining innovation within companies’ boundaries. With the change of
competitive market conditions and the evolution of the different industries throughout the years,
many different strategies have proven to be winning solutions. However, all the successful companies
that have been leaders in their own industry shared one characteristic: adaptability. The goal of most
strategies is to build an enduring competitive advantage by establishing clever market positioning or
assembling the right capabilities and competencies for making or delivering an offering, but given
the new level of uncertainty, sustainable competitive advantage no longer arises exclusively from
position, scale, and first-order capabilities in producing or delivering an offering. All those features
are essentially static. Increasingly, managers from top level companies are finding that competitive
advantage stems from the “second-order” organizational capabilities that foster rapid adaptation.
Instead of being really good at doing some particular thing, companies must be really good at learning
how to do new things. Thus, companies’ adaptability has played a fundamental role in assuring
business survival during the deep context transformations of recent decades. Recent
empirical evidence suggests that adaptability is a source of both sustainable competitive advantage
and success in new product development (NPD) and commercialization. It has been the distinguishing
feature that fostered companies in the transition from old to new organizational models.
Adaptability can be defined as a firm's ability to identify and capitalize on emerging market and
technology opportunities, as well as on human, informational, and financial resources.
Managers are assumed to consciously modify the alignment of the firm to its environment in the form
of adapting technology, organizational structure, and business processes. In order to adapt, a
company must have its antennae tuned to signals of change from the external environment, decode
them, and quickly act to refine or reinvent its business model and even reshape the information
landscape of its industry.
Empirical findings indicate that the components of adaptability have a strong positive association
with a high level of innovativeness. Furthermore, the statistical analyses verify the underlying
mechanism that contingency factors significantly influence the interplay between adaptability and
innovativeness.
One of the principal advantages that adaptive companies can exploit relates to the strategy of
experimentation. Traditional approaches can be costly and time-consuming, and may saddle the
organization with an unreasonable burden of complexity. To overcome these barriers, a growing
number of adaptive competitors are using an array of new approaches and technologies, especially in
virtual environments, to generate, test, and replicate a larger number of innovative ideas faster, at
lower cost, and with less risk than their rivals can.
In addition to changing the way in which they conduct experiments, companies need to broaden the
scope of their experimentation. Traditionally, the focus has been on a company’s offerings, essentially
new products and services, but in an increasingly turbulent environment, business models, strategies,
and routines can also become obsolete quickly and unpredictably. Adaptive companies therefore use
experimentation far more broadly than their rivals do. Finally, experimentation necessarily produces
failure. Adaptive companies are very tolerant of failure, even to the point of celebrating it.
The past three decades have witnessed considerable evolution in organizational designs. Miles and
Snow (1978) presented a theoretical framework that describes how organizations adapt to their
environments. Their framework has two main components: the adaptive cycle and four types of
organizations.
The adaptive cycle refers to the dynamic process in which organisations continually adjust internal
interdependencies to environmental opportunities and risks. With respect to the adaptive cycle, firms
constantly face adaptive challenges that can be classified into three broad categories: the
entrepreneurial problem, the engineering problem, and the administrative problem. In mature
organizations management must solve each of these problems simultaneously but to better understand
their nature, it is possible to consider them as if they occurred sequentially.
The entrepreneurial problem refers to the various domains the firm chooses to operate in—its products
and services, types of customers, and the geographic spread of its target markets. It is how a company
should manage its market share.
The engineering problem encompasses the technologies by which the firm produces its products and
services as well as the distribution systems used to deliver products and services to customers. It
involves how a company should implement its solution to the entrepreneurial problem.
Last, the administrative problem refers to the organizational structures and management processes
the firm uses to operate on a continuing basis. It considers how a company should structure itself to
manage the implementation of the solutions to the first two problems.
Miles and Snow (1984) argued that a firm’s overall strategy must fit its environment (external fit),
that organizational structures and management processes must be aligned with strategy (internal fit),
and that the entire organization must continually adapt to maintain fit over time (dynamic fit).
To these problems, Miles and Snow suggested that many companies develop similar solutions. As a
result, they postulated that there are four general strategic types of organizations: prospector,
defender, analyser, and reactor organizations.
The original studies conducted by Miles, Snow, and their colleagues in the 1970s showed that there
are three common routes that firms can take as they move through the adaptive cycle: prospector,
defender, and analyser. Each of these labels indicates the strategy the firm uses to compete in its
chosen markets, and each type has its own management system that is specifically suited to its
strategy. Each type has its own unique strategy for relating to its chosen market(s), and each has a
particular configuration of technology, structure, and process that is consistent with its market
strategy.
A fourth organizational type was later added to the three original strategic types: the reactor. The
reactor is a form of strategic failure, meaning that inconsistencies exist among its strategy,
technology, structure and process.
Figure 3.5 – The Adaptive Cycle Challenges (Miles, Snow, 1987)

The four types are described as follows:
1. Prospectors are firms that continually develop new products, services, technologies, and
markets. They achieve success by moving first relative to their competitors, either by
anticipating the market based on their research and development efforts or by building a
market through their customer-relating capabilities;
2. Defenders are firms with stable product or service lines that leverage their competence in
developing process efficiencies. They search for economies of scale in markets that are
predictable and expandable;
3. Analysers are firms that use their applied engineering and manufacturing skills to make a new
product better and cheaper, and they use their marketing resources to improve product sales.
They search for proven technologies with significant potential for generating new products
and services;
4. Reactors are firms that do not have a systematic strategy, design, or structure. They are not
prepared for the changes they face in their business environments, and their new product or service
development fluctuates in response to how their managers perceive the environment.
Reactor organizations do not make long-term plans, because they see the environment as
changing too quickly for such plans to be of any use, and they possess unclear chains of command.
Viewed from the perspective of the industry as a whole, innovation often occurs because all three
strategy types are present (Miles, Snow, and Sharfman, 1993). That is, analysers follow prospectors
into new markets but tend to focus on those markets in which they already have products that can be
enhanced or in which they have a particular process advantage. The unique capability of analysers
lies in their ability to envision the market potential for a new product or technology and their skill in
rapidly commercializing innovations—in essence, the ability to extend a technology to a larger
domain than that envisioned by its originators (Haanæs and Fjeldstad, 2000). Thus, whereas
prospectors seek returns based on their ability to invent, analysers seek returns based on their ability
to perform product modifications and enhancements using established technologies. For their part,
defenders focus on standardizing the technologies and products developed by other firms while
lowering overall costs by becoming increasingly efficient.
The adaptive cycle involves three intertwined aspects of adaptability: technology mode, market focus, and
organizational design (cf. Miles and Snow, 1978).
1. Technology Mode - Technological Aspect of Adaptability: From a strategic point of view, the
technology mode can be closely linked with the efficient use of a firm's resources, skills, and
competencies, which, in turn, affects organizational learning in the context of the technologies
deployed;
2. Market Focus - External Aspect of Adaptability: We expect that a company's market focus
can be defined on the basis of the broadness of its customer base. The market focus dimension
places firms on a continuum (narrow-broad) according to the extent to which they adapt to
their market environment and target their market opportunities;
3. Organizational Design - Internal Aspect of Adaptability: The structural dimension consists of
elements related to reporting relationships and configuration issues, such as centralization,
formalization, and standardisation;
Different organizational forms are described in the following paragraphs. To support the
understanding of the concepts presented, the definitions of community, ecosystem, and platform
(from the Technology Innovation Management Review) are summarized below:
1. The community is an organization of people: a voluntary group with common
interests and a similar sense of identity. Communities can take many different forms;
2. The ecosystem is an organization of economic actors. A business ecosystem is a field of
economic actors whose individual business activities, anchored around a platform, share in
some large measure the outcome of the whole ecosystem;
3. The platform is an organization of things. A set of technological building blocks and
complementary assets that companies and individuals can use and consume to develop
complementary products, technologies, and services.
3.4.3. The Collaborative Community Model
Historically, firms attempted to commercialize their newly invented technologies by ‘‘going it
alone’’, relying mostly if not entirely on their own ideas and resources to achieve success in the
marketplace. However, with the advent of newer organizational forms such as multi-firm network
organizations and community-based organizational designs, firms with complementary technological
and marketing capabilities frequently work together to develop new products and services.
The collaborative community of firms model is the most recent organizational approach designed to
achieve continuous product development and commercialization.
During the post-World War II years, prospector, defender, and analyser firms tended to operate as
independent, self-contained firms. Beginning in the 1970s, however, the U.S. competitive landscape
began to change dramatically: most large, hierarchically structured firms of the time struggled to adapt
to the global economy’s changing markets and technologies. Only after much organizational
disruption and upheaval, which produced a variety of new adaptive mechanisms such as outsourcing,
off-shoring, downsizing, and delayering, did a new organizational form emerge that appeared to have
the potential to reverse the declining competitiveness of many U.S. firms.
That organizational form is called the multi-firm network (Miles and Snow, 1986; Thorelli, 1986).
The multi-firm network model offered significant organizational improvements in both effective
market exploration and efficient operations over the traditional model of the self-contained, vertically
integrated firm. Network organizations are different from traditional hierarchical organizations in
several respects. First, instead of holding in-house all the resources required to produce a given
product or service, networks use the collective assets and resources of several (or many) firms located
along the industry value chain (Porter, 1985). Second, networks rely heavily on market mechanisms
to manage decision-making processes and resource flows (Halal, Geranmayeh, and Purdehnad, 1993).
Third, members of the network recognize their interdependence and are willing to share information, to work
with one another, and to customize their product or service, all to maintain their position within the
network. Last, many networks expect their members to play a proactive role, to voluntarily engage in
behaviour that improves the final product or distribution system rather than simply fulfilling a
contractual obligation.
The multi-firm network organization combines its members’ complementary resources and activities,
and it allows each firm to leverage its particular set of capabilities. A network organization’s greater
combinatory flexibility reduces innovation time, enhances commercialization opportunities by
exploiting downstream partners’ market access (Hagedoorn, 1993), and allows exploration-oriented
firms to exploit the efficiency of their network partners.
The adoption of the multi-firm network model is particularly frequent among firms in knowledge-intensive
industries such as telecommunications equipment and biotechnology, where the motivation is
to accelerate and broaden joint learning.
Beginning in the late 1980s and early 1990s, especially in so-called high-velocity environments
(Eisenhardt, 1989; Eisenhardt and Brown, 1998), firms began to face a new set of challenges and
moved from what Miles and Snow (1994) referred to as ‘‘stable’’ networks toward a ‘‘dynamic’’
network form. In high-velocity environments, rapid technological and market changes challenge the
stability of particular network configurations because changes in product or service components often
require forming new cross-firm relationships while dropping others. As a result, network member
firms often find themselves moving in and out of particular networks and markets.
Beginning in the late 1990s and continuing to the present, firms began to move toward a new business
model: the ‘‘community’’ model.
A number of firms are presently establishing or joining communities of other firms with the overall
purpose of enabling knowledge sharing and providing mechanisms and infrastructure services that
improve the participants’ ability to network both within the community and outside the community.
Communities nurture the capabilities of their members, and they provide shared services that allow
the firms to collaborate with one another and to accomplish more than they could achieve on their
own. A community of firms is a form of organization in which independent member firms network
with one another but also commit to a set of shared values and norms and where there are mechanisms
to exert moral suasion and to extract compliance from members.
The overall purpose of such communities is to provide an ongoing, trust-based environment in which
firms can share technical and market knowledge with both current and potential partners without fear
of exploitation and with the expectation of common gain.
In industries such as biotechnology, computers, telecommunications equipment, medical equipment, and
nanotechnology, pioneering firms are currently exploring the community model for the purpose of
ensuring the full utilization of continuously developing knowledge.
Community Forms
Although there are hundreds of thousands of existing innovation-related communities that come in a
variety of forms, the vast majority of them can be classified using two main dimensions: (1) the
predominant means of participation (closed vs. open); and (2) the predominant governance structure
(hierarchical vs. flat) (Pisano and Verganti, 2008). For innovation purposes, the most effective form
of community has proven to be one with open participation and a flat governance structure. In such
communities, reputation is important in the formation of relationships because
it is a signal of trustworthiness.
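The two-dimensional classification above can be sketched as a simple lookup. The quadrant labels below follow the terminology commonly attributed to Pisano and Verganti's (2008) framework; treat both the labels and the function itself as an illustrative assumption rather than part of the original taxonomy.

```python
def classify_community(participation: str, governance: str) -> str:
    """Map the two Pisano-Verganti dimensions to one of four community forms.

    participation: "open" or "closed"
    governance:    "hierarchical" or "flat"
    """
    quadrants = {
        ("open", "hierarchical"): "innovation mall",
        ("open", "flat"): "innovation community",
        ("closed", "hierarchical"): "elite circle",
        ("closed", "flat"): "consortium",
    }
    key = (participation.lower(), governance.lower())
    if key not in quadrants:
        raise ValueError(f"unknown dimensions: {key}")
    return quadrants[key]

# The form the text identifies as most effective for innovation:
print(classify_community("open", "flat"))  # innovation community
```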
The most recent type of community is focused on the innovation and commercialization of
technology. In this type of community, a particular innovative technology has a large, but not fully
understood, market potential. Here the purpose is to provide an arena in which firms that are members
of the community can work with one another to develop products and services based on the
technology. The innovation and commercialization capacity of such a community is much larger than
the aggregate capacity of individual firms working alone.
The organizational challenges faced in commercializing a new technology are related to dynamically
piecing together all of the components and actors required to develop, market, and install the derived
products or services. Echoing this, the motives usually cited for the formation of strategic alliances
are obtaining access to new markets and technologies, speeding products to market, and pooling
complementary skills and resources. The community supports its member firms in the pursuit of
common objectives, enhancing their capabilities through collective learning and providing the
infrastructure needed for building network relationships among members.
The community model offers member firms the opportunity to collectively develop capabilities and
to increase the efficiency and effectiveness of their networking and collaboration. The collaborative
community of firms model could increasingly be used in situations where the market potential of a
new technology is not foreseeable, or it is crucial to ‘‘win the race’’ in terms of becoming the industry
standard.
Figure 3.6 – The Community Model (Pisano, 2008)
3.4.4. Industry Platforms and Ecosystems
A platform is a “business model that uses technology to connect people, organizations, and resources
in an interactive ecosystem”, as Geoffrey Parker, Marshall Van Alstyne, and Sangeet Choudary write in
“Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make
Them Work for You”.
What managers and researchers refer to as platforms exist in a variety of industries, especially in
high-tech businesses driven by information technology. These firms and their hundreds if not
thousands of partners also participate in platform-based “ecosystem” innovation (Iansiti and Levien,
2004; Moore, 1996). Platforms are distinct in that they are often associated with “network effects”:
that is, the more users who adopt the platform, the more valuable the platform becomes to the owner
and to the users because of growing access to the network of users and often to a growing set of
complementary innovations. In other words, there are increasing incentives for more firms and users
to adopt a platform and join the ecosystem as more users and complements join.
The first popular usage of the term platform seems to have been in the context of new product
development and incremental innovation around reusable components or technologies. These are
usually referred to as internal platforms in that a firm, either working by itself or with suppliers, can
build a family of related products or sets of new features by deploying these components.
Researchers have identified, with a large degree of consensus, several potential benefits of internal
platforms: savings in fixed costs; efficiency gains in product development through the reuse of
common parts and “modular” designs, in particular, the ability to produce a large number of derivative
products with limited resources; and flexibility in product feature design. One key objective of
platform-based new product development seems to be the ability to increase product variety and meet
diverse customer requirements, business needs, and technical advancements while maintaining
economies of scale and scope within manufacturing processes—an approach also associated with
“mass customization” (Pine, 1993).
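The reuse logic behind internal platforms can be illustrated with a minimal sketch: a platform holds a set of common components, and each derivative product adds only its product-specific modules. All component and product names below are invented for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Product:
    name: str
    components: frozenset


@dataclass(frozen=True)
class Platform:
    name: str
    common_components: frozenset

    def derive(self, product_name: str, specific_modules: set) -> Product:
        # Every derivative reuses the platform's common components and adds
        # only what differentiates it -- the source of the fixed-cost savings
        # and development efficiencies described above.
        return Product(product_name,
                       self.common_components | frozenset(specific_modules))


platform = Platform("hypothetical-chassis", frozenset({"engine", "frame", "ecu"}))
sedan = platform.derive("sedan", {"four-door body"})
coupe = platform.derive("coupe", {"two-door body", "sport suspension"})

# The intersection of any two derivatives is exactly the reused platform core.
shared = sedan.components & coupe.components
print(sorted(shared))
```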
The empirical evidence indicates that, in practice, companies have successfully used product
platforms to increase product variety, control high production and inventory costs, and reduce time
to market (Iansiti and Levien, 2004).
External or industry platforms can be defined as products, services, or technologies developed by one
or more firms, and which serve as foundations upon which a larger number of firms can build further
complementary innovations and potentially generate network effects. There is a similarity to internal
platforms in that industry platforms provide a foundation of reusable common components or
technologies, but they differ in that this foundation is “open” to outside firms.
Industry platforms tend to facilitate and increase the degree of innovation on complementary products
and services. The more innovation there is on complements, the more value it creates for the platform
and its users via network effects, creating a cumulative advantage for existing platforms: as they grow
in adoption, they become harder to dislodge by rivals or new entrants, with the growing number of
complements acting like a barrier to entry.
Perhaps the most critical distinguishing feature of an industry platform compared to an internal
company platform or supply chain is the potential creation of network effects: these are positive
feedback loops that can grow at exponentially increasing rates as adoption of the platform and the
number of complements rise.
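This feedback loop can be made concrete with a toy simulation. The functional form and parameters below are arbitrary assumptions for illustration, not an established model: platform value is proxied by users times complements, and each period's adoption grows in proportion to current value, so growth accelerates.

```python
def simulate(periods: int, users: float = 10.0, complements: float = 2.0,
             user_rate: float = 0.001, comp_rate: float = 0.0005) -> list:
    """Return the platform-value trajectory under a simple feedback loop."""
    history = []
    for _ in range(periods):
        value = users * complements        # cross-side value proxy
        users += user_rate * value         # more value attracts more users...
        complements += comp_rate * value   # ...and more complementors
        history.append(value)
    return history


values = simulate(20)
# Positive feedback: value rises every period, and each increment exceeds
# the previous one (accelerating growth).
diffs = [b - a for a, b in zip(values, values[1:])]
assert all(d2 > d1 for d1, d2 in zip(diffs, diffs[1:]))
print(values[0], values[-1])
```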
Research on the disruption of industries by platforms has highlighted characteristics that make
a particular industry especially susceptible (Iansiti and Levien, 2004). Here are some of the types of
businesses that are most likely to join the platform revolution, according to this research:
1. Information-intensive industries: in most industries today, information is an important source
of value, and the more crucial information is as a value source, the closer the industry is to
being transformed by platforms;
2. Industries with non-scalable gatekeepers;
3. Highly fragmented industries: market aggregation through a platform increases efficiencies
and reduces search costs for businesses and individuals looking for goods and services created
by far-flung local producers;
4. Industries characterized by extreme information asymmetries: economic theory suggests that
fair, efficient markets require that all participants have equal access to information about
goods, services, prices, and other crucial variables. However, in many traditional markets, one
set of participants has far better access than others.
In addition, industries that might seem to be susceptible to platform approaches, yet are likely to be
resistant to such disruption, share certain other characteristics. These include the following:
1. Industries with high regulatory control
2. Industries with high failure costs
3. Resource-intensive industries
Sooner or later, most corporations reach a point where their ability to generate growth internally is
dramatically lower than the growth rates expected by the board and CEO and demanded by investors.
To some extent, these businesses have been victims of their own successes. They were able to sustain
high growth rates for a long time because they happened to be in high-growth industries. But once
the growth rates of their industries slowed, their business units could no longer deliver the
performance investors had come to take for granted.
Although acquisition plays an important role in any growth strategy, acquisition cannot substitute for
organic growth. With the support of researchers at Harvard Business School and INSEAD, and in particular
Professor D. Quinn Mills, a research project titled “The CEO Agenda and Growth” was carried out
in 2006. The group of researchers identified and approached 24 companies that had achieved
significant organic growth and interviewed their CEOs, chief strategists, heads of R&D, CFOs, and
line managers who had delivered material growth to their companies. They asked executives and
managers the same basic question: “Where does your growth come from?”. The answers they
received were characterised by a consistent pattern: all the companies grew by creating what they
called new growth platforms (NGPs), on which they could build families of products, services, and
businesses and extend their capabilities into multiple new domains. The platforms provided a
framework in which acquisitions served less as a direct driver of growth and more as a way of
acquiring specific capabilities, assets, and market knowledge.
Possibilities for forming new growth platforms arise when forces of change, such as new or
converging technologies, changing regulatory environments, or social pressures, create the
opportunity to satisfy some unmet or latent customer need.
When a corporation identifies a potential NGP, it can assemble the right portfolio of capabilities,
business processes, systems, and assets that are required to deliver products and services that satisfy
these customer needs. Some of the capabilities needed for an NGP come from redeploying the talent
and technology that the company already has. Capabilities can also come from the company’s external
networks through, for example, technology-licensing agreements and strategic partnerships. Once the
company has listed the technologies and other capabilities it can access internally or through its
partners, it needs to consider what capabilities it must obtain through acquisition.
In the early stages, it can be difficult to see a difference between a new product or service and a new
platform. That’s because many new platforms start as product or service ideas. The differences in
managerial mind-sets become clear as the idea develops. Most important, the CEO needs to be an
active participant in the NGP unit’s discussions, not just the person the unit reports to.
Although companies differ in specifics, as the 2006 research showed, many of them approach
the challenge of platform focus in remarkably consistent ways. Specifically, they:
1. Put credible chief growth officers in charge: in every successful case observed, the head of
an NGP unit, or chief growth officer (CGO), was a future contender for the CEO position or
a unique senior executive with credibility, organizational skill, and a deep interest in
opportunities beyond the current mix of businesses. These executives typically had a sense of
curiosity, an external focus, and authority to act;
2. Believe that the team is more important than the idea: new ideas are often underdeveloped or
unrecognizable as potentially successful businesses. To identify and develop them, it is not
enough to rely on the smarts of a single senior executive; an organized and empowered team
needs to be put in place. The NGP team should consist of three or four senior executives
who not only possess a thorough understanding of the company’s markets and operations but
who are also entrepreneurial and have experience in building new businesses;
3. Have NGP units that are both independent and embedded: NGP units are both independent from and
highly dependent on the corporation’s existing businesses, bureaucracy, way of working, and
related norms and rules. They have to be independent because looking for NGP opportunities
requires a longer performance horizon than a typical business unit has and an ability to step
out of an existing business model and culture. While a strong measure of freedom is important,
an NGP unit must be well embedded in the corporation in order to identify and use existing
knowledge, IP, processes, and assets;
4. Guarantee financial independence: top management needs to ensure that the financing for
an NGP unit is not crowded out by the core business-unit demands;
5. Systematize the NGP creation process: this careful attention to articulating the process of
platform innovation and related activities is important not only for ensuring that NGP creation
becomes a continuous activity but also because it builds companywide commitment to the
very idea of NGPs.
3.5. From the Company Laboratories to the World Stage
Several organizations have developed ongoing crowdsourcing communities that repeatedly collect
ideas for new products and services from a large, dispersed “crowd” of nonexperts (consumers) over
time. In this way, organizations are outsourcing their ideation efforts in an attempt to get fresh
ideas into their innovation processes. Companies are very interested in ongoing
crowdsourcing communities because consumers presumably have specialized knowledge about
their own problems with existing products, and they are intrinsically motivated to freely contribute
their ideas (von Hippel 2005, Fuller 2010).
There is a large and growing literature in cognitive psychology and creativity taking the position that
past experience is detrimental to future ideation efforts. In particular, experimental research finds that
a pervasive impediment to accessing relevant and diverse knowledge bases is cognitive fixation
(Jansson and Smith 1991, Smith et al. 1993, Ward 1994, Smith 2003, Cardoso and Badke-Schaub
2011): people tend to fixate on the principles and features of prior examples, leading to ideas that are
less original. Fixation means that the entire solution space is not completely explored (i.e., providing
design examples may restrict an individual from seeing other alternatives or better solutions).
Despite its promise, crowdsourcing outcomes can be very unsatisfying for companies that engage in it
without a deep comprehension of the nature of individuals’ ideation efforts in such communities. For
instance, previously implemented ideas may influence what individuals believe to be
“acceptable” ideas.
Research generally recognizes that interaction and idea exchange among individuals can facilitate the
retrieval of relevant and diverse knowledge during the idea generation process (Hinsz et al. 1997,
Kohn and Smith 2011). Here, interactions typically involve one-on-one or group discussions and
commenting activity. A fundamental belief in brainstorming is that interacting with diverse others
can stimulate associations in memory that lead to higher quality ideas (Osborn 1953).
In crowdsourcing communities, ideas are shared online among members via reading, voting, and
commenting. Interacting with others via online comments also has been shown to promote active
and critical thinking (Garrison et al. 2001, Schellens and Valcke 2005).
There is a positive relationship between the diversity of an individual’s commenting activity and their
subsequent chances of proposing an idea that the organization finds valuable enough to implement.
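One plausible way to operationalize the "diversity of an individual's commenting activity" is the Shannon entropy of the idea categories a member comments on. This is an illustrative assumption; the cited studies may measure diversity differently, and the category names below are invented.

```python
import math
from collections import Counter


def comment_diversity(categories: list) -> float:
    """Shannon entropy (in bits) of a member's comment-category distribution.

    Higher values mean the member comments across a broader, more even
    spread of idea categories.
    """
    counts = Counter(categories)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


# A member who only comments on one category scores zero diversity;
# a member spread evenly across four categories scores 2 bits.
focused = comment_diversity(["packaging"] * 10)
diverse = comment_diversity(["packaging", "flavour", "pricing", "logistics"] * 3)
assert diverse > focused
```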
A 2014 McKinsey study of more than 300 companies in three European countries listed the steps
a company should take to successfully co-create new services and products:
1. Target the co-creators: the research found that while 90 percent of executives were eager to
get consumers involved in co-creation, only 12 percent of Internet users had actually done so.
In fact, only a quarter of consumers were even aware of the concept, while an additional 5
percent knew about co-creation but not how it actually worked. To overcome this issue, the
best companies parse customer data to actively target co-creators and actively explain how to
use their co-creation platform. They segment their audience and tailor marketing promotions
to what appeals to users: for example, games, money, education, or pure peer recognition.
Implicit in this effort is getting to critical mass: without enough people, the chances of co-
creation success drop. An important element of successful recruiting is finding people who
actually like your brand. Finding people on social media who not only “like” your brand but
are also active promoters is a good place to start, and engaging such promoters can increase the value delivered on both sides.
2. Find the motivation: getting a critical mass of users to do more than drop by and glimpse an
online co-creation platform can be daunting. Having clear navigation and communications is
critical so that potential co-creators know what kind of help the company is looking for. Co-
creation-savvy companies list their needs and organize them by category, mimicking online
co-creation platforms. Understanding and tapping into what motivates co-creators is critical
for getting them to submit good ideas. Not surprisingly, one motivation is compensation.
However, many people aren’t motivated to co-create purely by compensation. The McKinsey
research found that the largest percentage of participants (28 percent) was driven by curiosity
and a desire to learn, followed closely by entertainment and social play (26 percent), and an
interest in building skills (26 percent). Some 20 percent were driven by recognition and
rewards. It’s important to bear in mind the segments of co-creators, such as gamers and social
butterflies, and then design applications with them in mind.
3. Focus on a sustainable payoff: for co-creation to pay off handsomely over time, companies
must focus it on activities that deliver a sustainable competitive advantage. That may mean
being a price leader, a product innovator, or a superior service provider.
Involving outsiders in the creative process of developing products and services is harder than it
sounds. Ever since companies began using the web to solicit ideas from outsiders for enhancing
services and developing products, the promise of co-creation has overshadowed its measurable
impact. Studies have shown that the impact of co-creation (the act of bringing external parties, usually
customers or suppliers, into a company’s creative process) on new product innovation is neither
statistically significant nor economically relevant. The main reason for the mediocre results is that
companies and entrepreneurs have rushed to develop and implement crowdsourcing communities,
even though very little is known about their dynamics and effectiveness. However, for companies
that figure out how to do it well, the rewards of co-creation can extend far beyond building a more effective and
efficient R&D organization. More important, it is a core capability for unleashing the vast ingenuity
of outsiders on an organization’s biggest challenges.
4. Technology Integration
4.1. Introduction
Linking technology and product development processes efficiently leads to the development of
improved products, which enables companies to stay competitive and to grow. Very often product
development under a technologically changing environment results in failure if conducted in a
random and chaotic style, even in well-established organizations (Tushman, Anderson, 1986).
Developing technologies without having a product in mind, or developing products which require
technologies that are not ready to be integrated in them are delicate phases that have to be managed
with the right process to avoid failure. Daim et al. identified technology integration as a critical factor
in successful product development.
This chapter presents what technology integration means and why it is important for companies that
innovate, a series of the most widespread product development methods and, finally, a list of innovation
strategies needed to make technology integration successful.
4.2. Technology Integration
Technology integration is one of the three phases of technology management; chronologically, it is
positioned downstream of the phases of technology selection and technology development. It is defined
as the approach companies use to choose and refine the technologies employed in a new product,
process, or service (Iansiti, West, 1997).
Technology integration can be considered as a different form of innovation which takes place by
integrating different technologies (Chiesa, 2001). In fact, technology innovation is not only the result
of breakthroughs in one specific field; it can also be achieved by putting together pieces of knowledge
from different fields and integrating them in a new way.
Technology integration is particularly critical in industries that are characterized by a high degree of
novelty of the technology and a high complexity of the context. The combination of novelty and
complexity makes the company’s ability of mastering the technology integration process an important
source of competitive advantage (Iansiti, 1995).
Industries that share the novelty and complexity factors do not necessarily have other commonalities,
but these two challenges are sufficient to characterize the required approach to technology integration.
Examples of industries in which technology integration plays an important role are software and
semiconductors, pharmaceuticals, and materials science and chemistry.
The technology integration process first became prominent during the 1990s, when it was considered
the main factor responsible for the comeback of the U.S. electronics industry (Iansiti, West,
1997). To briefly summarize the historical and business context in which technology
integration grew in importance, it is necessary to go back some forty years.
During the 1960s and 1970s, U.S. companies such as IBM, Xerox, and AT&T succeeded by making
breakthrough discoveries in their R&D laboratories and then turning those inventions into
breakthrough products.
At that time, technology integration occurred in the following way: isolated research groups explored
new technologies and chose which ones the development organization would use; the development
organization refined the choice; and the new product or process was then passed on to a manufacturing
organization, in order to remove the bugs. With such a structured process, it was impossible to take a
view of the entire project when choosing technologies, and consequently, many of the choices were
poor (Iansiti, West, 1997).
In retrospect, it is clear that this way of approaching technology integration presented many problems,
but its limits became clear only during the 1980s, when the competitive landscape changed.
At that time, many elements started to change the competitive scenario, making the old innovation
strategy inadequate. The main elements of change were:
1. The number of technologies among which companies could choose grew dramatically;
2. Advances in many industries generated a change in the technology bases: a deeper and wider
knowledge was required for the management of every product;
3. The sources of new technologies increased tremendously, thanks to a number of graduates higher than ever before and to the diffusion of a range of expert suppliers familiar with the latest technology innovations;
4. Product lifecycles shrank, forcing companies to perform the development and commercialization processes at a faster pace;
5. In certain industries, such as the computer industry, marketplace uncertainty grew rapidly;
6. Customers started to require continuously improved performance.
The above considerations are particularly relevant in industries heavily characterized by technology, where the elements of novelty and complexity mentioned above rise quickly: the semiconductor and software industries.
With respect to these industries, an additional element of change has to be taken into account: the global market, and in particular the semiconductor industry, saw the rise of Asian manufacturers from Japan (Hitachi, NEC, and Toshiba) and Korea (Samsung), which were able to gain a substantial advantage by developing new production technologies and investing heavily in technology-integration and manufacturing capabilities.
In this competitive scenario, the biggest problem facing the big players in the U.S. market was no longer creating novel technologies, developing new products, or devising new production processes, but choosing among the vast array of technologies.
Thanks to their internal research functions and to external suppliers, companies had access to numerous technological possibilities, and organizations could take advantage of managerial processes that ensured speedy implementation once the technological path was laid. In the new scenario, however, the advantage often went to the companies most adept at choosing among the vast number of technological options, and not necessarily to the companies that created them.
To cope in the emerging context, companies were required to develop additional capabilities and to
generate different strategies to successfully address the new challenge.
The traditional industrial laboratories were not suited to turning outstanding research into outstanding products and processes. They had been developed to shield research functions from day-to-day
business pressures so that researchers could focus on creating or discovering important technological
concepts. Consequently, the U.S. companies that prevailed in the computer industry in the 1990s
abandoned the traditional step-by-step R&D model and created a radically new one. They did not
stop conducting basic research, but they did shift much of the focus of their research efforts to applied
science, and they turned to an increasingly diverse base of suppliers and partners, like universities,
consortia, and other companies, to help generate technological possibilities. In addition, they formed
teams of expert integrators, people with extensive backgrounds in research, development, and
manufacturing, to develop new generations of major products and processes.
Companies charged the integration teams to take a broad, system-wide outlook and gave them
considerable freedom in conceptualizing the new generation and choosing its technologies. The aim
was for the teams to create a concept of the future product that would fit customers’ requirements and
could be manufactured rapidly and efficiently. The result was an approach to technology integration
that excelled in finding important new technologies that provided extremely successful solutions and
in finding them very quickly and efficiently.
Different methodologies based on the same principles were also developed in Asia, particularly by Japanese and Korean companies, which, during the 1980s, were able to threaten U.S. leadership in the industries mentioned above.
All three models resulted in effective integration, but in different ways; no single approach can be considered better than the others. An effective organization has to rely on the approach to technology integration that suits its national culture and assets (Iansiti, West, 1997).
4.3. Developing Future Products and Services
Because of the revolution that hit the R&D world during the 1980s and 1990s, under the joint pressure of an increasingly demanding global market and an accelerating pace of technological change, both management scholars and practitioners have become aware that, in addition to price, companies in many cases compete ever more on quality and speed in product development. As a consequence, numerous product innovation models emerged. Thanks to the awareness of the importance of technology integration, the validity and helpfulness of these models have been proven in many cases of traditional innovation. However, none of these methodologies has been specifically developed to address cases of innovation of meaning, nor does the literature mention a structured model to guide companies in approaching innovation of meaning.
4.3.1. Introduction to New Product Development
In business and engineering, new product development (NPD) is the complete process of bringing a
new product to market. New product development is described in the literature as the transformation of a market opportunity into a product available for sale; the product can be tangible, like a physical good, or intangible, like a service, an experience, or a belief.
Facing increased competition from home and abroad, maturing markets, the heightened pace of technological change, and the consideration that all products possess limited life spans, executives must continually seek to develop new product offerings that will ensure long-term growth, prosperity, and competitive advantage. These new products, of course, do not automatically appear in the marketplace. Instead, they result from labour-intensive, expensive, and bureaucratic efforts that eventually lead to market entry. In addition to the effort, expense, and bureaucracy
associated with new product development, companies face another problem. Every time new products
are introduced to the market, they put their reputations in jeopardy. New products that are poorly
developed can be quite damaging to existing offerings, providing an additional incentive for
companies to work diligently to ensure new product success. A study by the Conference Board
(Hopkins 1980) revealed that, by an eight-to-one ratio, CEOs believed that their firms would be much
more dependent on new products in the years ahead. A Coopers & Lybrand survey (1985) reported
that most companies are counting heavily on new product development for growth and profitability.
Although risk is inherent in new product development, it can be lessened by adopting a systematic
framework for managing new product activities.
One of the first models developed, which companies still use today in the NPD process, is the Booz, Allen and Hamilton (BAH) Model, published in 1982. It is the best-known model because it underlies the NPD systems put forward later and represents the foundation of the models developed afterwards. Significant work has been conducted to propose better models, but in fact these can easily be traced back to the BAH model. The BAH model's seven steps are: new product strategy, idea generation, screening and evaluation, business analysis, development, testing, and commercialization.
Booz, Allen, and Hamilton's New Product Process serves as a useful guide for new product development. Its seven sequential stages provide invaluable guidance to executives seeking to develop new products in a comprehensive and orderly fashion (Booz, Allen & Hamilton, 1982).
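As a purely illustrative aside, the strictly sequential character of the BAH model can be sketched in a few lines of Python. The step names come from the model itself; the pipeline function and the sample idea are invented for illustration and are not part of the BAH publication.

```python
# Sketch of the BAH model's strictly sequential logic: an idea is
# handed from one step to the next, and no step may begin before the
# previous one has completed (the "relay race" pattern).

BAH_STEPS = [
    "new product strategy",
    "idea generation",
    "screening and evaluation",
    "business analysis",
    "development",
    "testing",
    "commercialization",
]

def run_bah_pipeline(idea):
    """Pass an idea through the seven steps in strict sequence."""
    log = []
    for step in BAH_STEPS:
        # Each step starts only after the previous one has finished.
        log.append((step, idea))
    return log

# Hypothetical usage: seven entries, one per step, in fixed order.
history = run_bah_pipeline("portable ECG monitor")
```

The fixed `for` loop is the whole point of the sketch: in the sequential model there is no mechanism for overlap or for returning to an earlier step, which is precisely the limitation the rugby approach discussed below was meant to overcome.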
In today's fast-paced, fiercely competitive world of commercial new product development, speed and flexibility are essential. Companies are increasingly realizing that the old, sequential approach to developing new products is not appropriate in the new context; the rules of the game for competing effectively in today's world market have changed. To achieve speed and flexibility in developing products, multinationals must use dynamic processes involving trial-and-error phases and learning by doing.
In the old approach, the product development process moved like a relay race, with one group of
functional specialists passing the baton to the next group. The project went sequentially from phase
to phase. The new approach that many companies are adopting can be described using a rugby metaphor: the product development process emerges from the constant interaction of a hand-picked, multidisciplinary team whose members work together from start to finish. Rather than moving in defined, highly structured stages, the process is born out of the team members' interplay (Takeuchi, Nonaka, 1986).
Figure 4.1 – Seven Steps of BAH Model (Booz, Allen, Hamilton, 1982)
The rugby approach is essential for companies seeking to develop new products quickly and flexibly.
The shift from a linear to an integrated approach encourages trial and error and challenges the status
quo. It stimulates new kinds of learning and thinking within the organization at different levels and
functions. Just as important, this strategy for product development can act as an agent of change for
the larger organization. The energy and motivation the effort produces can spread throughout the big
company and begin to break down some of the rigidities that have set in over time.
4.3.2. The Stage-Gate
The New Product Development process is often referred to as the Stage-Gate innovation process, proposed by Dr. Robert G. Cooper, a pioneer of NPD research in the consumer goods sector. Its development is the result of comprehensive research on the reasons why products succeed and why they fail. The Stage-Gate model was introduced as a new tool to conceive, develop, and launch new products.
The formalization of this model for new product development took place during the 1980s, when many industries, at a global level, entered a mature life-cycle phase and began to face an increased degree of competition. Companies needed to develop new products and new businesses to sustain growth and competitive advantage.
One strategic response to the increasing need for effective product innovation is that management must get better at conceiving, developing, and launching new products: not just extensions and incremental improvements, but new products that give the firm a sustainable competitive advantage. To obtain this kind of result, companies first need to manage the innovation process better. Stage-gate systems are seen as one answer, and many companies are increasingly adopting variants of this method.
Although individual companies may refer to their systems by different names, and each may appear unique to that company, there is a surprising parallelism between the different stage-gate approaches
employed in different organizations. Stage-gate systems recognize that product innovation is a
process, and like other processes, it can be managed. It simply applies process-management
methodologies to the innovation process (Cooper, 2000).
83
The basic principle of the stage-gate model is to divide the innovation process into a series of steps, starting from an idea and finishing with the launch of a new product. At the end of one stage and before the next begins, there is a gate, which represents a moment of validation: the work done in the previous stage must be judged qualitatively coherent with expectations. The innovation process cannot continue until the gate's validation criteria are met.
A good analogy is the production process to manufacture a physical product. The way to improve the
quality of output from the process, of course, is to focus on the process itself. A process is subdivided
into a number of stages or workstations. Between each work station or stage, there is a quality control
checkpoint or gate. A set of deliverables is specified for each gate, as is a set of quality criteria that
the product must pass before moving to the next work station. The stages are where the work is done;
the gates ensure that the quality is sufficient. Stage-gate systems use similar methods to manage the
innovation process. They divide the innovation process into a predetermined set of stages, themselves
composed of a group of prescribed, related, and often parallel activities. Usually stage-gate systems
involve from four to seven stages and gates, depending on the company or division.
Starting from a "skeleton" model it is possible to develop a custom-tailored one. Very similar stage-gate approaches have been implemented in a wide variety of industries, including chemicals, financial services, and consumer nondurables. An example of a typical stage-gate system follows: the new product process is initiated by a new product idea, which is submitted to Gate 1, the Initial Screen.
Figure 4.2 – The Stage-Gate Model (Cooper, 2000)
1. Stage 1, Scoping: a quick and inexpensive assessment of the technical merits of the project and
its market prospects;
2. Stage 2, Building the Business Case: this is the most critical stage, the one that makes or breaks the project. Technical, marketing, and business feasibility are assessed, resulting in a business case with three main components: product and project definition, project justification, and project plan;
3. Stage 3, Development: plans are translated into concrete deliverables. The actual design and
development of the new product occurs, the manufacturing or operations plan is mapped out, the
marketing launch and operating plans are developed and the test plans for the next stage are
defined;
4. Stage 4, Testing: in this stage the entire project is validated: the product itself, the
production/manufacturing process, customer acceptance, and the economics of the project;
5. Stage 5, Commercialization: this final stage involves implementation of both the marketing
launch plan and the operations plan.
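To make the stage/gate mechanics concrete, the following minimal Python sketch models stages that produce deliverables and gates at which gatekeepers apply validation criteria, killing the project at the first failed gate. The five stage names follow Cooper's example above; the deliverable structure and the score-based criteria are invented purely for illustration and are not part of Cooper's specification.

```python
# Minimal sketch of the stage-gate mechanics: stages do the work,
# gates check the deliverables before the project may proceed.

STAGES = [
    "Scoping",
    "Building the Business Case",
    "Development",
    "Testing",
    "Commercialization",
]

def run_stage_gate(deliverables, gate_criteria):
    """Advance a project through the stages in order.

    `deliverables[stage]` is whatever the stage produced;
    `gate_criteria[stage]` is a predicate the gatekeepers apply.
    The project stops ("is killed") at the first gate whose
    criteria are not met.
    """
    passed = []
    for stage in STAGES:
        work = deliverables.get(stage)
        # The gate: the stage's deliverables are scrutinized
        # against the quality criteria before moving on.
        if work is None or not gate_criteria[stage](work):
            return passed, f"killed at the gate after {stage}"
        passed.append(stage)
    return passed, "launched"

# Hypothetical example: the project clears every gate except the
# one after Testing (e.g. customer acceptance tests fail).
criteria = {s: (lambda w: w.get("score", 0) >= 0.7) for s in STAGES}
project = {s: {"score": 0.9} for s in STAGES}
project["Testing"] = {"score": 0.4}
result = run_stage_gate(project, criteria)
```

The early return at a failed gate captures the key design choice of the model: gates are go/kill decision points, so no downstream stage consumes resources once the criteria are unmet.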
In the stage-gate process companies’ employees can perform three main roles: team members, project
leaders and gatekeepers.
Each project leader is required to provide the specified deliverables and meet the stated criteria at a
given gate. Gates are manned by senior managers who act as "gatekeepers."
This gatekeeping group is typically multidisciplinary and multifunctional, and its members are senior
enough to have the authority to approve the resources needed by the project. The implementation of stage-gate systems requires certain organizational changes within some firms. Typically, two main requirements have to be met to guarantee a successful application of the stage-gate model. The first is a project-team approach to organizing new product projects: projects can no longer be handed from department to department within the firm; instead, a team and a leader must carry the project through all stages. The second organizational change for some firms is the involvement of senior management as gatekeepers. Successful product innovation requires significant resources and demands the commitment of top management. Gates manned by senior people are not only essential to stage-gate systems; they also build in top management involvement and commitment.
Stage-gate systems, although simple conceptually, have a profound impact on the innovation process
and the basic benefits of the implementation of this process are evident. Even though not all projects
pass through every stage of the model, the stage-gate provides the quality focus that is often missing
in firms' new product programs. It puts discipline into a process that, in too many firms, is ad hoc and
seriously deficient. By building in quality control check-points in the form of gates, stage-gate
systems ensure that project leaders and teams meet high standards of execution. As the project leader
approaches a gate, he or she knows what inputs are required and that these deliverables will be
carefully scrutinized by the gatekeepers. The pressure is very much on the project leader to build
quality into his or her project. Gates ensure that no critical activities have been omitted: an action
plan is agreed upon at each gate, and the deliverables for the next gate are clearly specified. The
process is visible and relatively simple: what is required at each stage and gate is understood by all.
The Stage-Gate provides a roadmap to facilitate the project. Product innovation will always be a high-
risk activity. The stage-gate system is merely a discipline that builds the success ingredients into the
innovation process by design rather than by chance. The results are better decisions, more focus,
fewer failures, and faster developments (Cooper, 1990).
4.3.3. Human-Centered Design
Human-centered design (HCD) is a design and management framework that develops solutions to
problems by involving the human perspective in all steps of the problem-solving process. In fact, the process starts with the people the company is designing for and ends with new solutions that are tailor-made to suit their needs.
Quoting Joseph Giacomin, Director of the HCDI at Brunel University London, human-centered design is focused on "methodologies and techniques for interacting with people in such a manner as to facilitate the detection of meanings, desires and needs, either by verbal or non-verbal means".
Human involvement typically begins with observing the problem within its context: it is all about building deep empathy with the people the company is designing for. The process then goes on to generate ideas through brainstorming, build a set of prototypes, conceptualize, share the progress made with future users, develop and implement the solution, and eventually put the innovative new solution out in the world. Human-centered design is thus defined as innovation inspired by people.
This creative approach to problem solving is the backbone of many design companies, such as IDEO.org; however, it is not limited to the design industry. On the contrary, it is a process used across industries and sectors.
In the human-centered design process it is possible to identify three main phases: discover, ideate and
prototype.
The Discover phase starts by getting out into the world and learning from people. The Ideate phase is about narrowing down what has been learned and translating those learnings into themes and patterns. This is followed by the Prototype phase, where ideas are rapidly evolved into tangible designs based upon real feedback.
Characteristics of the HCD:
● Empathetic - Human-centered design begins from a deep understanding of the needs and
motivations of people, the parents, neighbours, children, colleagues, and strangers who make
up a community.
● Collaborative - Several great minds are always stronger than just one. Human-centered design
benefits greatly from the views of multiple perspectives, and others’ creativity bolstering your
own. HCD taps into the creative abilities we all have, that typically get overlooked by more
conventional problem-solving practices.
● Optimistic - Human-centered design is the fundamental belief that we can all create change,
no matter how big a problem, how little time, what constraints exist or how small a budget.
Designing can be a powerful process.
● Experimental - Expecting perfection makes it hard to take risks and limits the possibilities to
create more radical change. Human-centered design is all about experimenting and learning
by doing. It gives you the confidence to believe that new, better things are possible and that
you can help make them a reality.
Thanks to its characteristics, the HCD approach can be helpful in tackling challenges of many different natures. This approach, in fact, has been proven to increase effectiveness and efficiency, to improve human well-being, user satisfaction, accessibility, and sustainability, and to counteract possible adverse effects of use on human health, safety, and performance. The method is suitable for various project units: products and services, spaces, and systems.
While developing new products, it is extremely important to take into consideration the impact that a certain design can have on social innovation. Challenges that arise when resources or infrastructure are limited require new approaches and well-designed solutions, and the HCD process makes such results achievable. The method is not limited to producing elegant products; it also addresses practical and contingent aspects.
For a service to be effective, it needs to be considered from end-to-end: from the advertisement
campaign to the final delivery. For a service to have the desired impact, it is essential to gain a deep
understanding of the people it will be serving, not only what they need and desire, but what limitations
they face, what motivates them, what is important to them. Being an empathetic process, human-
centered design is a fundamental tool.
Human-centered design can help make the emotional parts of a space as important as the functional.
Physical environments give people signals about how to behave and influence how they feel. By
rethinking the design of hospitals, classrooms, public transportation, banks, libraries, and more,
designers can create new experiences and interaction in these spaces.
Designing systems is about balancing the complexity of many different stakeholder needs with the
needs of the social enterprise. System design often involves setting high-level strategy such as stating
visions, priorities, policies, and key communications around these ideas. HCD is a creative approach
to interactive systems development that aims to make systems usable and useful by focusing on the
users, designing around their needs and requirements at all stages, and by applying human
factors/ergonomics, usability knowledge, and techniques.
Human-centred design tools can be classified based on their intended use.
There are tools that define the boundaries within which to operate; these usually act more to inform the team than to drive the human-centred design process. The most basic form of this kind of tool consists of facts about people, such as anthropometric, biomechanical, cognitive, emotional, psychophysical, psychological, and sociological data and models. Such information provides basic factual statements regarding the abilities and limitations of humans.
Some human centred design tools consist instead of methodologies and techniques for interacting
with people in order to facilitate the detection of meanings, desires and needs, either by verbal or non-
verbal means. The most important tools belonging to this category are: cognitively inspired language-
based techniques such as ethnographic interviews (Spradley, 1979), questionnaires, role playing and
focus groups (Stewart, 2007).
However, there is a growing number of methods that are used to investigate those areas of human
mental activity which are not always directly accessible to conscious thought. Participant observation
(Spradley, 1980), body language analysis (Navarro, 2008), facial coding analysis (Hill, 2010),
electroencephalograms (Du Plessis, 2011) and other approaches for measuring and analysing non-
verbal information are being increasingly adopted by marketers and designers.
Finally, a growing set of human-centred design tools are used for simulating intuitions, opportunities, and possible futures for purposes of immersion, reflection, and discussion. From the currently popular approach of co-design (Von Hippel, 2005) to more speculative techniques such as real fictions and para-functional prototypes (Dunne, 2008), new approaches are being developed and deployed to immerse people in one or more possible futures, providing opportunities for socially experimenting with the envisaged product, system, or service.
4.3.4. Design Thinking
Design thinking refers to creative strategies designers utilize during the process of designing. Design
thinking in business uses the designer’s sensibility and methods to match people’s needs with what
is technologically feasible and what a viable business strategy can convert into customer value and
market opportunity.
The broad meaning that design thinking has today, and the fact that it is used in many fields not directly linked to pure design activities, is the result of an evolution that took place over almost three decades, from the beginning of the 1970s to the end of the 1990s. The idea of design as a way of thinking was proposed for the first time in 1969 by Herbert A. Simon. During the 1980s and 1990s, at Stanford University, Rolf Faste started teaching "design thinking as a method of creative action". Finally, design thinking was adapted for business purposes by David M. Kelley, who founded the design consultancy IDEO in 1991. In 2006, Kelley founded the Stanford Design Center, which has become an intellectual center for the design thinking movement.
This long evolution of the design thinking concept, lasting about 30 years, gives a measure of how sceptical managers were about borrowing a method coming from the design world.
Design has historically occurred fairly far downstream in the development process and has focused
on making new products aesthetically attractive or enhancing brand perception through smart,
evocative advertising. For this reason, even though the design thinking method already existed and was utilized by some pioneering companies, it has captured broad public attention and interest only in the last decade. Many social service and government organizations are now looking at IDEO's design thinking as a path to process innovation in their organizations. Businesses have recently understood that design thinking helps them be more innovative, better differentiate their offering, and bring their products and services to market faster, and they are finally embracing the method. Non-profits, too, are beginning to use design thinking to develop better solutions to social problems.
Today, as innovation’s terrain expands to encompass human-centered processes and services as well
as products, companies are asking designers to create ideas rather than to simply dress them up.
The design thinking process can be seen as a system of overlapping spaces rather than a sequence of orderly steps. In particular, there are three main spaces: inspiration, in which an opportunity is identified; ideation, in which candidate solutions are conceived; and implementation, which brings the project into people's lives. These phases may not follow a strictly sequential order; instead, the process may loop back to one or more of the phases many times, refining the results of the teamwork with every new iteration. This lack of linearity can make the process a little chaotic (Brown, 2008).
The classical starting point of the inspiration phase is the brief: a set of mental constraints that gives the team a framework from which to begin, benchmarks by which it can measure progress, and a set of objectives to be realized. A well-structured brief should be neither too abstract nor too narrow; it has to guide the team without making it get stuck. The inspiration phase is also the moment in which the team collects information about the users through focus groups, surveys, observations, and research.

Figure 4.3 – Design Thinking Spaces (Brown, 2008)
The ideation phase has as its main focus a process of synthesis in which the team members condense what they learned in the inspiration phase about the users, their needs, and the context. The team generates insights that can lead to solutions or opportunities for change. During this phase, divergent thinking is important in order not to fall into trivial solutions. To assure this characteristic, it is important to create a team of diverse people with different cultural and professional backgrounds. Interdisciplinary teams are often able to move into a structured brainstorming process, analysing one provocative question at a time and generating hundreds of ideas. The most important rule during the brainstorming session is to defer judgement: creativity has to be fostered, not stifled.
The third space of the design thinking process is implementation, when the best ideas generated during ideation are turned into a concrete, fully conceived action plan. At the core of the implementation process is prototyping: turning ideas into actual products and services that are then tested, iterated, and refined.
Design thinking is articulated around four principles:
1. Problem-solving attitude - design thinking is a toolkit for approaching problems of all scales and complexities. It has been demonstrated to be particularly helpful in tackling wicked problems, i.e. ill-defined or tricky problems in which both the problem and the solution are unknown, as opposed to "tame" or "well-defined" problems, where the problem is clear and the solution is available through some technical knowledge. A large part of the problem-solving activity, then, consists of problem definition and problem shaping.
2. Human-centered design - a crucial point of design thinking is the understanding of people's needs and possibilities, in order to put the characteristics that really matter into the developed products or services. Insight into how people actually use things is central to design thinking. This insight comes not from crunching numbers, but from observing what people actually do, what they do not do, and understanding what they do not or cannot explain about what they do.
3. Participatory process - users are seen as partners in the development process, sometimes becoming active co-creators. This stands in opposition to the expert mindset, in which users are seen as passive subjects. To improve the organization's design thinking, managers have to evaluate the kinds of thinkers they have and hire others, add designers or engineers to the team, screen applicants for innovative impulses and diverse experiences, listen to customers, and approach the company from their perspective.
4. Iterative approach - design thinking follows an iterative path, constituted by the alternation of a divergent phase, in which many possibilities are considered, and a convergent phase, in which a selection is made among the previously explored possibilities. Due to its iterative nature, intermediate "solutions" are also potential starting points of alternative paths, including the redefinition of the initial problem; it is a process of co-evolution of problem and solution.
As stated in the third principle, all the people involved in the project play a fundamental role in the design thinking process. A key step in successfully utilizing the method is to assemble an interdisciplinary project team that works collaboratively, but it is important that everyone in the organization, not just the team in charge, understands the project goals as leaders guide the creation process.
The IDEO philosophy emphasizes that design is a team sport with three principal values:
1. Many eyes - design teams include diversified expertise such as engineering, human factors,
communication, graphics, ethnography, sociology, and more. Each team member’s unique
perspective helps the other members see things they would not ordinarily see;
2. Customer viewpoint - design teams visit customers on site to interview them and watch what they actually do, including their reactions in extreme "stress" cases. Frequently the solutions developed are relevant to a unique cultural context and will not necessarily work outside that specific situation;
3. Tangibility - design teams build prototypes and mock-ups, try them out, and learn from the
feedback and reactions.
A criterion that can be used to guide the team throughout the process is to analyse the way its members interact and to choose the makeup and methods that best promote individual design thinking rather than groupthink, in order to foster multiple perspectives, quick production, and fluid communication.
It is also important to hold workshops to inspire innovation and introduce specific tools; to reward
risk taking; to encourage designers to mix with the rest of the company, since participants are more
likely to generate good new ideas when they are exposed to outside conditions and to people in other
departments; to support new ideas; to rework the incentive system by developing criteria for measuring
innovation; not to demonize failure; and not to overemphasize regulations or efficiency.
One potentially useful approach is the “design challenge”: invite people to solve a specific problem
within a set of constraints, so they can win recognition and financial reward.
The version of the design thinking process proposed by the d.school, Institute of Design at Stanford,
has five stages: empathize, define, ideate, prototype and test. The steps are not linear; they can occur
simultaneously and be repeated.
1. Empathize - The team needs to fully understand the experience of the user for whom it is
designing. To do this, observation, interaction, and immersion in the users' experiences are
fundamental. Design thinking borrows ethnographic observational techniques from
anthropology and reapplies them to generating practical solutions. This requires empathy,
because feeling alongside others allows the team to move beyond seeing them merely as subjects or
consumers and to really start experiencing things as they do.
2. Define - Process and synthesize the findings from the empathy work in order to form a user
point of view that the team will address with its design.
3. Ideate - Generate a large quantity of diverse possible solutions, allowing the team to step
beyond the obvious and explore a wide range of ideas. Do not try to create ideas in isolation,
in the abstract, or by using words alone; use multiple methods instead.
4. Prototype - Transform the ideas into a physical form so that it is possible to experience and
interact with them and, in the process, learn and develop more empathy. Prototypes and
drawings help develop ideas faster. Prototypes do not have to be expensive or time-consuming;
in fact, the opposite is better, as what counts at this point is to generate useful
feedback and drive an idea forward. Early in the process, prototypes can be very basic, just
enough to see if something is viable.
5. Test - The team needs to try out high-resolution products and use observations and feedback
to refine prototypes, learn more about the user, and refine its original point of view.
Figure 4.4 – The Five Stages of the Design Thinking Process (Brown, 2008)
The design thinking process first defines the problem and then implements the solutions, always with
the needs of the user demographic at the core of concept development. At the core of this process is
a bias towards action and creation: by creating and testing something, you can continue to learn and
improve upon your initial ideas.
4.3.5. Lean Start-up
Lean start-up is a methodology for developing businesses and products. In particular, the method
provides a scientific approach to creating and managing start-ups, with the aim of getting a desired
product into customers' hands faster. It aims to shorten product development cycles by adopting a
combination of business-hypothesis-driven experimentation, iterative product releases, and validated
learning. The central hypothesis of the lean start-up methodology is that if start-up companies invest
their time into iteratively building products or services to meet the needs of early customers, they can
reduce the market risks and sidestep the need for large amounts of initial project funding and
expensive product launches and failures. The Lean Start-up method teaches companies how to drive
a start-up, how to steer, when to turn, and when to persevere and grow a business with maximum
acceleration. It is built around five principles:
1. Entrepreneurs are everywhere: there is no need to work in a garage to be in a start-up;
2. Entrepreneurship is management: a start-up is an institution, not just a product, so it requires
management, a new kind of management specifically geared to its context;
3. Validated learning: start-ups exist not to make stuff, make money, or serve customers. They
exist to learn how to build a sustainable business. This learning can be validated scientifically,
by running experiments that allow us to test each element of our vision;
4. Innovation accounting: to improve entrepreneurial outcomes, and to hold entrepreneurs
accountable, we need to focus on the boring stuff: how to measure progress, how to set up
milestones, and how to prioritize work. This requires a new kind of accounting, specific to
start-ups;
5. Build-Measure-Learn: the fundamental activity of a start-up is to turn ideas into products,
measure how customers respond, and then learn whether to pivot or persevere. All successful
start-up processes should be geared to accelerate that feedback loop.
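To make the loop concrete, the Build-Measure-Learn cycle described above can be sketched as a small simulation. This is only an illustrative sketch: the function names (`build_mvp`, `measure`), the activation metric, and the `PERSEVERE_THRESHOLD` value are assumptions introduced here for illustration, not elements of Ries's methodology.

```python
# Minimal, illustrative sketch of the Build-Measure-Learn feedback loop.
# All names and thresholds (build_mvp, measure, PERSEVERE_THRESHOLD)
# are hypothetical stand-ins, not part of the methodology itself.

PERSEVERE_THRESHOLD = 0.4  # assumed minimum activation rate to keep the current idea


def build_mvp(idea):
    """Build: turn an idea into a minimal testable product (stubbed)."""
    return {"idea": idea, "features": ["core feature only"]}


def measure(product, cohort):
    """Measure: return the fraction of a customer cohort that responds positively."""
    activated = sum(1 for user in cohort if user["activated"])
    return activated / len(cohort)


def build_measure_learn(idea, cohorts):
    """Learn: after each measurement, decide whether to pivot or persevere."""
    for cohort in cohorts:
        product = build_mvp(idea)
        activation = measure(product, cohort)
        if activation < PERSEVERE_THRESHOLD:
            idea = idea + " (pivoted)"  # pivot: change a core hypothesis
        # otherwise persevere: keep the current idea and continue optimizing
    return idea


cohorts = [
    [{"activated": True}, {"activated": False}],   # 50% activation -> persevere
    [{"activated": False}, {"activated": False}],  # 0% activation  -> pivot
]
print(build_measure_learn("subscription box", cohorts))  # subscription box (pivoted)
```

In practice the "measure" step would draw on real customer data (the innovation-accounting metrics of principle 4), and the pivot-or-persevere decision is a managerial judgment rather than a single numeric threshold.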
The lean start-up methodology was first proposed in 2008 by Eric Ries, using his personal experiences
adapting lean management principles to high-tech start-up companies. Ries describes start-ups as "human
institutions designed to create a new product or service under conditions of extreme uncertainty". The
methodology has since been expanded to apply to any individual, team, or company looking to
introduce new products or services into the market. The lean start-up's reputation is due in part to the
success of Ries' book, “The Lean Start-up”, published in September 2011.
Very often start-ups begin with an idea for a product that they think people want. They then spend
months, sometimes years, perfecting that product without ever showing it, even in a very
rudimentary form, to prospective customers. They never speak to potential customers to determine
whether or not the product is interesting to them. Unfortunately, in many cases the result of this
closed process is the start-up's failure. When the product is eventually ready to go to market and
customers ultimately communicate, through their indifference, that they are not interested in it, it is
too late for the start-up to change the developed concept. Ries concluded that the failures of many
start-ups shared similar origins: "it was working forward from the technology instead of working
backward from the business results you're trying to achieve".
Similar to the precepts of lean manufacturing, the lean start-up methodology tries to eliminate
wasteful practices and increase value-producing practices during the product development phase.
Start-ups typically cannot afford to have their entire investment depend upon the success of a single
product launch, so they seek a safer path to success, one that does not require large amounts of
outside funding, elaborate business plans, or a perfect product. A fundamental feature of the model
is that customer feedback is taken into account throughout the whole product development process,
ensuring that the producer does not invest time designing features or services that consumers do not
want. This is done primarily through two modalities: the use of key performance indicators and a
continuous deployment process that, similarly to continuous delivery, results in a reduction of cycle
times.
The continuous deployment process takes the form of the minimum viable product. The minimum
viable product (MVP) is defined as the "version of a new product which allows a team to collect the
maximum amount of validated learning about customers with the least effort" (similar to a pilot
experiment). The goal of an MVP is to test fundamental business hypotheses (or leap-of-faith
assumptions) and to help entrepreneurs begin the learning process as quickly as possible.
The lean start-up methodology proposes that, by releasing a minimum viable product that is not yet
finalized, the company can make use of customer feedback to further tailor the product to the
specific needs of its customers.
There are four main rules to correctly apply the method:
1. Eliminate uncertainty: start-ups usually take a "just do it" approach that avoids all forms of
management, but the lack of a management process tailored to their needs has led many of them
to abandon the whole process. Using the Lean Start-up approach, companies can create order, not
chaos, by using tools to test a vision continuously. The methodology is not simply about
spending less money, failing fast and failing cheap. It is about putting a structured process around
the development of a product.
Figure 4.5 – The Lean Start-Up Methodology (Ries, 2008)
2. Work smarter not harder: a start-up in its early phases is a sort of experiment. If it is successful,
managers can start a campaign: enlisting early adopters, adding employees to each further
experiment or iteration, and eventually starting to build a product. By the time that product is
ready to be distributed widely, it will already have established customers; it will have solved real
problems and will offer detailed specifications for what needs to be built.
3. Develop an MVP: a core component of the Lean Start-up methodology is the build-measure-learn
feedback loop. The first step is figuring out the problem that needs to be solved and then
developing a minimum viable product (MVP) to begin the learning process as quickly as
possible. Once the MVP is established, a start-up can work on tuning the engine. The main
advantage of building an MVP is that the company does not need to spend months waiting for a
beta product to be launched in order to assess customers' reactions and, if needed, change the
company's direction. Instead, entrepreneurs can adapt their plans incrementally.
4. Validate learning: progress in manufacturing is measured by the production of high-quality
goods. The unit of progress for Lean Start-ups is validated learning. It is a rigorous method for
demonstrating progress when one is embedded in the soil of extreme uncertainty.
4.3.6. Design Sprint
A Design sprint is a time-constrained, five-phase process that uses design thinking to reduce the risk
when bringing a new product, service or a feature to the market.
The advantage of this process is that it helps the team in clearly defining goals, validating assumptions
and deciding on a product roadmap before one line of code is written.
The design sprint has its roots at IDEO and the d.school (Institute of Design at Stanford). It has been
refined by GV, formerly Google Ventures, which is the venture capital investment arm of Alphabet
Inc. and provides seed, venture, and growth stage funding to technology companies. The firm operates
independently from Google and makes financially driven investment decisions.
The sprint is a five-day process designed for answering critical business questions through design,
prototyping, and testing ideas with customers. Working together in a sprint, it is possible to shortcut
the endless-debate cycle and compress months of time into a single week. Instead of waiting to launch
a minimal product to understand whether an idea is any good, the team gets clear data from a realistic
prototype. Thanks to the sprint, it is possible to fast-forward into the future to see the finished product
and the customers' reactions before making any expensive commitments.
The design sprint is articulated over five days and is supposed to take place from Monday
to Friday.
The Design Sprint week:
1. On Monday, the team maps out the problem and picks an important place to focus. Monday's
structured discussions create a path for the sprint week. In the morning, the team has to agree
on a long-term goal. Next, it makes a map of the challenge, and in the afternoon it asks
the experts at the company to share what they know. Finally, a target has to be chosen: an
ambitious but manageable piece of the problem that can be solved in one week.
2. On Tuesday, the team sketches competing solutions on paper. After a full day spent understanding
the problem and choosing a target for the sprint, on Tuesday the team gets to focus on
solutions. The day starts with an inspirational phase: a review of existing ideas to remix and
improve. Then, in the afternoon, each person sketches, following a four-step process that
emphasizes critical thinking.
3. On Wednesday, the team has to make difficult decisions and turn its ideas into a testable
hypothesis. By Wednesday morning, the team will have a stack of solutions. This also brings
a problem: it is impossible to prototype and test them all; the team needs to make a choice. In
the morning, team members critique each solution and decide which ones have the
best chance of achieving the long-term goal. Then, in the afternoon, the winning scenes are
taken from the sketches and woven into a storyboard, which serves as a step-by-step
plan for building the prototype.
4. On Thursday, the team creates a high-fidelity prototype. A "fake it" philosophy has to be
adopted in order to turn the storyboard into a prototype. A realistic façade is all that is needed
to test with customers: by focusing on the customer-facing surface of the product or service,
the prototype can be realized in just one day. By the end of Thursday, everything should be
ready for Friday's test: the schedule confirmed, the prototype reviewed, and an interview
script written.
5. On Friday, the team tests the idea with real live humans. The sprint began with a big challenge,
an excellent team and not much else, but by Friday a promising solution has been created and
a realistic prototype has been built. The last step is interviewing customers and learning by
watching them react to the prototype. This test makes the entire sprint worthwhile: at the
end of the day, the team knows how far it has to go, and just what to do next.
Two elements are considered fundamental to the successful application of the Design Sprint process:
an appropriate team composition and a physical space where the team can spend the five working days.
One of the simplest tricks is, in fact, to dedicate a space with walls, a war room, which always helps
teams do better work. The walls of a war room can extend a team's memory, providing a canvas
for shared note-taking and acting as long-term storage for works in progress.
The ideal number of people involved in the sprint is 4-7, including a facilitator, a designer, a decision
maker (often the CEO if the company is a start-up), a product manager, an engineer, and someone
from the company's core business departments (Marketing, Content, Operations, etc.).
Once the company is ready to run a sprint, the next step is selecting and rallying the sprint
team. It is necessary to assemble a balanced team that can fully commit to the process and that
ensures a good mix of personalities, skills, and disciplines.
Five roles are found to be absolutely necessary in running a quality sprint:
1. Product chief: though it is often the case, this will not always be the boss/owner/VP of the
product. What is most important is that this person has the most tangible exposure to the
problem the team is trying to solve.
2. Customer rep: if the product chief does not have immersive, daily interaction with the
potential customers, it is fundamental to recruit this role onto the sprint team.
3. Designer: having a designer involved in the sprint process is important because they can
quickly make things look good enough.
4. Engineer: even though it is not necessary to be a technology company to run a sprint, the
majority of the prototypes the team will be testing will require some kind of engineering talent.
The engineer on the team may produce software, hardware, or some other real-world product
prototype.
5. Marketing: the words used to describe and market the product, as well as the words within the
product itself, are just as important as the form and function of the prototype.
Figure 4.6 – The Design Sprint Process (Knapp, 2016)
An additional role that can be present in a design sprint is the facilitator: it is important not to
forget that someone will need to book conference rooms, organize lunch, capture notes, set timers,
interview customers, and keep the group on task. The primary role of the facilitator is to ensure the
team keeps up with the aggressive pace of a five-day sprint.
4.4. General Purpose Technology
A General Purpose Technology, or GPT, is a set of core technologies whose key function is to
generate and spread incremental or radical innovations across different fields, activities and sectors.
Such technologies have substantial and pervasive societal and economic effects. A GPT can be a
product, a process or an organisational system.
GPTs can be applied to different markets; they improve rapidly and form the basis for a wave of
complementary innovations in a number of diverse existing industries, hence sustaining and
enhancing economic growth (Bresnahan and Gambardella, 1998; Gambardella and Giarratana,
2015).
These characteristics are due to their high level of technological generality, often referred to as
generality of purpose: the fact that a GPT performs some generic function that lies at the heart of
very many actual or potential products and production systems. Most GPTs play the role of "enabling
technologies", opening up new opportunities rather than offering complete final solutions.
The idea that a technological solution can apply across multiple domains dates back to Smith (1776),
in the Wealth of Nations, and was further re-examined by Stigler (1951), who referred to such
solutions as "general specialities". The more recent literature has instead focused on the concept of
GPTs, which has captured the attention of many scholars, policymakers, and executives in recent decades.
In a 1995 paper, Bresnahan and Trajtenberg discussed the role of General Purpose
Technologies as an "engine of growth". They propose that:
“GPT are characterized by pervasiveness, inherent potential for
technical improvements and innovative complementarities.”
(Bresnahan and Trajtenberg, 1995)
In any historical period, it is possible to identify prevalent technologies, structured in a hierarchical
pattern (i.e. forming a sort of "technological tree"), which in the simplest case would consist of just
two levels: a handful of "basic" technologies at the top (perhaps just one), and a large number of
product classes or sectors at the bottom that make use of the former. Those at the top are characterized
first of all by their generality of purpose, that is, by performing some generic function that is vital to the
functioning of a large segment of existing or potential products and production systems.
A second distinctive characteristic of GPTs is their technological dynamism: continuous innovative
efforts, as well as learning, increase over time the efficiency with which the generic function is
performed. This may show up as reductions in the price/performance ratio of the products, systems
or components in which the GPT is embodied, or as multidimensional qualitative improvements in
them. As a consequence, the costs of the downstream sectors that use the GPT as an input are lowered,
they may be able to develop better products, and, moreover, further sectors will find it profitable to
adopt the improved GPT, thus expanding the range of applications.
Third, GPTs are characterized by the existence of innovative complementarities within every
application sector, in the sense that technical advances in the GPT make it more profitable for its
users to innovate, and vice versa.
Recognizing the GPTs' role as an "engine of growth", researchers have long been interested in the
benefits and impacts of these technological solutions, thus revealing their important role in creating
value at both the macroeconomic and the microeconomic level.
Summarizing the above, GPTs are characterised by three main features:
1. Pervasiveness: the GPT should spread to most sectors;
2. Improvement: the GPT should get better over time and, hence, should keep lowering the costs
of its users;
3. Innovation spawning: the GPT should make it easier to invent and produce new products or
processes.
Thus, as the GPT evolves and advances, it spreads throughout the economy, and in so doing it brings
about and fosters generalized productivity gains. Innovative complementarities entail the existence
of a non-convexity in the underlying technology (a vertical externality) that magnifies and helps
propagate the effect of innovation in the GPT. The sharing of the GPT among an increasing number
of application sectors represents a second, horizontal, externality.
For the reasons mentioned above, GPTs can be widely regarded as important driving forces of long-term
growth. However, the exogenous arrival of a new GPT does not immediately translate into
higher productivity; instead, the initial impact of a GPT on overall productivity growth is typically
minimal, or even negative, generating a slowdown phase. The realization of its eventual potential
may take several decades, such that the largest growth effects are long delayed (David, 1991).
Helpman and Trajtenberg (1998) argue that the slowdown is caused by an initial lack of
complementary inputs. One of the most relevant GPT features is that new GPTs generate economic
cycles due to the need to initially build “compatible components” before the new GPT can usefully
be applied for productive purposes. In practice, in order to make use of the new technology the
economy must devote substantial resources to the development of these inputs, resulting in a
slowdown. Finally, Helpman and Rangel (1999) argue that the initial slowdown is caused by the rapid
depreciation of skills that are specific to the older technology. A move to the new technology results
in the rapid loss of these skills, causing productivity to fall until workers learn the skills specific to
the new technology.
The process of technical advance along a given technological course will at some point run into steeply
diminishing returns; scientific breakthroughs will open up new technological opportunities, and hence
the dominant GPT of the era will eventually be superseded.
“Growth that is driven by General Purpose Technologies is different from
growth driven by incremental innovation. Unlike incremental innovation,
GPTs can trigger an uneven growth trajectory, which starts with a
prolonged slowdown followed by a fast acceleration.”
(Helpman, 2004)
Before Helpman, Bresnahan and Trajtenberg (1995) had also emphasized that GPTs are not pervasive
from the outset, but undergo an initial stage of improvement and diffusion.
Every era has been characterised by a different GPT. The list by Lipsey, Carlaw and Bekar (2005)
gives a reasonably concise overview of historical GPTs; however, since their book, more GPTs have
been added for the 21st century. It is possible to notice that, as the economy accelerates, the interval
between one GPT and the next also tends to shrink.
Domestication of plants 9000-8000 BC
Domestication of animals 8500-7500 BC
Smelting of ore 8000-7000 BC
Wheel 4000-3000 BC
Writing 3400-3200 BC
Bronze 2800 BC
Iron 1200 BC
Water wheel Early middle ages
Three-masted sailing ship 15th century
Printing 16th century
Factory system Late 18th century
Steam engine Late 18th century
Railways Mid 19th century
Iron steamship Mid 19th century
Internal combustion engine Late 19th century
Electricity Late 19th century
Automobile 20th century
Airplane 20th century
Mass production 20th century
Computer 20th century
Lean production 20th century
Internet 20th century
Biotechnology 20th century
Business virtualization 21st century
Nanotechnology 21st century
Artificial intelligence 21st century
Figure 4.7 – Historical GPTs
GPT creation is driven by a firm's ability to explore and recombine knowledge spanning
technological boundaries; this broad search strategy in fact allows the simultaneous development of
new problem-solving techniques and the pursuit of diverse objectives. Furthermore, it enables
organizations to avoid developing core rigidities and cognitive myopia toward more distant
domains, which tend to favour more specialized inventions.
Developing technologies that draw on diverse knowledge areas may contribute to the creation of GPTs,
since this enhances the inventions' technological generality.
R&D members are directly involved in the search process and are tasked with understanding, linking,
and applying organizational knowledge (Fleming, 2001). In particular, research has shown that the
organizations most likely to create inventions that impact diverse sectors are those able to
combine different knowledge areas, which in turn depends on the capability of their R&D members
to simultaneously use diverse types of knowledge. To this end, R&D members have been increasingly
organized in teams, which represent the focal units of the invention process (Perry-Smith and Shalley,
2014). In fact, teams provide organizations with more creative thoughts, a greater extent of knowledge
recombination opportunities, and a faster assessment and selection of those opportunities (Singh and
Fleming, 2010). However, teams are not all alike: their composition can change an
organization's learning and recombination opportunities, thus altering the impact of a wide search
breadth on the invention's technological generality.
Team characteristics that improve the creation of a GPT are:
1. Size: larger teams have higher levels of knowledge and network resources, absorptive capacity,
skills, and expertise, and better manage the cognitive and managerial complexities;
2. Diversity: it enables more opportunities for creativity and involvement in multicultural
experiences, which increase the chances of acquiring and implementing different knowledge
resources originating in diverse national domains;
3. Location: many organizations, such as IBM, ABB, Royal Dutch Shell, and Unilever, "are
increasingly relying on dispersed R&D teams in order to keep pace with resource
availability". Dispersed teams build up a learning network to explore and source knowledge
globally, and create a more diverse knowledge base.
Although there are considerable benefits from searching across several knowledge areas for GPT
development, at some point a wide search leads to diminishing or even negative returns, limiting the
creation of more generic solutions; an organization's cognitive capability to find and create useful
knowledge combinations drops drastically as the probability of working with unfamiliar knowledge
domains increases and reaches the limit of the required absorptive capacity.
Managers should be aware of the double-edged sword of a wide search in developing GPTs and should
balance their search efforts across a wide range of knowledge domains to avoid risking their
ability to gain returns from those efforts. Establishing larger and geographically dispersed teams to
develop novel technologies can reduce the problems associated with broad searches.
4.4.1. General Purpose Technology Licensing
The commercialization and diffusion of complex technologies like GPTs is difficult (Ardito, 2015),
as it is primarily limited by the effort required to adapt them for diverse industries (Gambardella and
Giarratana, 2013). Therefore, the recent literature describes commercialization strategies to launch and
diffuse GPTs in the market. In this context, many firms are currently innovating in what is called the
"Market for Technology", where firms sell rights to their intellectual property rather than
directly commercializing products and services based on their knowledge capital.
The dominant business model in the market for technology is based on the idea of developing a
technology for licensing to downstream specialists. This business model is becoming popular also
for commercializing GPTs, because the fact that they are constructed so that they can be employed
by many different potential downstream licensees makes licensing particularly profitable.
Historically, technology licensing has tended to occur across national boundaries, reflecting the
geographic limits of the licensing firm's market reach. Companies issued licenses in foreign countries
because they had no concrete a priori intention of entering those markets directly, finding it more
profitable to extract rents. However, the technology licensing wave of the 1980s and 1990s took on a
different character from this "norm", with firms selling property rights over their ideas to other
companies operating in the same geographic markets and industries.
Many of these new types of licenses were offered by small technology specialist suppliers to much
bigger operating companies that controlled the downstream assets needed for their large-scale
production and commercialization. Unfortunately, when the technology is dedicated to specific
applications, licensing limits the profitability of the innovator in two ways:
1. The rents to the innovating firm were constrained by the success of the downstream
manufacturer as a competitor in its own therapeutic category;
2. Few of these biotech entrepreneurs, who were generally small, inexperienced and specialized,
had sufficient bargaining power in negotiations with the downstream manufacturer.
In practice, licensing makes such technologies available to more firms and even if only one company
obtains a license, the competition to obtain it in the first place disseminates knowledge about the
technology. By virtue of GPTs’ pervasiveness, investments in their development result in spill-overs
to the other sectors of the economy, making it more difficult to control intellectual property rights.
As a result of this vulnerability, many technology-based firms have engaged in business-model
innovation by pursuing strategies in which they invest in technologies with more general
applicability. When a downstream company licenses a general technology, it must adapt it into an
application that is relevant to its customer set. This means that it needs to invest to create value, thus
committing itself to the partnership. Moreover, while the bargaining power of the licensee may continue
to squeeze the innovator's profits somewhat for each application, the innovating firms can increase
their overall profits by expanding the number of applications to which their technology can be applied.
The innovator focuses on maximizing the number of high-value applications that may involve its
technology, which it can affect by investing in skills, resources and capabilities that tie upstream
technology to insights about the needs of ultimate consumers across a broader front.
Besides technology features, there are also environmental conditions that make the ability to produce
GPTs valuable. In particular, when downstream product markets are fragmented into different
subniches, licensors can issue licenses to firms that operate in market niches in which they do
not compete directly. Of course, that scenario requires the licensor to develop GPTs that can support
distant applications. Industry homogeneity, by contrast, implies that the licensor may be reluctant to
sell its technology, because doing so would make the licensee a close product competitor. Homogeneous
markets produce the largest profit dissipation compared with more differentiated product markets, in
which the licensor can license to a firm that operates in a more distant product market.
Thus, the generality of the technology is not by itself sufficient to make licensing a profitable
business model; the market must also be fragmented for an innovation to be licensed successfully. The
design of GPTs offers a powerful way for firms to decrease risk when they target multiple emerging
markets and are confronted with high uncertainty.
Upstream innovators are now focusing on areas of scientific discovery in an effort to develop
patentable insights that can form the basis of technology licenses for commercialization in market
niches by downstream partners. This trend, however, faces two new types of challenges, which are
compelling yet a further wave of business-model innovation:
1. Proliferation of design and simulation techniques for generating general-purpose
technologies;
2. Absence of predictability as to whether this range of innovations can create what will go on
to be commercial opportunities for downstream licensees.
More than ever before, breakthrough products and services based on technological insights are being
commercialized via long and costly processes in which companies invest to understand
what the technology can be used for, whether the prospective applications are profitable, and how
they can be most effectively pursued. The business problems raised by these trends arise from the
distance between general-purpose scientific technologies and the techniques required to
understand how to put them into use effectively. Typically, the development of technology -
especially general-purpose technology - requires skills, assets and investments in engineering and
scientific disciplines and knowledge, in research, and the like. Understanding which product or
service might become commercially successful requires sociological and marketing insights,
experimentation with users, and the ability to match needs with technological solutions. Thus, the
capabilities required to be effective in commercialization differ quite significantly from those
required to develop new science.
These considerations point to the types of business-model innovations that technology producers
might make to remove ‘bottlenecks’ in the successful commercialization of applications.
Activities that are intended to support the transfer of technology discoveries from the laboratory to
commercial use include: industry liaison groups; supporting meetings involving industry and
government; establishing and supporting user facilities available to researchers from all sectors;
funding multidisciplinary research teams that include industry and university researchers;
encouraging the exchange of researchers between universities and industry; establishing centres
focused on technology research; and engaging with regional, state, and local initiatives.
Once introduced in the market, GPTs foster other technological innovations. Youtie (2008) suggested
that GPTs lead to the development of complementary technologies, and Maine and Garnsey (2006)
argued that GPTs require strong alliances with potential customers and partners to obtain
complementary assets and financing. From an innovation management viewpoint, Maine (2014)
encourages firms with potential GPTs to create collaborative environments such as interdisciplinary
groups and boundary spanners. Technology-market matching should be organized to identify the most
promising technology solutions for market applications, or the most appropriate markets for
technologies. To support industry improvement, Novelli (2010) notes that GPTs are often accompanied
by large, basic research programs.
Another important aspect that supports GPT development is the involvement of institutions. In
particular, government leadership and funding are often necessary to promote technology transfer
activities to private industry, because they shorten the time required to develop the
infrastructure and technologies industry needs to exploit innovations and discoveries. To encourage
organizations to focus on the most critical conditions for creating GPTs, policymakers should also
promote the creation of ad hoc platforms for exchanging knowledge.
As previously said, GPT commercialization requires a stable system of rules and standardized
instructions, and such elements constitute technological platforms. A platform is defined as a structure
that has a common core with interdependent modules. Consequently, GPT design can be interpreted
as the design of technological platforms. The design of common cores and interdependent modules
makes it possible to address several applications while limiting the high redesign costs across the
phases of a technology.
Due to high market uncertainty regarding GPTs, the commercial value of a contest is unknown a
priori, raising issues regarding the incentives to offer to solvers; since the final value is unknown, a
reward is difficult to determine. Non-monetary incentives are essential because they attract
unconventional participants. Markets are uncertain during the early stages of a GPT and, consequently,
generating and selecting ideas for specific markets is an excessively risky strategy. A winning principle
for choosing which application(s) to favour during a competition is relying on ideas that apply to multiple
markets. During ideation projects, ideas are evaluated by estimating the size of the markets and the
expected customer value; in the reported cases, the organizing committee asked participants to think outside the box of
their daily work and consider emerging technological opportunities that might fit several markets.
Solvers were not asked to focus on specific markets or technologies.
5. Research Methodology
5.1. Introduction
This section of the research is dedicated to the research methodology, and it answers three main
questions: What is the purpose of the analysis? How were the data collected or generated? And how
were they analysed?
The chapter’s structure starts from the problem setting and then goes through all the
steps followed in the research process. The outputs of this chapter are to outline the goals of this
dissertation and the questions that characterized this work, and to expose the path through which the
analysis has been conducted. A peculiar aspect of this research is that it did not follow a linear path,
as research usually does, but instead went through a cycle. The reason behind the circular path lies
in the very nature of the selected case study, which disclosed unsought elements that eventually
became the focus of the research.
The research steps are reported in the block diagram below:
Figure 5.2 – Methodological Framework
5.2. Reviewing the Literature
The literature review carried out during this research work was focused on a research question, and
it was conducted with the intent of trying to identify, appraise, select and synthesize all high-quality
research evidence and arguments relevant to the question.
In particular, it included the current knowledge and substantive findings, as well as theoretical and
methodological contributions to the technology innovation management. During this stage relevant
books, articles, monographs, dissertations, etc. were identified and read.
Considering the vastity of the literature materials available on the innovation management, it was not
necessary to include peer-reviewed journal articles presenting new research.
The process of reviewing the literature was ongoing throughout the whole research work and
informed many aspects of the empirical research regarding the case study. In fact, all of the latest
literature was taken into consideration to inform the research project, and the literature continued to be
scanned long after a formal literature review was completed, sometimes even adding new literature
fields.
As Cronin, Ryan and Coughlan (2008) note, the two most common types of literature review
are the systematic and the traditional, or narrative. Systematic reviews use a more rigorous and well-defined
approach to reviewing the literature. The purpose of a systematic review is to provide as complete a
list as possible of all the published and unpublished studies related to a subject area. Unlike the
systematic review, the narrative one critiques and summarizes a body of literature and then draws
conclusions about the topic in question. The body of literature is made up of the relevant studies and
knowledge that address the subject area. Usually, it is selective, and the criteria used to ‘pick
and choose’ the sources are not necessarily disclosed to the reader. Its main purpose is to
provide the reader with a comprehensive background for understanding current knowledge and
highlighting the significance of new research. In this research, a traditional
literature review approach has been adopted.
Searching the literature was made possible by major databases and search engines, by using
the electronic resources made available to Politecnico di Milano students (Scopus), and by
searching the web for trustworthy, prestigious business schools’ reviews (e.g. Harvard Business
Review) and companies’ flagship publications (e.g. McKinsey Quarterly).
Shields and Rangarajan (2013) and Granello (2001) link the activities of doing a literature review
with Benjamin Bloom’s revised taxonomy of the cognitive domain.
The first category in Bloom's taxonomy is remembering. During the literature review it was very
important to recognise, retrieve and recollect the relevant theories and statements. This was the stage
during which relevant books, articles, monographs, dissertations, etc. were identified and read.
Bloom’s second category, understanding, represents the comprehension of the material collected and
read. It was challenging because the literature introduced new terminology, conceptual frameworks
and methodologies never studied before.
In Bloom’s third category, applying, it was necessary to make connections between the literature and
the research project, in particular with the chosen case study. This is always particularly important
when the literature review is to become a chapter in a future empirical study. It was during this phase
that it became clear that, to completely understand the essence of the case study, it would be
inevitable to enlarge the breadth of the literature taken into consideration. During the fourth category
in Bloom's taxonomy, analysing, it was possible to separate materials belonging to different literature
streams into parts and figure out how the parts fit together. The analysis of the literature made it possible to
develop frameworks for analysis, to see the big picture, and to understand how details from the
literature fit within it.
Bloom’s fifth category, evaluating, was the one that drove the identification of a meaningful
research question. It consisted of identifying the strengths and weaknesses of
the theories, arguments, methodologies and findings of the literature that had been collected and read.
Once the big picture of the existing literature was set, it was possible to find blank spaces worth further
investigation.
Finally, engaging in creating, the final category in Bloom's taxonomy, meant bringing creativity to the
process of doing the literature review; in other words, actually identifying an original research
question. This was done by drawing new insights from the literature, such as unexplored gaps or
surprising connections.
The graphic below presents the different literature streams that were investigated during the
research. Each stream has its own specific topics as well as topics that link it to the other streams. The
intricate intertwining of the different streams constitutes the background knowledge of the research
work.
The latest streams added to the body of literature studied were General Purpose Technologies and
licensing. Within technology management, technology selection has been only partially
considered.
5.3. GAP and Research Question
The first step of this research was to acknowledge that, even though the body of innovation
management literature has grown considerably over the last 35 years, the recent literature concerning
technology epiphanies has opened up many new, interesting and unexplored perspectives worth
investigating. In particular, what emerged from some of the most famous literature case studies is
that the potential embedded within a certain technology often far exceeds its original purpose.
Practitioners have outlined that, when new technologies are developed, many different opportunities
are offered to the innovator company. However, it has been shown that, in the majority of situations,
companies behave myopically. They usually develop new technologies with the aim of substituting
their old ones and fail to completely seize the impact those technologies might have in other contexts or industries
(Verganti, 2009). This behaviour is often due to risk aversion and innovation inertia, but it leads to
negative outcomes, of both a direct and an indirect nature. From the point of
view of the innovator, failing to exploit the whole technology potential leads to a high opportunity
cost, because of the lost chance of making revenues from other applications (direct consequence).
From the point of view of the market, the outcome is equally negative, as the unseen opportunity
determines a slowdown in the innovation pace (indirect consequence).

Figure 5.3 – Literature Background

The technology epiphany theory tackles this important question through the concept of innovation of meaning. Technology
epiphanies were defined by Verganti, in 2009, as the unveiling of quiescent meanings hidden in the
technology. The theory pictures design in a new light: it is an innovation enabler, a way through
which companies can unveil the true potential of a technology by innovating meanings.
The theory, which could give an important push to the innovation pace in many different markets, still
lacks an in-depth analysis of the economic, managerial and contextual
elements underpinning its application, and of the methodologies that foster this type of innovation (Buganza,
2015). In particular, the open question at the beginning of this research was: how can companies
understand in which fields their technology might be adopted, and how can they steer it in order
to develop meaningful applications?
This was certainly an interesting, unmapped area of research; however, as already stated, thanks to
the selected case study, new aspects arose, and it was possible to include in the analysis a completely
different literature stream: General Purpose Technologies (or GPTs) and their management. General
Purpose Technologies are a particular set of technologies that can affect the entire economy, having
an impact on both markets and social structures. Due to their versatile nature, they are suited to almost
any existing field. This characteristic was first named “technology pervasiveness”
by Bresnahan and Trajtenberg in 1996. Another characteristic they highlighted is so-called
“innovation spawning”: thanks to a General Purpose Technology, it should be easier
to invent new products and processes. In fact, it is common in the literature to consider the GPT as the
prime mover from which all the innovation complementarities stem.
From the commercialization point of view, the “market for technologies” stream is the main reference,
and the literature presents many licensing cases. Licensing is in fact a winning business model
under the following conditions:
1. Market fragmentation: spill-overs in non-homogeneous markets are less likely to decrease the
innovation’s appropriability;
2. Technology generality of purpose: the technology should support a broad range of
applications to be profitably licensed.
In studying this kind of technology, it is not useful to ask in which fields the technology might be
successfully applied, because the obvious answer would be everywhere. Still, the commercialization
aspect remains tricky. Does licensing assure the innovator an appropriate revenue stream compared
with the effort needed to develop the technology? What incentives does a downstream company have
to license a certain technology?
These latter considerations have been the guiding aspects of the main body of the research. There is
just one more fundamental element that drove the research, which concerns the essence of the technology.
While the technology epiphany literature stream has not considered the situation where a General
Purpose Technology needs to be integrated into applications, the General Purpose Technology literature stream
has not, until now, considered the case of digital technologies. The digital revolution that has been going
on since the advent of digital technologies in the last century is disrupting many sectors. The digital
world has literally changed every aspect of the way any business operates, and never before in history
has that change occurred so fast. Being digital requires being open to re-examining the entire way a
company does business and understanding where the new frontiers of value are. Because of their
far-reaching effects, digital technologies require rethinking all the business models that are usually
applied for identifying, developing, and commercialising products, services or even technologies.
In an effort to orient among these separate but intertwined literature streams, the final aim of this
research work is to answer the following question:
“How can companies steer a General Purpose Technology to integrate it into meaningful
application fields?”
Through the study of the Watson case, the research has the objective of discovering useful managerial
practices that can help companies fully exploit the potential of an innovative technology. The case
should draw the guidelines for companies that want to pursue a technology steering strategy with
a General Purpose Technology. Even though GPTs, by definition, allow the generation of many applications,
it can be challenging for a company to understand what these applications are, which are
the most meaningful, in which order it should invest to develop them, and to what extent it
would benefit from involving external players in the steering process. Decoding these aspects,
and other related topics, will answer the research question.
5.4. Case Study Selection
One of the most prominent advocates of case study research, Robert Yin (2009) defines it as “an
empirical enquiry that investigates a contemporary phenomenon in depth and within its real-life
context, especially when the boundaries between phenomenon and context are not clearly evident”.
This research is largely exploratory and descriptive; the main criteria applied for the selection of
relevant case studies included the availability of information.
Even though case selection is the primordial task of any empirical research, its importance well
known and its complexities evident, the question of case selection has received relatively little
attention from scholars since the pioneering work of Eckstein (1975), Lijphart (1971, 1975), and
Przeworski and Teune (1970). In the absence of detailed, formal treatments, scholars continue to lean
primarily on pragmatic considerations such as time, money, expertise, and access.
Consider that most case studies seek to elucidate the features of a broader population (Gerring, 2004).
Multiple cases are suggested to increase the methodological rigor of the study
by "strengthening the precision, the validity and stability of the findings" (Miles and Huberman,
1994), particularly because "evidence from multiple cases is often considered more compelling" (Yin,
1994). However, single case research is methodologically viable in the study of extreme,
critical or very extensive cases; in fact, single case research is
known for its descriptive power and attention to context.
Methodological guidelines for case selection differ between single and multiple case designs.
The selection strategies for single and multiple case designs (Yin, 1994) are shown below:
Figure 5.4 – Case Selection Strategies (Yin, 1994)
Given that the original purpose of this work was to contribute to the innovation literature, with the aim of
enlarging the present knowledge base related to technology development and integration, the nature
of the research appeared to be exploratory.
At the beginning, the best research method seemed to be the analysis of many case studies, in order to
find a common denominator among the strategies different companies adopted in identifying the
possible applications they could develop with their technology.
In starting to look for suitable cases for this kind of research, the criterion adopted was to identify
technologies whose development had led to the commercialisation of more than one application
in unrelated markets. However, among the different cases that came up during the first screening,
there was one, IBM Watson, that was particularly interesting and immediately appeared to carry
greater weight. After a deeper analysis of this case, it was clear that, because of the
vastness of the topics it included, the availability of information, articles and reports, and its
uniqueness, it was sufficient for a single-case research design. It is in fact widely accepted that the
number of cases can be determined in a trade-off between the breadth and depth of the case study
inquiry: in-depth information is required for a small number of cases, while less depth is needed as the
number of cases increases. Furthermore, a single case study is deemed suitable because the
proposed research addresses a contemporary, largely unpredictable phenomenon.
Of course, the goal of every author is to write research that readers find convincing. Unfortunately,
case studies are usually perceived by the audience as very specific cases with a low probability
of occurrence, unlikely to be generalizable. However, as Nicolaj Siggelkow argues in his 2007 paper
“Persuasion with Case Studies”, even a single case can be a powerful example. In particular, he argues
that the accusation of biased case selection is sometimes unfounded, as a case might be chosen precisely
because of some unique aspect it represents. It is true that a randomly chosen case would not have
led to the same conclusions; however, choosing a special case is sometimes the best option to gain
certain insights. The price to pay when studying a special organisation is that researchers should be
careful with the conclusions they draw. To be valuable, a case should present insights from which
inferences about more normal firms can be drawn; otherwise its interest is very limited (Siggelkow,
2007). Siggelkow continues by noting that cases can help sharpen existing theories by pointing to gaps
and beginning to fill them, which is the purpose of this research.
In accordance with Patton’s (1990) operational definition of case studies, the selected case can be
classified as a mix of an extreme case (unusual manifestation of the phenomenon) and an intensity case
(information rich). In addition to the intrinsic value of the case identified and its unusual availability
of information, the possibility emerged of leveraging a personal network in order to reach some
of the company’s employees and obtain fresh, undisclosed and focused materials to deepen the analysis.
Given the dangers of selection bias introduced whenever a case study is chosen in a purposive fashion,
it is fundamental to then carry out the analysis of the case with an open-minded approach, which
makes it possible to uncover unexpected turns and features. Thus, even if cases are initially chosen for
pragmatic reasons, it is essential that researchers understand retroactively how the properties of the
selected cases comport with the rest of the population.
Below is a list of the case study characteristics that made it very suitable for this research work:
1. Big company, relevant for the economy, impactful;
2. Ongoing project;
3. New technology, drawing much interest in a high number of sectors, both B2B and B2C, all
around the world;
4. Technology still not entirely commercialized;
5. High availability of official and unofficial information;
6. Possibility of accessing primary sources through interviews with employees.
5.5. Data Gathering
Posing, and answering, a good research question presupposes a certain amount of knowledge about
the topic. During the first phases of the research, it can be useful to look for general sources such as
subject-area encyclopaedias and dictionaries, in order to become familiar with the identified topics and
avoid biases. Once the basic knowledge has been acquired, the research becomes more focused and,
according to the type and depth of the research, it is possible to choose primary or secondary
sources, or a mix of the two.
A primary source provides direct or first-hand evidence about an event, object, person, or work of
art. Primary sources provide the original materials on which other research is based and enable
researchers to get as close as possible to what actually happened during a particular event or time
period. Published materials can be viewed as primary resources if they come from the time period
that is being discussed, and were written or produced by someone with first-hand experience of the
event. Often primary sources reflect the individual viewpoint of a participant or observer.
Secondary sources describe, discuss, interpret, comment upon, analyse, evaluate, summarize, and
process primary sources. A secondary source is generally one or more steps removed from the event
or time period and is written or produced after the fact, with the benefit of hindsight. Secondary
sources often lack the freshness and immediacy of the original material. On occasion, secondary
sources collect, organize, and repackage primary source information to increase usability and
speed of delivery, as an online encyclopaedia does.
Both source typologies have been used for this research work; however, secondary sources
constitute the larger portion.
Another important aspect while gathering data for a research work is source credibility. When a
writer uses a book or published article as a source in a research paper, there are not many questions
to ask about the credibility of that source: many editors have gone through the evaluation process
before publication. Using books and library databases as the first line of research options is a good
strategy. Unfortunately, not all the needed information is available on library shelves, particularly
when the research deals with contemporary, ongoing case studies. In these cases, searching
the web is an inevitable source of knowledge. The web, however, is different from the
above-mentioned data sources: anyone can put any information on the web, and sometimes information looks
more credible at first glance than it proves on closer inspection. This is especially true of sources with no
author or organizational affiliation. For these reasons, it is necessary to pay extra attention when
dealing with web-based information. Sources like Wikipedia, as well as Google, Bing, Yahoo and
other public search engines, can be helpful in guiding the research while gathering ideas about
a subject, but it is important to remember that the information is unsubstantiated and, in some cases,
inaccurate. A smart way to use public search engines is to go through the sources cited on the page of
interest; from those it is possible to reach credible sources and validate, contextualise or deepen
the information available on the page.
The choice of data collection method is influenced by the strategy, the type of variable,
the accuracy required, the collection point and the researchers’ skills. There are six main techniques
for collecting data during a research work:
1. Interviews: interviews can be conducted in person or over the telephone, they can be done
formally (structured), semi-structured, or informally. Questions should be focused, clear, and
encourage open-ended responses. Usually, interviews are mainly qualitative in nature;
2. Questionnaires and surveys: responses can be analysed with quantitative methods by
assigning numerical values, and results are generally easier to analyse than those of qualitative
techniques;
3. Observations: allow for the study of the dynamics of a situation, frequency counts of target
behaviours, or other behaviours as indicated by the needs of the evaluation. They are a good source of
additional information about a particular group and can use video to provide precise
documentation. They can produce qualitative data (e.g., narrative data) and quantitative data (e.g.,
frequency counts, mean length of interactions, and instructional time);
4. Focus groups: a facilitated group interview with individuals that have something in common.
They gather information about combined perspectives and opinions, and the responses are
often coded into categories and analysed thematically;
5. Ethnographies/case studies: involve studying a single phenomenon, examine people in their
natural settings, and use a combination of techniques such as observation, interviews, and
surveys. Ethnography is a more holistic approach to evaluation, and the researcher can become a
confounding variable;
6. Documents and records: consist of examining existing data in the form of databases, meeting
minutes, reports, attendance logs, financial records, newsletters, etc. This can be an
inexpensive way to gather information, but may be an incomplete data source.
The majority of the data used in this research comes from secondary sources, mainly gathered online:
almost 300 websites have been consulted. The chart below details the source typology:
Figure 5.5 – Sources for Watson Chronicle Evolution
Although data gathering for this paper was performed mainly through case studies and
documents (secondary sources), a few interviews (primary sources) were also conducted. In
particular, four interviews with similar structures were performed, each with different
aims and contributions to the research output. In fact, when planning to carry out interviews as part
of a research project, the first things to consider are who will be interviewed, what kind of information
it is important to obtain, and the type of interview that will best serve the objectives.
Below is the list of the four interviewees, in chronological order:
1. Naiara Altuna: Digital Strategy Consultant & Project Manager at IBM; formerly lecturer and
teaching assistant at Politecnico di Milano. This first interview was the least formal, due
to the previous relationship between the interviewee and Politecnico di Milano, and it had
a mainly explorative aim. The objective was to get insights that could give a direction to the
research;
2. Nicola Palazzo: Financial Services & Watson Leader Italia. This was a preliminary
interview, and none of the topics were addressed with specific questions. The objective was not
to get in-depth knowledge of a single aspect of the case study but, instead, to get
the big picture of the company’s strategy and its main milestones;
3. Roberto Villa: Manager of the Human Centric practice and Research Ecosystem at IBM. Through
this interview, it was possible to dig into one of the most interesting aspects of the case study.
Through this in-depth analysis, it was possible to extract many insights concerning the
operative aspects and dynamics of both internal and external actors. It was also possible to
identify some of the case study’s most important enabling factors. Thanks to this interview,
the research found new valuable paths to follow and delve into;
4. Philipp Gerbert: Senior Partner leading Digital Strategy at BCG, with a specific focus on
the impact of artificial intelligence on business and expertise in innovation-driven transformation.
This interview had a more general character, detaching from the case study analysis and
focusing on the business and managerial challenges companies have to face when dealing with
artificial intelligence.
The duration of the interviews varied from around 15 minutes (the shortest) to almost one hour (the
longest).
Interview structures can be classified into three main typologies: unstructured, semi-structured and
structured. The typology adopted during this research was mainly semi-structured, which
presents the following characteristics: the interviewer has a list of questions or key points to be
covered and works through them in a methodical manner; similar questions are asked of each
interviewee, although supplementary questions can be asked as appropriate; and the interviewee can
respond however they like and does not have to 'tick a box' with their answer.
During the interviews carried out, however, even though there was a list of questions to help guide
the conversation, all the interviewees were left free to speak and, starting from a certain topic, they
could make relevant connections to other topics, sometimes resulting in additional questions. The
attitude was mainly friendly and informal.
Regarding the locations of the interviews, the first one, which, as already said, was informal, was
conducted as a Skype call. There was the need to obtain some information in the short term in order
to identify an interesting research direction and to start looking for the right materials. Given the
informality and the urgency of the interview, a Skype call was the best option. The other two
interviews with IBMers were carried out in person, in the IBM offices in Milan. The interviews were
private, held in meeting rooms or personal offices, and few people were present. The last interview
was also informal, but for a different reason compared with the interview with Naiara Altuna. It took
place during a TED conference held in Milan, in which the interviewee was one of the speakers.
Taking advantage of one of the many coffee breaks, it was possible to approach him and ask a few
questions regarding the topic of interest for this research work. Given the time constraints and the
presence of other people listening to the conversation, it was not possible to ask precise, case-related
questions; however, at the time of the interview, the research body was already well defined, and it
was easy to identify the relevant questions to ask.
Due to the nature of the sources adopted in the analysis of the case study, the results might be partial.
Therefore, certain figures may not correspond entirely with those displayed in other reports.
5.6. Data Analysis
Data analysis is the process of systematically applying statistical and/or logical techniques to describe
and illustrate, condense and recap, and evaluate data. Data analysis has multiple facets and
approaches, encompassing diverse techniques under a variety of names, in different business, science,
and social science domains. Once the data had been collected through the above-mentioned sources,
it was necessary to organise them within a framework from which information could easily be
extracted. The type of analysis that can be carried out depends on the data set available for the
analysis itself.
The first thing to do before starting the analysis is to acquire some sort of "data literacy", which is
the ability to consume data for knowledge, produce it coherently and think critically about it. To do
that, it is necessary to put data into tidy and organised frameworks which support reasoning.
In order to classify the data collected during the research, a data-cleaning step may be necessary.
When a well-structured database is not available, data are often messy, disorganised, uncategorised,
jumbled and knotted; they may be incomplete, contain duplicates, or contain errors. Messy data may
hide harmful information. It is fundamental to make sure that all the information contained in the
data sets is clearly named, describable and recognisable because, otherwise, it might lead to improper
assumptions about the risks that the data might pose. Data cleaning is the process of preventing and
correcting these errors.
The graphic below presents the framework used for data analysis in this research:

Figure 5.5 – Data Analysis Framework for This Research

1. Data collection: as already said, the data used in this research work did not come from any
company or public database. All the data were individually collected from different web-based,
verified sources. A data point was considered usable when it was set in a clearly stated context,
where the actors were easily identifiable and the time period to which the data referred was
specified. During the data collection phase, data were simply organised in a chronological manner
using a Word document, in order to be able to add new data to the set in a fast, untroubled and
coherent way;
2. Data merge: during this phase, data from multiple differently formatted sources were converted
and merged into a common database, filling pre-established categories;
3. Data transformation: data are combined, separated or modified to ensure that the same type of
data exists in each category. It is smart to adopt common standards and file formats for the
data, so that they are more easily shareable, more resilient (future-proof) and also
comparable with other data sets. A bias is the tendency of results to favour a certain outcome
due to the implicit construction or logic of the collection or processing of the data, that is,
the way the data were collected. All the data collected during a research project contain a certain
amount of bias; for this reason, it is necessary to analyse what those biases might be, minimise
them to the extent possible, identify the ones that cannot be removed, and make sure that
persistent biases are well known and explicitly flagged throughout the research;
4. Data selection: very often a data set contains outliers, data points so different from all
the others that they skew the results. These can be anomalies or just errors in data entry.
In this research, as the data are derived from a sample population, there might be cases that
are so far off the charts that it is not possible to generalise the results. Removing
such data actually makes the remaining data more meaningful (and less noisy), and provides
a more concrete and realistic data set;
5. Data set identification and enrichment: once the data are cleaned up, it is possible to identify
only the data relevant for the study. Frequently, during this phase, it becomes apparent that
some data are missing. Whenever possible, it is important to try to complete the identified
data set, rebuilding the missing data or looking for them through sources other than the
ones already investigated. During this research it was very useful to look at an organised data
set to identify new opportunities to deepen the research. It is sometimes easier to find specific
data during web-based research due to the dynamics of search engines (the more precise the
request, the more coherent the results);
6. Report: adequate reporting of research involves careful consideration of the logic, rationale,
and actions underlying the study. Depending on the particular approach to quantitative
research, distinct categorisation, ordering, and summarisation or expansion of detail will be
required in order to answer the research questions, support conclusions, make studies
amenable to critical evaluation, and contribute otherwise to the accumulation of credible
knowledge about the topic. The data were plotted in tables, timelines and grids.
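The transformation and selection steps of the framework can be sketched in code. The snippet below is a minimal illustration, assuming numeric data points and Tukey's interquartile-range rule for spotting outliers; the sample values are made up and do not come from the research data set:

```python
# Minimal sketch of the data transformation and selection steps: detect and
# drop outliers with Tukey's fences (interquartile-range rule). The sample
# values below are illustrative, not data from the actual research.

def quartiles(values):
    """Return (Q1, Q3) of the values, using the simple half-split method."""
    s = sorted(values)
    n = len(s)

    def median(seq):
        m = len(seq)
        mid = m // 2
        return seq[mid] if m % 2 else (seq[mid - 1] + seq[mid]) / 2

    return median(s[: n // 2]), median(s[(n + 1) // 2:])

def remove_outliers(values, k=1.5):
    """Drop points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = quartiles(values)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

data = [12, 14, 13, 15, 14, 13, 250, 12]   # 250 is an obvious entry error
clean = remove_outliers(data)
print(clean)  # → [12, 14, 13, 15, 14, 13, 12]
```

As the framework notes, removing such points makes the remaining data less noisy; persistent anomalies that cannot be removed should instead be flagged explicitly.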
6. Empirical Setting
Watson Overview: The Concept and the Functionalities
6.1. Introduction
The body of this research is articulated around a case study which, through its mechanisms, can help
unveil the intricate dynamics and rules that companies nowadays have to follow while launching a
new, disruptive General Purpose Technology.
As anticipated earlier in the paper, the chosen case study is IBM Watson. This case has many
peculiarities that made it the perfect candidate given the research purpose and, actually, it even
enlarged the initial purpose itself thanks to its unexpected features and tendencies. The analysis and
the data collection have been ongoing for the whole duration of the research, because IBM Watson
is continuously growing, updating, expanding, changing and improving. The fact that the case is
developing at the time of the research makes it extremely contemporary and also a little
unpredictable. What balanced the unpredictability and contingency of the case is the reliability and
solidity of the company that is orchestrating the innovation process: IBM. As Nicola Palazzo stated
during the interview, IBM's intellectual capital portfolio is immense. IBM holds more patents than
any other company in the world and each year extends its lead.
Below is a brief presentation of IBM, even though, given its widespread reputation, it is hardly
needed.
IBM, which stands for International Business Machines Corporation, is an American multinational
technology company headquartered in Armonk, New York, United States, with operations in over
170 countries. The company originated in 1911.
Nicknamed Big Blue, IBM is one of 30 companies included in the Dow Jones Industrial Average and
one of the world's largest employers, with (as of 2016) nearly 380,000 employees. Known as
"IBMers", IBM employees have been awarded five Nobel Prizes, six Turing Awards, ten National
Medals of Technology and five National Medals of Science.
IBM manufactures and markets computer hardware, middleware and software, and offers hosting and
consulting services in areas ranging from mainframe computers to nanotechnology. IBM is also a
major research organization, holding the record for most patents generated by a business (as of 2016)
for 23 consecutive years.
6.2. IBM Watson Concept
IBM Watson is an efficient analytical engine that pulls
many sources of data together in real time, discovers
insights, and assigns a degree of confidence to them.
Nicola Palazzo describes it as a technology platform that
understands all forms of data, and reasons and learns at
scale. It uses natural language processing and machine
learning to reveal insights from large amounts of
unstructured data. These characteristics are broadly
identified with the name of cognitive computing and IBM
Watson is considered to be the enabling technology to build
a cognitive business.
“The idea is to make Watson the operating system of the cognitive era,” says Mike Rhodin, senior
vice president of IBM’s software solutions group. Computing systems of the past have operated in
predictable environments, using structured and uniform data to perform prescribed operations. These
computing systems are not able to cope in the current context, where the amounts of variables and
data to consider are huge and where the behaviour of such variables is no longer easily predictable,
or sometimes even traceable.
The new cognitive computing systems are much more adaptable and, similarly to a human brain, they
can understand, reason and learn. These systems can read a vast amount of unstructured data and use
it to spot connections and patterns in new ways, offering insight into all kinds of human
expressions. Nicola Palazzo, during the interview, used a powerful example to explain the
unprecedented capabilities of Watson: for a doctor to recognise the symptoms of a certain disease,
she has to have read about it or studied it. Unfortunately, not all diseases are common or well
documented, and doctors cannot always stay updated with every newly published paper about rare
diseases, new treatments and so on. Luckily, Watson, with its powerful elaboration capabilities, can,
and it can support doctors throughout the whole diagnosis and prescription phases. Nicola said:
“Watson has the ability to quickly go through a large amount of unstructured information, of whatever
type it is, and mine knowledge from it, identifying models and correlations among the sources. It
is capable of processing 4 billion pages per minute.”
Cognitive systems are built on several underlying technologies, among which the most important are
natural language processing and machine learning.
Figure 6.1 – Watson Logo
Natural language processing starts with understanding: the system needs to understand human
language in context. It pulls information from articles, research papers, emails, tweets and even
images and sounds, to identify the significant grammar, context and vocabulary that carry core
meanings. Machine learning is a technique in which a machine spots connections between a particular
pattern and the most likely outcomes. The system receives feedback from regular use, so it learns
from every interaction. Every prediction it makes, whether right or wrong, is taken into account for
the next prediction, until it can reliably spot the meaningful patterns through the noise.
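The learn-from-every-interaction loop described above can be sketched as a toy online learner. The snippet below is an illustrative simplification (a perceptron on made-up two-dimensional points), not Watson's actual algorithm:

```python
# Toy online learner: every prediction, right or wrong, feeds back into the
# model, mirroring the learn-from-every-interaction loop described above.
# This is an illustrative simplification, not Watson's actual algorithm.

def train_online(examples, lr=0.1, epochs=20):
    """Perceptron-style weights, updated one interaction at a time."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:            # label is +1 or -1
            score = w[0] * x[0] + w[1] * x[1] + b
            pred = 1 if score >= 0 else -1
            if pred != label:                # feedback: adjust on mistakes
                w[0] += lr * label * x[0]
                w[1] += lr * label * x[1]
                b += lr * label
    return w, b

examples = [((1.0, 1.0), 1), ((2.0, 1.5), 1),
            ((-1.0, -1.0), -1), ((-2.0, -0.5), -1)]
w, b = train_online(examples)
score = w[0] * 1.5 + w[1] * 1.0 + b          # a point near the positive cluster
print(1 if score >= 0 else -1)  # → 1
```

Each wrong prediction nudges the weights, so over repeated interactions the learner separates the two pattern clusters reliably.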
With these technologies, a cognitive system does not offer one definitive answer; instead, it is
designed to weigh the information and ideas from different sources, to reason and then to offer its
conclusion. This makes it more than just a tool: a trusted advisor.
The image below helps to quickly contextualise cognitive systems, considering that they are the
result of an evolution over time of different technologies that, considered as a whole, are now the
closest application to proper artificial intelligence. Since an early flush of optimism in the 1950s,
small subsets of artificial intelligence, first machine learning, then deep learning, have created ever
larger disruption.
Figure 6.6 – Technology Evolution (Ortega, 2017)

In the same way as artificial intelligence is evolving at a macro-level, Watson, to become what it is
today, had to go through a series of intermediate steps. Looking at IBM's history, Watson is not
the company's first attempt to develop an artificial intelligence. On May 11, 1997, an IBM computer
called Deep Blue defeated the reigning world chess champion, Garry Kasparov, capturing the
attention and imagination of the world. The six-game match lasted several days and ended with two
wins for IBM, one for the champion and three draws. Over the last 20 years, IBM has worked to
advance the field of AI. Deep Blue used algorithms to explore up to 200 million possible chess
positions per second, then chose the move with the highest likelihood of success. While Deep Blue
did use machine learning approaches, it relied primarily on a programmed understanding of the game
of chess – 64 squares, 32 pieces and well-defined moves and goals. Fourteen years after Deep Blue’s
win, IBM applied AI to the more dynamic real-world challenge of Jeopardy!. This represented a
huge step forward from playing chess, given that the quiz show could cover questions on just about
anything. IBM Watson incorporated facets of AI, machine learning, deep question answering and
natural language processing to play and, ultimately, best the game's greatest human champions.
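Deep Blue's style of search, exploring possible positions and choosing the move with the highest likelihood of success, is classically implemented as minimax with alpha-beta pruning. The sketch below shows the idea on an abstract game tree of pre-computed leaf evaluations; real engines generate moves and evaluate positions, so this is a conceptual illustration rather than Deep Blue's actual code:

```python
# Sketch of minimax with alpha-beta pruning, the classic game-tree search
# behind chess engines like Deep Blue. Here a "node" is either a numeric
# leaf evaluation or a list of child nodes; real engines generate moves.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):       # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:                # prune: opponent avoids this line
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Maximizer to move; each sublist is the minimizer's set of replies.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, True))  # → 6
```

Pruning lets the search skip lines the opponent would never allow, which is part of how Deep Blue could usefully explore up to 200 million positions per second.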
Stephen L. Baker, an American journalist, tells in his book Final Jeopardy: Man vs. Machine and
the Quest to Know Everything (2011) how the idea arose and how it made its way to the IBM
laboratories:
“Since Deep Blue’s victory over Garry Kasparov in chess in 1997, IBM had been on the hunt for a
new challenge. In 2004, IBM Research manager Charles Lickel, over dinner with co-workers, noticed
that the restaurant they were in had fallen silent. He soon discovered the cause of this evening hiatus:
Ken Jennings, who was then in the middle of his successful 74-game run on Jeopardy!. Nearly the
entire restaurant had piled toward the televisions, mid-meal, to watch the phenomenon. Intrigued by
the quiz show as a possible challenge for IBM, Lickel passed the idea on, and in 2005, IBM Research
executive Paul Horn backed Lickel up, pushing for someone in his department to take up the challenge
of playing Jeopardy! with an IBM system. Though he initially had trouble finding any research staff
willing to take on what looked to be a much more complex challenge than the wordless game of chess,
eventually David Ferrucci took him up on the offer.”
It took three years of intense research and development, by a core team of about 20 researchers, to
implement IBM Watson. In the early stages of the research, at least until 2006, the team's efforts
failed to produce promising results and, consequently, to have a significant impact on Jeopardy!.
To make progress, the research team ended up overhauling nearly all of its dynamics,
including both internal and external activities.
From the team's perspective, the basic technical approach, the underlying architecture, metrics,
evaluation protocols, engineering practices, and even how they worked together as a team were
deeply reshaped.
In particular, by the end of 2007 the team had gradually made a set of promising choices: they
adopted the DeepQA architecture, and they had all moved out of their private offices into a so-called
"war room" setting to dramatically facilitate team communication and tight collaboration. They
instituted a host of disciplined engineering and experimental methodologies, supported by metrics
and tools, to ensure they were investing in techniques that promised significant impact on end-to-end
metrics. Since then, the performance progress has been incremental but steady.
On the external side, they started a cooperation with CMU (Carnegie Mellon University) and began
the Open Advancement of Question Answering (OAQA) initiative. By the end of 2008 the system
was performing reasonably well, about 70 percent precision at 70 percent attempted on the
12,000-question blind data, but it was taking 2 hours to answer a single question on a single CPU.
Over the course of the project the team continued to conduct empirical studies designed to balance
speed, recall, and precision. A variety of search techniques were used, and many algorithms were
developed. The impact of any one algorithm on end-to-end performance changed over time as other
techniques were added and had overlapping effects. The team's commitment to regularly evaluate the
effects of specific techniques on end-to-end performance, and to let that shape the research
investment, was necessary for obtaining rapid progress. Rapid experimentation was another critical
ingredient of success. The team conducted more than 5,500 independent experiments in 3 years,
each averaging about 2,000 CPU hours and generating more than 10 GB of error-analysis data.
To achieve those tremendous results, the team leveraged the collaboration with CMU and with
other university partnerships, whose role was fundamental during the entire technology development.
Finally, in February 2011, the supercomputer Watson came away victorious on Jeopardy!, winning
with a commanding lead of $77,147 after three days of play.
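The "precision at percent attempted" figure quoted above trades accuracy against coverage: the system answers only when its confidence clears a threshold, and precision is measured over those attempted questions. A minimal sketch of how such a point is computed follows; the confidence and correctness values are made up for illustration:

```python
# Sketch of the "precision at N% attempted" metric: answer only when
# confidence clears a threshold, then measure precision over the attempted
# questions. The confidence/correctness pairs below are made up.

def precision_at_attempted(results, threshold):
    """results: list of (confidence, is_correct) pairs, one per question."""
    attempted = [(c, ok) for c, ok in results if c >= threshold]
    if not attempted:
        return 0.0, 0.0
    precision = sum(ok for _, ok in attempted) / len(attempted)
    attempted_rate = len(attempted) / len(results)
    return precision, attempted_rate

results = [
    (0.95, True), (0.90, True), (0.85, False), (0.80, True), (0.75, True),
    (0.60, False), (0.55, True), (0.40, False), (0.30, False), (0.20, False),
]
precision, rate = precision_at_attempted(results, threshold=0.70)
print(precision, rate)  # → 0.8 0.5
```

Raising the threshold increases precision but lowers the attempted rate, which is exactly the balance the DeepQA team tuned over thousands of experiments.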
At the beginning of the research, Watson was still a small project and thoughts of commercialisation
were not uppermost in anyone's mind: the Grand Challenge, as IBMers used to call it, was a
demonstration project, whose return for the company was more in the buzz it created than in a
contribution to the bottom line. Commercialisation happened, in some way, unexpectedly, as Nicola
Palazzo stated during the interview.
The IBM Watson concept was born in the Thomas J. Watson Research Center. The Center was
established in 1961, and it is the headquarters for IBM Research. This center has been the location of
some of the most notable technological and scientific business breakthroughs of the 20th and 21st
centuries. Among all the IBMers who took part in the technology development, the commitment, the
trust and the complete dedication of eight people played a fundamental role in the process of giving
birth to Watson and bringing it from the company's laboratories to the world's stage. All eight hold
a high-level position within the company and have the power to influence and engage a large
number of employees. Below is the list of the eight IBMers who bear the biggest responsibility for
having triggered the innovation process:
1. Charles Lickel: Corporate Vice President of Software Research, he led a worldwide team of 1,000+
software researchers for IBM, and he was responsible for moving IBM into social software, Big
Data and Stream Computing; in particular, in 2004 he raised the idea of the Watson Jeopardy!
Grand Challenge;
2. Paul Horn: IBM Senior Vice President and director of research, he was the first one to back
Lickel up and accept the Jeopardy! Challenge;
3. David Ferrucci: he is an IBM Senior Manager and part of the research staff. He leads the Semantic
Analysis and Integration department at the IBM T. J. Watson Research Center, Hawthorne, New
York. Ferrucci is the principal investigator for the DeepQA/Watson project. Ferrucci’s
background is in artificial intelligence and software engineering. He has been the leader of the
IBM Watson Research Team;
4. Guruduth Banavar: one of the most important people in the IBM Watson development, he is Vice
President and Chief Science Officer for cognitive computing at IBM. He is responsible for
advancing the next generation of cognitive technologies and solutions with IBM's global scientific
ecosystem, including academia, government agencies and other partners. In particular, he led the
team responsible for creating new cognitive systems in the family of IBM Watson;
5. Sam Palmisano: President and CEO of IBM from 2003 to 2011. He also served as Chairman of
the company until October 1, 2012. His goal was to re-establish IBM as a standard-setting
company. He was influenced by the Watsons, the company founders who "always defined I.B.M.
as a company that did more than sell computers; they believed that it had an important role to play
in solving societal challenges";
6. Ginni Rometty: She is the current chairwoman, President and CEO of IBM, and the first woman
to head the company. One of her goals is to focus company efforts on the cloud and cognitive
computing systems, such as Watson;
7. Cathy Lasser: Vice President and CTO for the Distribution Sector of the IBM Sales & Distribution
division. Her role was to leverage IBM’s technical community to bring value to clients in the
retail, consumer packaged goods and travel and transportation industries;
8. Manoj Saxena: First GM of IBM Watson. He was the unit's employee number one. Within three
months, he had been joined by 107 new Watson staffers, mostly technologists in the fields of
natural language processing and machine learning.
Other people whose contributions have deeply impacted the technology development are a mix of
IBM researchers and other professionals coming from partner universities. The team responsible for
the work described in this paper is listed below.
From IBM: Andy Aaron, Einat Amitay, Branimir Boguraev, David Carmel, Arthur Ciccolo, Jaroslaw
Cwiklik, Pablo Duboue, Edward Epstein, Raul Fernandez, Radu Florian, Dan Gruhl, Tong-Haing Fin,
Achille Fokoue, Karen Ingraffea, Bhavani Iyer, Hiroshi Kanayama, Jon Lenchner, Anthony Levas,
Burn Lewis, Michael McCord, Paul Morarescu, Matthew Mulholland, Yuan Ni, Miroslav Novak,
Yue Pan, Siddharth Patwardhan, Zhao Ming Qiu, Salim Roukos, Marshall Schor, Dafna Sheinwald,
Roberto Sicconi, Kohichi Takeda, Gerry Tesauro, Chen Wang, Wlodek Zadrozny, and Lei Zhang.
From the academic partners: Manas Pathak (CMU), Chang Wang (University of Massachusetts
[UMass]), Hideki Shima (CMU), James Allen (UMass), Ed Hovy (University of Southern
California/Information Sciences Institute), Bruce Porter (University of Texas), Pallika Kanani
(UMass), Boris Katz (Massachusetts Institute of Technology), Alessandro Moschitti and Giuseppe
Riccardi (University of Trento), and Barbara Cutler, Jim Hendler and Selmer Bringsjord (Rensselaer
Polytechnic Institute).
Considering IBM's size and its huge number of employees, it was a big challenge for the company to
dedicate a lot of resources to the innovation effort, keeping the flexibility needed to catch new
opportunities and turns while, at the same time, continuing with its traditional business. To better
tackle the challenge, in January 2014 IBM announced it was spending $1 billion to launch the Watson
Group, a new business unit dedicated to the development and commercialisation of cloud-delivered
cognitive innovations.
IBM also had to make some bold structural moves in order to create an organization that could both
function as a platform and collaborate with outsiders for open innovation. They carved out the
Watson Group as a new, semi-autonomous, vertically integrated unit, reporting to the CEO. They
brought in 2,000 people, a dozen projects, a couple of Big Data and content analytics tools, and a
consulting unit (outside of IBM Global Services). IBM's traditional annual budget cycle and business
unit financial measures weren't right for Watson's fast pace, so, as Mike Rhodin stated: "I threw
out the annual planning cycle and replaced it with a looser, more agile management system. In
monthly meetings with CEO Ginni Rometty, we'll talk one time about technology, and another time
about customer innovations. I have to balance between strategic intent and tactical, short-term
decision-making. Even though we're able to take the long view, we still have to make tactical
decisions."
Today, as the wave of digitisation continues to grow and envelop all the world's enterprises, IBM is
at a crucial juncture once again. CEO Ginni Rometty is leading the company into new areas, betting
big on its Watson software and cloud computing. But these new services have yet to grow fast enough
to offset the profit declines in the company's eroding legacy products.
This time, the transformation IBM faces is far more difficult, for two reasons. The first is simply the
profound scope and depth of the change. IBM is leading the charge for a radical (and therefore risky)
transformation in its client companies: using automated expertise on a large scale to efficiently solve
problems too huge and complex for humans to conquer on their own. The second is that IBM's cloud-
based services will likely cannibalise three IBM mainstays: computer hardware, software, and data
center services. By making such a bet, IBM will face much greater resistance than former CEO Lou
Gerstner faced in the 1990s, both internally (business units protecting turf) and externally (investors
decrying the poor short-term results and future risks).
To pull off this transformation, IBM has to make a number of changes in thinking and practice, and
in its culture, that can be especially hard for established companies.
6.2.1. Company Transformation
According to Philipp Gerbert, C-level involvement is a fundamental aspect for all companies that
intend to work with artificial intelligence. In particular, during the interview, he said that to deal
with AI, a new structure of both centralised and decentralised activities is needed. Large companies
need to embrace the adaptive and agile ways of working and setting strategies that are common in
start-ups. It is important that executives identify where AI can create the most significant and durable
advantages. To successfully identify these areas, managers should be familiar with the current and
emerging capabilities of the technology and the required infrastructure.
Research published by the MIT Sloan Management Review in September 2017 reveals large gaps
between today's leaders, companies that already understand and have adopted AI, and laggards.
Interestingly, the leaders not only have a much deeper appreciation of what's required to produce
AI than laggards, they are also more likely to have senior leadership support and to have developed
a business case for AI initiatives. Very often, for large companies, the culture change required to
implement AI is daunting.
All these challenges are equally experienced by AI system providers, like IBM, and by their clients.
As described in the former part of this chapter, Watson was born in IBM's laboratories and only in a
later phase was it spread throughout the organisation. To successfully diffuse the knowledge and
skills related to cognitive systems in all the IBM offices around the world, IBM adopted an
interesting strategy. It developed an internal challenge targeting all the employees from all the
offices, across all the different responsibilities. The main objective of the challenge was to teach
IBMers how to use (and sell) the cognitive technology, and to do that, a cultural transformation was
required. Managers were in charge of coordinating the different teams participating in the challenge,
and had direct contact with Watson and its features, acting as technology sponsors. This event was
called "Cognitive Build" and its details and dynamics will be analysed in depth in the next chapter.
Cognitive Build was extremely successful both in terms of the number of participants and of the
practical results achieved; for this reason, IBM decided to offer a Cognitive Build-like service to its
clients, in order to help them cope with one of the biggest challenges that a company has to face
while moving into the cognitive world. The notion that executives and other managers need at least
a basic understanding of AI is echoed by executives and academics. J.D. Elliott, director of enterprise
data management at TIAA, a Fortune 100 financial services organization with nearly $1 trillion in
assets under management, adds: "I don't think that every frontline manager needs to understand the
difference between deep and shallow learning within a neural network. But I think a basic
understanding that — through the use of analytics and by leveraging data — we do have techniques
that will produce better and more accurate results and decisions than gut instinct is important."
As already said in the literature chapters, general-purpose technologies, like the steam engine,
electricity, and now information technology, always take several decades to unleash their full
potential, because businesses need to learn how to organise themselves to best leverage their power.
Philipp Gerbert, during the interview, said that the increasing intelligence of machines will be wasted
unless businesses reshape the way they develop and execute strategy. Business leaders must start
thinking now about how they can integrate their two key assets, people and technology, or risk
falling behind, and this is what IBM is doing both with its own business and with its clients.
6.3. Watson Functioning
Watson started out as a single natural-language Question Answering API (Application Programming
Interface); today, it consists of more than 50 technologies. Its offering can be sorted into eight main
focus areas: Commerce, Education, Financial Services, Health, IoT, Marketing, Supply Chain, and
Talent and Work.
Roberto Villa, during the interview, explained the most significant steps in Watson's development:
1. Technological improvement: the supercomputer became capable of processing an enormous
set of different data types very quickly. Those data sets are usually referred to as Big Data,
whose defining characteristics are volume, variety and velocity. The technology is no longer
limited to being an instrument for data elaboration, but is a perfect ally in understanding and
interpreting elaborated data;
2. Bluemix development: Bluemix is a cloud platform as a service (PaaS) developed by IBM
that gives easy access to everybody interested in the technology. It supports several
programming languages and services, as well as integrated DevOps (Development and
Operations: a culture that automates the process of software delivery and infrastructure
changes, fostering collaboration among different IT professionals) to build, run, deploy and
manage applications on the cloud. Without Bluemix, Watson would have been a very
expensive piece of software that only wealthy companies could access. Thanks to the cloud
technology, instead, every entrepreneur, whether working for a large company or a new
start-up, or still a student, can access the Watson technology and build with it. Bluemix was
announced as a public beta in February 2014 and made generally available in June 2014. The
platform is not only an enabling factor for the diffusion of Watson, it is one of its building
blocks: it is in fact necessary to use Bluemix to develop any Watson application.
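Since Bluemix is built on open-source Cloud Foundry technology, deploying an application typically follows the standard `cf push` workflow. Below is a minimal sketch of a deployment manifest; the application name, memory size and bound service name are invented for illustration:

```yaml
# manifest.yml -- hypothetical example; all names are placeholders
applications:
- name: my-watson-app          # application name on the platform
  memory: 512M                 # runtime memory for the container
  instances: 1                 # number of running instances
  services:
    - my-conversation-service  # a previously created Watson service instance
```

With such a manifest in the project directory, running `cf push` uploads the code, builds it with the appropriate buildpack, and starts the application on the platform.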
Figure 6.7 – Interactions Between Bluemix Architecture, Clients and Developers (IBM, 2016)
Roberto Villa defined the cloud platform as a business model enabler: it allows cost optimization and
gives customers the possibility to focus on their business.
With Bluemix, developers can use IBM services to create, manage, run and deploy various types of
applications for the public cloud, as well as for local or on-premises environments. IBM in fact offers
three deployment models for IBM Bluemix: public, dedicated or hybrid. The cost of the IBM Bluemix
cloud platform varies depending on the resources used, runtime, support and other factors. The pay-
as-you-go tier allows users to pay only for the resources they use, and includes half a GB of runtime
and container memory for free. The subscription tier involves a fixed monthly bill for public,
dedicated or local environments, and also provides custom discounted prices.
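As a back-of-envelope illustration of how such a pay-as-you-go tier works, the sketch below charges only for memory above the free allowance. The 0.5 GB free allowance is the only figure taken from the text; the price per GB-hour is purely hypothetical, not an actual IBM rate:

```python
# Illustrative pay-as-you-go billing with a free allowance.
FREE_GB = 0.5            # free runtime/container memory (from the text)
RATE_PER_GB_HOUR = 0.07  # hypothetical $/GB-hour, for illustration only

def monthly_cost(avg_gb: float, hours: float = 730) -> float:
    """Bill only the memory used above the free allowance."""
    billable_gb = max(0.0, avg_gb - FREE_GB)
    return round(billable_gb * RATE_PER_GB_HOUR * hours, 2)

print(monthly_cost(0.5))  # 0.0 -> fully covered by the free tier
print(monthly_cost(2.0))  # 1.5 GB billable for the month
```

The point of the model is visible in the first call: a small application fitting inside the free allowance costs nothing, which is exactly what lowered the barrier for students and start-ups.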
As Nicola Palazzo stated during the interview, the cloud platform enables everyone to adopt a “do it
yourself” approach while planning and developing a new application or business concept; however,
this does not imply a complete withdrawal by IBM. On the basis of the customer's needs and
possibilities, IBM offers a wide range of support: from completely do-it-yourself development to a
completely guided and supported process.
Another fundamental aspect in the development of the Watson offering has been the creation of an
ecosystem. The IBM Watson Ecosystem, launched in November 2013, is a partner program for
companies and start-ups to leverage IBM Watson services.
To date, more than 1,500 individuals and organizations have contacted IBM to share their ideas for
creating cognitive computing applications that redefine how businesses and consumers make
decisions, and global developers have created Watson apps across a variety of industries, many of
which went to market in 2014.
The IBM strategy to capitalize on the Watson innovation also finds expression in a $100 million
venture fund supporting start-ups and businesses that are building Watson-powered apps using the
“Watson Developers Cloud.” The results of the investment arrived promptly, as more than 2,500
developers and start-ups have reached out to the IBM Watson Group since the Watson Developers
Cloud was launched in November 2013.
Apart from the official Watson Ecosystem, there is another ecosystem strongly connected with
IBM Watson: the one related to the Internet of Things (IoT).
The Internet of Things is the internetworking of physical devices, vehicles, buildings and other items,
embedded with electronics, software, sensors, actuators, and network connectivity that enable these
objects to collect and exchange data.
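As a toy illustration of the kind of data such connected objects collect and exchange, the sketch below serialises one sensor reading as a JSON message; the field names and the device id are invented for the example:

```python
import json
import time

def sensor_message(device_id: str, temperature_c: float, humidity_pct: float) -> str:
    """Serialise one sensor reading as a JSON payload, as an IoT
    device might publish it over the network."""
    return json.dumps({
        "deviceId": device_id,            # hypothetical identifier
        "ts": int(time.time()),           # Unix timestamp of the reading
        "readings": {
            "temperature_c": temperature_c,
            "humidity_pct": humidity_pct,
        },
    })

print(sensor_message("thermostat-42", 21.5, 48.0))
```

A cognitive system sits at the other end of such messages, turning streams of readings into interpretations and decisions, which is where the link to Watson arises.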
The reason why the Internet of Things finds an obvious link with IBM Watson is that the vision of
this field has recently evolved due to a convergence of multiple technologies, including those
technologies that have been the focal point during the development of IBM Watson.
The main difference between the two ecosystems (Watson and Watson IoT) is that the one related to
the Internet of Things was built long before the idea of IBM Watson came up. It is not a new field: it
already has applications, and companies around the world partner to create new connected
devices; but the arrival of the cognitive era and the creation of a powerful technology like IBM
Watson opens up a tremendous number of new possibilities and applications for the IoT.
Phil Westcott, European Ecosystem Leader of IBM Watson Group, on November 19th 2015, during
the IBM Watson official launch of its Ecosystem Partner Program in the Netherlands at the How to
Get There Summit (HTGT), said: “We are looking for start-ups and tech companies that have [these]
disruptive solutions. We are going to partner with them and actually developing their business and
then we are both going to share that reward.”
The idea of the ecosystem is to create a synergy between the company providing the enabling
technology (IBM) and the partner firm.
One of the pillars of the Watson business model, which is enabled by the Bluemix cloud platform, is
the API concept. An API, which stands for Application Programming Interface, is a set of subroutine
definitions, protocols, and tools for building application software. In general terms, it is a set of clearly
defined methods of communication between various software components. A good API makes it
easier to develop a computer program by providing all the building blocks, which are then put together
by the programmer.
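A minimal sketch of how such an API building block is typically consumed over HTTP: the endpoint URL, payload shape and authorization header below are all hypothetical, shown only to illustrate the general REST/JSON pattern rather than any actual IBM endpoint:

```python
import json
import urllib.request

API_URL = "https://example.com/api/v1/translate"  # hypothetical endpoint

def build_request(text: str, source: str, target: str) -> urllib.request.Request:
    """Assemble (without sending) a JSON POST request to the API."""
    body = json.dumps({"text": text, "source": source, "target": target}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <your-api-key>",  # placeholder credential
        },
        method="POST",
    )

req = build_request("Hello, world", "en", "it")
print(req.full_url, req.get_method())
```

The request is only assembled here, not sent; sending it would require a real endpoint and valid credentials. The point is that the programmer composes an application out of such calls rather than implementing the capability from scratch.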
In the last few years a new era of IT computing, powered by the API Economy, has emerged. It allows
companies to reimagine their business processes and customer experience, and to introduce innovations
in new products and services. There are five main reasons why companies should embrace web APIs
and become active participants in the API Economy:
1. Grow the customer base by attracting customers to the company's products and services;
2. Drive innovation by capitalizing on the composition of different APIs, both the company's own and third parties';
3. Improve the time-to-value and time-to-market for new products;
4. Improve integration (across channels) with web APIs;
5. Participate in a new era of computing and prepare for an uncertain future.
By working with APIs, IBM can help other companies move into the API Economy through the
use of its platforms, tools, and resources, making the transition seamless and natural. To provide a
high-quality and always up-to-date offer, IBM is working with developers and third-party providers to
advance a marketplace that enables the sharing of web APIs and other essential capabilities. The
API-based business model consists of the commercial exchange of an organisation's business functions,
competences and capabilities as open digital services. In practice, as Roberto Villa also
explained during the interview, IBM provides an API marketplace through the Bluemix platform.
Companies willing to innovate their business, or start-ups with a brand-new concept to develop,
can access the platform and purchase all and only the building blocks they need to build their
application. They do not have to pay for an “API package” when they might be interested in just a
part of the whole: they can purchase a 100% customised set of APIs and pay only for what they
actually use in their business. Phil Westcott, European Ecosystem Leader at IBM Watson, said: "We
now live in an ‘API economy’ where anyone with a vision can rapidly deploy ground-breaking
applications without the need for huge capital investment in IT infrastructure, nor an army of software
developers... This is allowing small businesses to compete and disrupt in every industry. The platform
will enable entrepreneurs and developers to rapidly prototype cognitive solutions powered by the
Watson API services.”
The API model is dynamic: the offer is continuously updated with new, improved capabilities. Some
of them are developed internally by IBM researchers, others within partnerships. When a company
needs a functionality that is not yet part of the IBM offer, it can partner with IBM to jointly develop
it in order to power its business. At the end of the development session, though, the new program
goes through a standardization process, making it available and useful to any new developer
accessing the marketplace.
At its debut, the Bluemix platform comprised eight APIs for cognitive building:
1. Language Identification: which can determine what language a given text is written in (from
a predetermined set of 25);
2. Machine Translation: which translates text between multiple language pairs;
3. Concept Expansion: which can take a colloquialism (say, "tri-state area") and map it to a set
of meanings based on context (New York, New Jersey, and Connecticut);
4. Message Resonance: which can determine the popularity of a given word with a
predetermined audience;
5. Question and Answer: which provides "direct responses to user inquiries fueled by primary
document sources." Topics in health care ("What causes scabies?") and travel ("Which
museums are in Manhattan?") are offered as the first two knowledge bases;
6. Relationship Extraction: which can parse sentences to determine the relationships between
components as a way for other analytic systems to better understand the significance of what's
being discussed. For example, if fed "Mark Wahlberg spoke yesterday about his new film," it
would understand that "Mark Wahlberg" is a person, "yesterday" is a time reference, and
"film" is the object of the sentence;
Figure 6.4 – API Development Cycle
7. User Modelling: which employs linguistic analysis to make predictions about someone's
social characteristics from a supplied text;
8. Visualization Rendering: which generates data visualizations from different kinds of data, not
merely pie or bar charts, but also the likes of flow charts and node graphs.
Today the APIs have doubled in number, and can be divided into five different categories: language,
speech, vision, data insights and embodied cognition. Language is the most explored category, with
nine dedicated APIs.
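As a toy, purely local illustration of what a Concept Expansion-style API does, the sketch below maps a colloquialism to a set of concrete meanings. The mapping table is invented for the example; the real service derives such associations from context rather than from a fixed dictionary:

```python
# Hypothetical, hard-coded stand-in for a Concept Expansion service.
EXPANSIONS = {
    "tri-state area": {"New York", "New Jersey", "Connecticut"},
    "the big apple": {"New York City"},
}

def expand(phrase: str) -> set:
    """Return the set of meanings for a known colloquialism,
    or an empty set when the phrase is unknown."""
    return EXPANSIONS.get(phrase.lower(), set())

print(sorted(expand("Tri-State Area")))  # ['Connecticut', 'New Jersey', 'New York']
```

The contrast with this toy version is the whole point of the cognitive service: where the dictionary above must be written by hand, the API learns the phrase-to-meaning mapping from the surrounding text.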
6.3.1. Learning Watson
New products are proliferating, and research shows that the most successful among them have clearly
benefited from customers’ ease in learning the advantages and applications of those products quickly,
in finding out how the devices actually work, and in sharing their knowledge with friends.
The possibilities to learn and explore how to build with Watson are numerous, as IBM needs, more
than merely wishes, to educate a large audience in the use of its new technology:
• Watson Webinars and Guides: the first option is to attend free webinars on the official Watson
website. These webinars can be considered technical deep-dive sessions that give users
the means they need to start developing cognitive applications on the Watson Developer Cloud.
The webinars are held on a weekly basis, and old sessions are available for replay for free.
During the webinar it is possible to interact with the presenter through a series of options
available at the bottom of the page. There is a “Help” button that redirects the user to a menu
with FAQs and other information; it is possible to manage the slides and the media player;
and there is a specific section where attendees can ask questions and get answers. There is
also a resource list that allows the user to deepen the webinar topics, and an appreciation
survey. Finally, one can get information about the speakers and post comments on the IBM
Watson Twitter profile. To attend a webinar it is necessary to register for the event. The
registration page contains a summary of the webinar contents and speakers, in addition to the
date, time and duration. On the Watson website there is a section named “Built with
Watson” where, in blog form, stories tell how cognitive computing is transforming
our world. Among the blog articles it is possible to find Watson Guides, technical articles
that guide users step by step, showing them how something works on Watson and how they
can do it.
• IBM Watson Academy: IBM Watson Academy is the main asset for learning about cognitive
computing. The material provided on the website is for both beginners and experts. It is
possible to search for a specific topic and to enrol in a course; some courses are available by
invitation only. On the right side of the “Course Overview” page there is a list of the online
users, with whom it is possible to chat. The educational material is divided into six groups:
Cognitive Solutions, Watson Platform, Watson Health, Watson Analytics, Watson Internet of
Things, and Other Watson Resources. The courses are free both for IBMers and for external
users; possession of an IBM ID is the only requirement. IBMers can log in with their IBM.com
email address and get access to all the contents. During the educational process, the user can
earn badges. There are eight different kinds of badge (Explorer, Advocate, Achiever,
Excellence, Apex, Inventor, Profession Certification and Professional Certification), and they
allow users to easily and quickly share verified proof of their achievements wherever and
whenever they choose. Practically speaking, besides being a recognition of the student's
achievement, the badges also help decrease the information asymmetry between the student
and an external entity. The badges can be earned by anyone, apart from a few that are reserved
for IBMers. A project that in some way completes the IBM Academy offer is the Think
Academy, IBM's social and digital platform for educating employees, clients, partners and
friends on growth topics critical to the company's success.
• Watson Developer Conference: another important learning opportunity, mainly addressed to
developers, is the Watson Developer Conference. The first ever conference took place on
November 9-10, 2016 at the Innovation Hangar in San Francisco, CA. It is a terrific chance
to learn about the latest Watson technologies, to meet the Watson team and to talk with partners
that are already building cognitive solutions. The conference includes laboratories,
conversations with leaders, certifications (attendees can be certified as Watson Application
Developers) and code competitions. The number of participants is limited, and registration is
required. The cost to attend the conference is $295.
• Watson DeveloperWorks: a community platform through which developers can learn
and grow their skills with how-to tutorials and courses; develop, build and deploy apps using
IBM product trials, free downloads and cloud services; and make connections, get answers,
and interact with IBM experts and developers in the Developer Centers, forums, and blogs.
For each relevant topic it is possible to find tutorials and training in article or video form
(DeveloperWorks TV), as well as practical tools or code snippets available for free or for a
payment. It is possible to buy a premium account on DeveloperWorks, whose basic program
costs $398.96 per year. Thanks to the membership, developers can build new skills on the
latest cloud technologies such as Watson, mobile and IoT using powerful tools and services,
and enhance their marketability and advance their careers with educational resources,
certifications, videos, and online books from top publishers. In addition, premium members
can attend key developer events at a discount. There are many communities around IBM
products, and each of them has a specific central topic and specific dynamics: in particular,
there are 49 different communities, grouped in 13 categories. The IBM communities are also
called Developer Centers; their aim is to let users learn from the experts and share with other
developers in order to grow faster. There is a blog where articles concerning the different
topics around the IBM world are published by featured authors coming from outside IBM.
• Learning Lab: Learning Lab is a service addressed to makers, developers and problem solvers.
It offers 96 online courses, through which one can increase one's skills, and 35 use cases. The
catalogue of online courses is divided into two sections: development and marketing. The
courses are organized by IBM itself, through platforms like IBM Watson Academy and
DeveloperWorks, or by IBM partners like Codecademy, Coursera and Big Data University.
The courses of the Learning Lab are available on the Marketplace, and some of them are
granted for free.
Through the information and education means listed above, IBM is trying to spread the knowledge
and the skills connected with the use of Watson. Recalling what Rosenberg and Trajtenberg
highlighted while studying the diffusion of another GPT, the steam engine, this education strategy
might be very helpful in fostering the steering of the technology into different sectors, as they noted
that the level of spill-overs played a relevant role in a GPT's diffusion.
Empirical Setting
7. Watson Journey: From the Laboratories to the Market
7.1. Introduction
As already said in the previous chapter, the chosen case study has many different and interesting
peculiarities which are worth investigating. In particular, this chapter describes the commercialisation
strategy IBM implemented, starting from the very beginning, when Watson was only a crazy project
in the hands of a group of researchers. The commercialisation of a new technology is a tricky phase
for any company that wants to innovate, whether it is a start-up or a large company. Yet only a few
radically new IBM technologies have moved smoothly from lab to customer and produced substantial
new revenues. Like many companies, IBM often lacks clear mechanisms for fitting innovations into
its already complex product portfolio. One executive expressed a common frustration within the
company by saying, “At IBM, new products aren’t launched, they escape.” Probably, because of the
awareness regarding the complexity of its mechanisms, and the lessons learnt from its long history of
innovation, the launch of Watson has been scrupulously planned. A number of initiatives, both
internal and external to the company, have been implemented, and the process is still ongoing.
7.2. Watson Commercialization
The process of introducing a new product or service into commerce, making it available on the
market, is always a challenging phase, with a high-intensity level of investments. The
commercialization process has three key aspects:
1. The funnel. It is essential to look at many ideas to get one or two products or businesses that
can be sustained long-term;
2. Commercialization is a stage-wise process, and each stage has its own key goals and
milestones;
3. It is vital to involve key stakeholders early, including customers.
Because of these characteristics, it is important to identify the proper commercialization strategy
during each phase of product development. A common and successful approach is to implement
more than one initiative at the same time, in order to reach different market players. Entrepreneurs
seeking to commercialize their technical innovations often rely on partnerships, such as technology
licenses or strategic alliances, with other organizations. They do so to complement the organizational
skills or assets the new enterprise may not possess. An important obstacle to a cooperative
commercialization strategy, however, is convincing the partner of the value of the firm’s technical
invention. What IBM is doing through its commercialization strategy, is two-folded: on one hand, it
is working to build trust and confidence around Watson, thanks to big and largely advertised
conferences, on the other hand, it is keeping exploring the technology possibilities and enlarge its
reaching through a series of internal and external activities. A particular aspect of the spreading of
the technology is posing some additional challenge to IBM, comparing to other technology: the
geographical expansion. Unlike simple speech recognition, Watson understands context and learns
continuously over time, just like humans. It works to grasp the real intent of the user’s language (and
even tone), and apply a broad array of linguistic models and algorithms to extract logical responses
and draw inferences to potential answers.
By teaching Watson different languages, it is more likely that IBM can spread the technology
throughout the globe, which is another way IBM is attempting to make Watson a big business.
However, the move to geographically spread the adoption of the Watson technology carries more
issues than the spread of other, more traditional, physical products or services.
The first step of the global expansion is teaching Watson the local languages to facilitate the use by
companies where English is not the main language. Being a cognitive system, the way IBM Watson
learns a language goes far beyond simple translation. Watson is trained to understand the cultural
context of a word, sentence and nuances of unique idiomatic expression. IBM is “actually teaching
Watson to understand the grammar, the nuances of the culture, and how the spoken word handles the
nuances of meaning,” said Rhodin, senior vice president, IBM Watson Group.
7.2.1. Watson Conferences
Communication plays a fundamental role in all facets of business, in particular in commercialization.
Without good business communication, the internal and external structure of a business can face
numerous challenges that can ultimately lead to its demise. External business communication is any
information the company distributes to the public, either about the organization itself or about its
products and services. Event marketing is a promotional strategy that involves face-to-face contact
between companies and their customers at special events. The practice works because it engages
consumers while they are in a willing, participatory position. A successful event marketing campaign
provides value to attendees beyond information about a product or service. It is with this mindset that IBM
organised a series of Watson related events throughout the last years. The events took the form of
conferences.
The first Watson conference took place on October 8th 2014, on the occasion of the opening of IBM's
new global Watson headquarters at 51 Astor Place in Silicon Alley, which served as the venue. It was
held almost three years after Watson's debut on Jeopardy!.
The event aimed to reach the highest possible number of people: to inform the world about the
Watson technology, its potential and the initiatives and partnerships already carried out, and to tell
of the future chapter IBM is going to write with this new technology. To reach the public beyond the
Watson headquarters' boundaries, free streaming access was given on the IBM Watson YouTube
channel; the same solution has been used for all the other Watson conferences. It was a small event
compared to those that followed in 2015 and 2016, which attracted an ever-growing number of
people. The first real World of Watson conference took place on May 4-5, 2015, at the Brooklyn
Navy Yard, New York. It hosted 1,500 people; the conference was sold out, gathering people from
26 industries and 32 countries, and was designed to demonstrate how the supercomputer is being
used across 17 different industries. The name the company chose for the event is meaningful: it
remarks that, in the few months since its first presentation, IBM Watson had evolved from a system
into an ecosystem, becoming a whole set of partners, applications and new projects. During the
opening, IBM Chairman, President and CEO Ginni Rometty shared the Watson Group's vision,
saying: “What we are doing TOGETHER will change the world”.
Figure 7.1 – World of Watson 2016
The interest around the event has grown tremendously: the last World of Watson conference took
place on October 24-27, 2016 at Mandalay Bay, Las Vegas, and drew 17,000 attendees, according to
the company. The conference explored how companies, including retailers, educators, human
resources departments and financial institutions, among others, can use Watson. The price for
attending the whole World of Watson conference was $2,495; the Cognitive Experience cost $1,095
and the Discover Experience $109, with discounted prices for client and business partner groups.
The conference hosted 48 speakers, who fall mainly into two categories: Watson experts and Watson
users. Watson experts are, generally, IBMers who shared their vision of the future Watson is going
to shape in the next three to five years, and who talked about its history and the results it has already
accomplished. Among the Watson users there were professionals coming from very different
industries and with different Watson experiences. The sharing of their experiences, coupled with
practical laboratories and workshops, was meant to communicate to the audience that the Watson
cognitive system is real, and that it is already working and improving lives. Some areas were devoted
to delivering cognitive experiences to the audience, through booths developed with IBM partners.
The IBM World of Watson was also available on the IBM Events app. “The technology is not even
moving fast. It's accelerating. It's moving faster and faster every day,” said John Kelly III, senior
vice president of Cognitive Solutions and IBM Research. In order to reach all the interested public,
the majority of the contents of the conference is available on demand on the IBMGO platform.
7.3. Watson Activities
In order to ensure the fastest and deepest technology development, not to miss any chance of
exploiting the largest number of IBM Watson applications and services, and to collect the biggest
amount of new ideas and projects that might use IBM Watson, IBM has set up a series of initiatives,
both conducted inside the company and directed to external stakeholders. The initiatives have
evolved and changed over time, in accordance with the Watson development phase. In particular,
two variables were the main object of the changes:
• Purpose of the initiatives: at the beginning, all the activities around Watson had an explorative
aim. The technology was far from reaching its performance potential and there was no clear
development or marketing direction for it. The activities in this early stage were oriented to
increasing the technology's capabilities, trying to obtain higher processing power, both in
terms of depth of analysis and of elaboration time. With the improvement of the technology's
performance, the purpose of the initiatives started to shift gradually from pure research
around the technology to a more commercial side, looking for possible applications into
which the technology could be integrated;
• External actors' involvement: in the first phases, the majority of the activities were carried out
internally, to be more precise, in the company's laboratories, where researchers worked around
the technology to create Watson. A few collaborations with carefully chosen partners (always
universities or other research institutions) were established to help IBM proceed with the
technology's exploration and development. A second phase showed a greater openness towards
external actors: the level of sophistication of the technology was sufficient to settle new
commercial partnerships with the aim of developing marketable applications. Besides, some
occasional relationships were established thanks to the organisation of large-audience
hackathons. In parallel with this openness towards the market, IBM experienced an internal
transformation while bringing the newborn technology from the laboratories to all the
employees. Considering the company's size, it was no small challenge.
The different activities can be classified into four categories:
1. Classical research: internal research, and research partnerships with universities and companies;
2. Partnership: commercial partnerships with companies to develop Watson-based applications;
3. University program: to find new uses and deepen the system's capabilities;
4. Hackathon: with external actors (start-ups or students) or IBMers.
Figure 7.2 – Timeline of the Different Activities Supporting Watson’s Commercialization
7.3.1. Research
In general terms, the goals of IBM Research are to advance computer science by exploring new ways
for computer technology to affect science, business, and society.
IBM's research and development efforts are spread all around the world through twelve laboratories
located on six different continents. IBM classifies the laboratories (all but one) by geographical
area: Africa, Almaden, Austin, Australia, Brazil, China, Haifa, India, Ireland, Tokyo, Watson (New
York) and Zurich. It is the largest industrial research organization in the world.
Research topics related to the most important industry and technology trends at a worldwide level
are shared among all the laboratories, while each laboratory deals with further topics that emerge
from its specific economic and social context.
The past few years have been landmark years for the discussion around artificial intelligence and its
potential impact on business and society. Through its laboratories IBM explored a fascinating and
diverse set of issues related to the powerful cognitive technologies that are emerging to augment
human capacity and understanding. Currently, projects related to cognitive computing are carried
out in every IBM laboratory, and updates about new applications and results are frequently available.
As already said, IBM Research was looking for a major research challenge to rival the scientific and
popular interest of Deep Blue, the computer chess-playing champion (Hsu 2002), one that also would
have clear relevance to IBM's business interests. With QA in mind, the team settled on the challenge
of building a computer system, called Watson, that could compete at the human champion level, in
real time, on the American TV quiz show Jeopardy!. The journey to success was not straightforward.
Early on in the Watson project, the research team's attempts to adapt PIQUANT (Chu-Carroll et al.
2003) to compete on Jeopardy! failed to produce promising results. They devoted many months of
effort to encoding algorithms from the literature, but every attempt had a very low impact on the
objective. After many failures, the research team understood that, if they were to obtain different
results, they had to do things differently. One of the leaders of the Watson project, during an
interview with AI Magazine in 2010, said: “We ended up overhauling nearly everything we did, including our basic technical
approach, the underlying architecture, metrics, evaluation protocols, engineering practices, and even
how we worked together as a team. We also, in cooperation with CMU, began the Open Advancement
of Question Answering (OAQA) initiative. OAQA is intended to directly engage researchers in the
community to help replicate and reuse research results and to identify how to more rapidly advance
the state of the art in QA” (D. Ferrucci).
As the results dramatically improved, the team observed that system-level advances, allowing rapid
integration and evaluation of new ideas and new components, were essential to the progress. One
important aspect of the research journey the team was undergoing has been the early involvement
of external actors in the development process. Different actors were involved through different
relationships and with different aims:
• Workshops: workshops are short, usually one-day events where the company has
the opportunity to quickly test its research advances on a large and skilled audience,
collecting at the same time interesting suggestions about which aspects should be deepened
and which have no future. Usually, only one feature is taken into consideration during each
workshop; workshops tend to be highly specific, so there is no need for the audience to
have complete knowledge of the product and the research's final objectives. For
example, to help the team understand that they were heading in the right direction, it was
fundamental that their first guesses were echoed at the OAQA workshop for experts with
decades of investment in QA, hosted by IBM in early 2008. Through the workshops, the team
allowed component results to be consistently evaluated in a common technical context against
a growing variety of what were called “Challenge Problems.” Different challenge problems
were identified to address various dimensions of the general QA problem.
• Research agreements: the team leveraged many university partnerships to drive Watson to its final goal and to openly advance QA research. For example, during the preparation for the competition, experiments were performed in collaboration with Carnegie Mellon University (CMU) using OpenEphyra, an open-source QA framework developed primarily at CMU. Other collaborations were leveraged in a second phase, after Watson had defeated its Jeopardy! rivals, to move the technology from its embryonic stage to a commercially viable one. By 2012, two healthcare organisations had started piloting Watson. WellPoint, one of the biggest US insurers, was one of the pair of companies that helped define the application of Watson in health. The other was Memorial Sloan-Kettering Cancer Center (MSKCC), an organisation IBM already had a relationship with, located not far from both IBM's own Armonk headquarters and the research laboratories in Yorktown Heights, New York, that still house the first Watson. It was this relationship that helped spur Watson's first commercial move into the field of cancer therapies.
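The "common technical context" mentioned above, in which interchangeable QA components were scored against shared Challenge Problems, can be sketched as a small evaluation loop. This is an illustrative sketch only; the function names and the toy challenge set are the author's own, not part of the OAQA framework:

```python
# Illustrative sketch: evaluating interchangeable QA components against a
# shared set of "Challenge Problems", in the spirit of a common technical
# context. All names and data here are hypothetical.
from typing import Callable

Component = Callable[[str], str]  # maps a question to a candidate answer

def evaluate(component: Component, challenge_set: list[tuple[str, str]]) -> float:
    """Fraction of challenge questions the component answers correctly."""
    correct = sum(1 for question, answer in challenge_set
                  if component(question) == answer)
    return correct / len(challenge_set)

# Two toy components scored in the same context become directly comparable.
challenge_set = [("2+2?", "4"), ("capital of Italy?", "Rome")]
baseline: Component = lambda q: "4"   # always answers "4"
print(evaluate(baseline, challenge_set))  # → 0.5
```

Because every component is measured against the same problem set, an improvement in one component's score is directly attributable to that component, which is what made rapid integration of new ideas possible.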
During the journey that led IBM researchers to give birth to Watson, the team introduced into the research process an element that turned out to be a fundamental game changer: experimentation.
Rapid experimentation was a critical ingredient of Watson's success. The team conducted more than 5,500 independent experiments in 3 years, each averaging about 2,000 CPU hours and generating more than 10 GB of error-analysis data. The main element that allowed the introduction of rapid experimentation was the upgrade of the technology, which acted as an enabler of new research practices. Without DeepQA's massively parallel architecture and a dedicated high-performance computing infrastructure, they would not have been able to perform these experiments, and likely would not have even conceived of many of them.
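A quick back-of-the-envelope calculation, using only the figures quoted above, gives a sense of the scale of this experimental effort (the per-experiment numbers are averages and lower bounds, so the totals are rough):

```python
# Back-of-the-envelope totals from the figures quoted in the text.
# Each figure is prefixed by "more than" or "about", so these are estimates.
experiments = 5500          # independent experiments over 3 years
cpu_hours_each = 2000       # average CPU hours per experiment
error_data_gb_each = 10     # GB of error-analysis data per experiment

total_cpu_hours = experiments * cpu_hours_each          # 11,000,000
total_error_data_tb = experiments * error_data_gb_each / 1000  # 55 TB

print(f"Total CPU time: {total_cpu_hours:,} CPU hours")
print(f"Total error-analysis data: ~{total_error_data_tb:.0f} TB")
```

Eleven million CPU hours corresponds to several hundred CPUs running continuously for the whole 3-year period, which is why the dedicated high-performance infrastructure was a precondition for this research practice.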
The research never stops, and that is what the IBM ThinkLab is for. It is a bridge between the IBM Research laboratories and IBM's clients: a place where scientists and other IBMers work in close collaboration with clients to imagine their future, prototype new technology solutions, and coordinate pilot projects. If the IBM laboratories are considered the innovation engine of IBM's processes and technologies, the IBM ThinkLab is the innovation engine of IBM's clients. The first ever IBM Research ThinkLab opened in 2014 within the research headquarters in Yorktown Heights, NY.
The IBM ThinkLab is the place where industry pioneers typically come in search of IBM innovations and technologies that can enable their work; they want to go beyond the current boundaries of their industry and imagine what their future could be like.
The process carried out inside the ThinkLab is very interactive, combining digital and technical aspects with a participative approach. It is very engaging, putting together the best minds of IBM and the best minds of the clients, and exploiting their different backgrounds to generate the most innovative ideas.
The process does not stop at the creation of a new concept, but it goes further to make it real, with a
prototype and then a pilot phase.
IBM can rely on 3,000 researchers in 12 laboratories spread all around the world, and in each laboratory there is a space devoted to the ThinkLab. As each lab is specialized in a certain research area, clients can choose to go to any of the labs according to their needs rather than their geographical location. Reputation plays an important role here in making clients trust IBM and begin the innovation path; with 22 years of patent leadership, that is a simple task for IBM. The ThinkLab's work is classified into five categories, even though each project is highly customised to the client's needs: Yield Optimization, Dynamic Fulfilment, Computational Biology, E-Commerce and Personalized Learning. The lab works by invitation only, though companies can apply by email. The characteristics a company should have to be considered for a project with the ThinkLab are:
• A challenge that can't be solved through existing commercial capabilities
• A desire to experiment with advanced technology
• A goal to create market disruptions that fuel growth
• A willingness to take calculated risks
• A shared belief in the power of co-creation
7.3.2. Partnership
The requirements for innovation today are entirely different from those of the last 30 years. The technology-driven disruption model that brought the world computing, the internet, and mobile apps is no longer sufficient. For this reason, IBM has leveraged partnerships throughout the entire process, from those established even before Watson tackled Jeopardy!, to those commercially driven. Partnerships are often established to exploit the combined power of the two participating companies, so that both benefit from the relationship: it should be a win-win. For these reasons, IBM later developed a simplified process to launch new Watson-related partnerships. As a partnership is not a seller-buyer relationship but a two-sided one, where both companies have an interest in collaborating and growing, the first contact can be made either by the start-up or tech company interested in reshaping its business through cognitive computing, or by IBM, which sees in a certain company a good partner. The first case is easier, as anyone can enrol on the ecosystem website and become a partner by filling in a form. However, during the interview with Nicola Palazzo, it emerged that apart from the oncological case study, which was first proposed by IBM to the Memorial Sloan-Kettering Cancer Center (MSKCC), all the other implemented case studies and applications emerged as client requests.
Being aware of the benefits that a partnership can bring, and considering the fact that few companies will be willing to explore a new technology taking the entire risk on themselves, IBM developed a specific plan for partnerships: IBM PartnerWorld, a program run by IBM that offers resources and benefits to help channel partners promote and sell IBM products, including IBM Watson.
The benefits and support available through PartnerWorld are designed to help members across the sales cycle, whether they are resellers, consultants or integrators, independent software vendors, members of the cloud ecosystem, or all of the above. Registering in the PartnerWorld program guarantees access to an incomparable portfolio of solutions and to a global sales force that provides all the tools to exploit important world trends like cloud, big data & analytics, mobile devices, social business and security. IBM PartnerWorld promises growth, learning and profit.
As already said, partnerships have been a constant during the entire process, from the development of the basic technology, to the conception of the first commercial application, to the spread of Watson through many fields. Not all partnerships have the same dynamics and rules. According to their purpose, it is possible to classify the partnerships leveraged by IBM into three main groups:
1. Research partnership: this typology of partnership is usually medium to long term, and it is focused on the technology itself or, more frequently, on a particular aspect of it. The teams usually work side by side and there is a high level of disclosure about the technology's features and functionalities. The aim of this kind of partnership is both to exploit the knowledge and expertise of another company, usually a market expert or a research institution (e.g. a university) with its own laboratories and research facilities, and to share the costs of one of the most expensive phases of the innovation process, which happens when the risk of failure is still high. Research partnerships are usually more frequent in the first phase of the innovation; after the product has been introduced in the market, there can be research partnerships focused on deepening a specific aspect of the technology, for example to develop a new feature;
2. Commercial partnership: this typology of partnership has been the most frequently adopted by IBM, and the number of partnerships is increasing as the technology spreads. Considering that IBM is introducing a new, disruptive technology into the market, established companies willing to adopt Watson have to undergo a profound transformation of their processes and practices. Start-ups, on the other hand, might lack the resources needed to successfully tackle the market. In both cases, a partnership that offers support from a technical and commercial point of view is a safer alternative than going solo. Commercial partnerships differ in nature depending on the life-cycle phase of the technology: during the introductory phase, the involvement of IBM is higher because the technology has to be shaped around the partner's needs, while in a second phase, when the majority of the features are already available to the partner, the role of IBM in the partnership is mainly one of support;
3. Marketing partnership: this kind of partnership has the aim of spreading knowledge about a new technology outside the traditional sectors that might be interested in it. It is of fundamental importance, especially in the early phases of the commercialisation process, to raise awareness of a new technology among the highest number of potential users: the higher the awareness, the higher the likelihood of adoption. Companies can adopt different strategies to obtain this kind of result; in the Watson case, however, partnership has been a successful option.
From the research point of view, the already mentioned partnership with Carnegie Mellon University was very important: since 2008, CMU worked with IBM on the development of a first-of-its-kind open architecture that enabled researchers to efficiently collaborate on underlying QA capabilities and then apply them to IBM's Watson system. In 2011, eight more universities joined IBM researchers to advance the Question Answering (QA) technology behind the Watson computing system. The collaborations were announced a few days before the Jeopardy! quiz show was broadcast. Among these eight collaborations were non-US universities such as the University of Trento (Italy). After these first collaborations, many other universities joined IBM to learn about Watson and contribute to its development and diffusion.
Of a different nature were the partnerships established right after Watson's success at Jeopardy!. IBM started to understand the potential of the technology from a commercial point of view as well, but Watson's state of the art was still far from being commercially exploitable. A lot of research was needed to transform a QA technology into something useful that companies would adopt. During the interview with Roberto Villa, he explained how Watson took its first steps into the market: IBM looked for conditions that could be leveraged to create value. In the Watson case, these conditions were the availability of a large data set (big data) and the need to manage critical situations. With these premises, the natural candidate for Watson's first commercialisation attempt was the health sector. In 2011, IBM and Nuance Communications announced a research agreement to explore, develop and commercialize the Watson computing system's advanced analytics capabilities in the healthcare industry. As this was the first time IBM tried to turn Watson into a marketable application, they decided to take advantage of the ongoing collaborations with universities as well; in particular, Columbia University Medical Center and the University of Maryland School of Medicine contributed their medical expertise and research to the collaborative effort. The agreement between IBM and Nuance consisted of a multi-year research initiative targeting the application of the Watson technology to assist in the diagnosis and treatment of patients, in combination with Nuance's voice and clinical language solutions. In addition, IBM licensed access to the Watson technology to Nuance.
The first commercial application of Watson arrived in 2013, as the result of another partnership in the health sector. For more than a year, IBM had partnered separately with WellPoint and Memorial Sloan-Kettering to train Watson in the areas of oncology and utilization management. During this time, clinicians and technology experts spent thousands of hours "teaching" Watson how to process, analyse and interpret the meaning of complex clinical information using natural language processing. These results represented a fundamental milestone in the Watson innovation process and marked the beginning of the diffusion of the cognitive technology throughout many other sectors. In fact, from 2014, thanks to the choice of making Watson available through a cloud platform, the number of partnerships increased dramatically. For the same reasons that led it to the health sector, IBM started exploring the retail and financial sectors, but after those, partnerships proliferated in a number of application fields, even very far from the traditional ones. For example, in 2015 and 2016, IBM announced new commercial partnerships across a variety of industry domains and geographies: Media & Entertainment (Decibel), Energy (Arria NLG), Environment Protection (Green Horizon), Social Networks (Twitter), Sport (Formula 1), Hospitality (Hilton), Fashion (Marchesa), E-commerce (Yoox), Cinema (20th Century Fox), Retail (Macy's), Agriculture (Gallo Winery), etc.
It is important to highlight that the nature of the partnerships evolved with the evolution of the technology. Developing a commercial product in the early phases took more than a year of collaboration, as stated about WellPoint, but every achievement reached within a certain partnership has been beneficial for the following ones. For example, Roberto Villa described how quickly the collaboration between IBM and Yoox obtained the desired product by taking advantage of an API for image recognition that had previously been developed for an application in the oncological field, within another partnership. The possibility of exploiting this kind of synergy is one of the main advantages of developing a large partner network. In 2015, IBM announced it had established more than 270 partnerships in different application fields.
Finally, considering the marketing partnerships, it is interesting to learn about the collaboration between IBM and the culinary magazine Bon Appétit, which gave birth to Chef Watson. The app inspires home cooks everywhere to discover unexpected flavour combinations, to address everyday mealtime challenges in creative ways and to bring new ideas to the kitchen. Apart from the advantages that the use of Chef Watson can bring to the culinary world, this partnership had a more important implication in the short term: IBM wanted to make its Artificial Intelligence (AI) system more familiar to the world, to all the potential users not directly related to the technology field. For this reason, it partnered with Ogilvy Paris and chose to use Watson to create something that everyone can relate to: food. In practice, to bring awareness of Watson to a wider audience, they created a
challenge: Chef Watson had to come up with recipes whose four main ingredients always had to start with the letters E, A, T and S. "Food Art" was the result, a way to lure people into looking deeper into Chef Watson. The figure below shows a possible representation of the network of partners at that time of the commercialisation:

Figure 7.3 – IBM Watson Partnership Network in 2015

7.3.3. University Program

The university program offers faculty members and students a range of opportunities to work with Watson and engage with cognitive computing. The program is addressed to both students and teachers and has a double objective: to spread the use of the Watson technology, and to crowdsource new ideas and possible applications. It encourages participants to develop apps, build robots with the cognitive technology, or compete in hackathons.

IBM realized that professionals currently lack skills when it comes to cognitive systems, a factor that can slow the technology's diffusion. The need to form a competent and skilled workforce, able to spread the use of cognitive technologies throughout many different industries, is one of the reasons that pushed IBM to collaborate with universities not only in the laboratories, profiting from their knowledge, but also in the classrooms, getting in touch with students. The educational tools IBM provides for the university program are: full-semester courses on how Watson's cognitive technology works and how to build apps with it; case studies through which users can learn how Watson's cognitive technology can be applied across industries; and access to the Watson Academy materials, where users can enrol in online courses.
The university program also has the objective of finding new business opportunities where Watson can be applied. These might be applications in new, unexplored fields or new applications in established markets. To obtain this crowdsourcing effect, IBM offers students many opportunities for engagement: in-house hackathons and showcases, where students compete against each other to build prototypes of cognitive apps; the Great Mind Challenge, where they use Watson APIs to develop algorithms that most accurately identify correct answers within a given data set; and the Watson Case Competition, where student teams compete to develop and present the best Watson business case studies. The Great Mind Challenge (TGMC) is an annual nationwide software development competition created by IBM's Academic Initiative. The first-ever competition took place in India in 2004. It was created with a focus on the need to better educate millions of students for a more competitive information technology workforce, able to understand and utilize all the newest technologies. The program is realized by partnering with colleges and universities and lasts about five months. By its nature, TGMC perfectly fits the current needs of the IBM Watson program: its need to grow, to be adopted and utilized by many users, and to find new interesting applications. As of 2016, the competition took place in India, Israel, China, Ireland and Switzerland.
The first university to receive a Watson system was the Rensselaer Polytechnic Institute, in January 2013. The university was already an IBM partner in cognitive technology research, but this collaboration had a different objective: the arrival of the Watson system was intended to enable new leading-edge research at Rensselaer and to afford faculty and students an opportunity to find new uses for Watson and deepen the system's cognitive capabilities. The first-hand experience of working on the system would also shape students as future experts in the areas of Big Data, analytics, and cognitive computing. With 15 terabytes of hard disk storage, the Watson system at Rensselaer would store roughly the same amount of information as its Jeopardy! predecessor and would allow 20 users to access the system at once, creating an innovation hub for the institute's New York campus. Access to the Watson system was available to researchers and to graduate and undergraduate students.
One of the most important achievements reached thanks to a university program was the development of the first ever legal application powered by Watson. In December 2015, through a computer science course taught by Engels, Mario Grech and Helen Kontozopoulos, five teams from the University of Toronto, Canada, competed against each other in a challenge to develop an entrepreneurial, intelligence-based legal application using Watson's cognitive computing engine through its cloud computing system. One of the teams built a virtual legal research database named ROSS. Helping lawyers reduce research time is key to the functioning of ROSS, the students said. All teams were given access to Watson on the cloud, which allowed them to feed the computer program large amounts of text from Ontario corporate law decisions and statutes as reference material. Watson then processed that information, and the students' application, ROSS, made that data accessible to lawyers and legal researchers. The students ultimately came second in the competition, but they knew that, as a business idea, ROSS would be a winner. They created ROSS Intelligence Inc., secured funding from the start-up accelerator Y Combinator, and set up shop in San Francisco. ROSS is now being used by a range of law firms that pay monthly subscription fees to use the system; it is also used by solo practitioners who don't have the time or resources to hire human research staff.
Sometimes university programs have been used as innovation hubs to test new industry possibilities. This was the case for Watson's entrance into the fashion world: before the famous appearance of the Watson-powered dress designed by Marchesa for the 2016 Met Gala, IBM partnered with the Academy of Design Australia, which worked closely with the IEEE Women in Engineering (IEEE-WIE) group to create designs showcasing a range of lighting effects that transformed the look of the garments as well as the space, during an event with VAMFF's Cultural Program, Electronica!, at the beginning of the year.
7.3.4. Hackathon
A hackathon is a design-sprint-like event in which computer programmers and others involved in software development collaborate intensively on software projects. Hackathons typically last between a day and a week. In many cases the goal is to create usable software, and for this reason a hackathon is an incredible stream of crowdsourced ideas. It is a perfect event for companies that want to spread the use of a certain technology or expand the number of its applications.
Throughout the years, IBM organised a number of hackathons, involving an ever-growing number of participants. These events allowed IBM to explore the most diverse knowledge domains, proposing its cognitive technology to companies' employees, students, entrepreneurs, researchers, technology experts and so on. Not only did the contestants come from very different backgrounds and skill levels; the typology of hackathon itself changed many times according to the objectives IBM was pursuing in each specific situation. The variables that define the hackathon typology, according to the study of the Watson case, are:
1. Assignment: an open hackathon has very few constraints. The only suggestion given to the participants is that they have to come up with an innovative solution based on the Watson technology. In this case, the teams are completely free to imagine and create applications in the most disparate fields. Even if this type of hackathon can end up with a constellation of multiple innovative ideas, the main risk associated with it is that, without clear guidance, the teams might get stuck in endless discussions about the right topic to choose and the right idea to implement, and fail to properly conceive a product or service. On the other hand, sectoral hackathons certainly restrict the horizon of applications that can be invented, but they are for sure a better option when the company is interested in exploring a specific field, because they offer the contestants the possibility to develop focused and more detailed applications;
2. Contestants: it can be an open hackathon, where anyone can enrol and participate; a sectoral hackathon, where only industry professionals take part; or a hackathon targeting a specific group of people (e.g. students, start-up founders) instead of a specific application field. As with the hackathon's assignment, the choice among these alternative groups of contestants depends on the final objective of the competition. The most explorative ones are likely to be open to anyone, while, when a specific objective is set, the hackathon's attendants are chosen according to stricter constraints;
3. Duration: hackathons are usually short, intense events, 24-48 hours long, where participants are pushed to the end of their tether trying to finish their coding in time for the final pitch. Of course, in such a short time, teams do not have the possibility to develop an in-depth analysis of their idea or to write elegant code to embody their concept. However, precision and completeness are not the main objectives of this sort of hackathon, which is instead used to find inspirations, challenges and possibilities to pursue after the end of the competition itself. Different is the objective of hackathons that take place over longer periods, usually months; in these cases, participants are expected to come up with almost ready-to-sell concepts. The two typologies of hackathon are organised according to the life-cycle phase of the technology in a certain market: when a market has already been penetrated and the objective is to strengthen the company's presence in it, the longer hackathon type is preferable; otherwise, to explore a new market, short, intensive hackathons are the most suitable;
4. Location: hackathons can be held online or offline. Online hackathons are usually those lasting a few months, where the teams are free to work where they want and as much as they need. An online competition allows teams from different countries to participate, reaching a bigger and more differentiated knowledge domain; however, this might entail unstable and unpredictable results. Online hackathons usually also have a lower level of support from mentors and IBM experts, as the possibilities to communicate are reduced. Offline hackathons, instead, are usually organised events where many teams are brought together, working side by side for a few hours. The size of the hackathon can vary from a single room, to a conference hall, to a stadium with hundreds of contestants. Being in the same venue allows different teams to exchange opinions and suggestions, improving each other's outcomes; however, the risk is to obtain a high homogeneity among the proposals;
5. Prize: prizes can have a deep influence on the typology of contestants taking part in a competition. For this reason, prizes should vary depending on the objective of the hackathon. Usually, when the aim of the hackathon is explorative and the company is in search of inspiration and fresh ideas, prizes are non-monetary; this is because people outside the traditional technology network are more likely to be attracted by material goods or travel. On the other hand, if the objective is to challenge entrepreneurs to come up with a truly viable commercial idea, the prize is more likely to be monetary, and the amount is usually then used by the winners to launch their business. In some cases, even if the objective of the hackathon is explorative, the prize can be monetary when the company wants to attract professionals to participate.
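The five variables above can be summarised in a simple data structure. This is an illustrative sketch only; the class and field names are the author's own labels for the typology, not an IBM artifact:

```python
from dataclasses import dataclass
from enum import Enum

class Assignment(Enum):
    OPEN = "any Watson-based solution"
    SECTORAL = "constrained to one application field"

class Location(Enum):
    ONLINE = "online"
    OFFLINE = "offline"

@dataclass
class HackathonFormat:
    """One configuration of the five typology variables described above."""
    assignment: Assignment
    contestants: str          # e.g. "anyone", "industry professionals", "students"
    duration_hours: int       # 24-48 for sprint events, months-long otherwise
    location: Location
    monetary_prize: bool

# An explorative, market-scouting event maps to a short, open configuration:
explorative = HackathonFormat(
    assignment=Assignment.OPEN,
    contestants="anyone",
    duration_hours=48,
    location=Location.OFFLINE,
    monetary_prize=False,
)
```

Each hackathon described below can be read as one point in this configuration space, chosen according to the objective IBM was pursuing at the time.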
Below is a list of examples of the very different hackathons IBM has hosted during the commercialisation of Watson.

The first-ever Watson Hackathon was held on May 4-5, 2015, during IBM's World of Watson conference at the Duggal Greenhouse in Brooklyn. Working in collaboration with NUI Central (Natural User Interface), and guided by IBM experts and mentors, the hackathon brought together, for two days of coding, 32 talented teams of designers, developers and entrepreneurs to develop cognitive apps with Watson. A total prize pool of $25,000 was assigned for this hackathon. The winners of the first hackathon were:
• Likemind: an app that uses location and published tweets to connect nearby users based on
personality match and interest match scores. The app uses IBM Personality Insights API and
Alchemy API to make the initial matches.
• NYC School Finder: a tool that uses IBM Watson Personality Insights to analyse students' personalities and then finds schools with similar "personality" traits. The team used Watson Tradeoff Analytics to allow parents to compare schools;
• Fetch: a speech-to-text tool that can extract content from a PDF using the Alchemy API and produce an analysis of keywords, main concepts, etc. Queries of different content sources are run and the results are loaded into the app.
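Apps like these typically combined scores obtained from more than one Watson service. As an illustration, the sketch below mimics Likemind's idea of ranking nearby users from separately obtained personality-match and interest-match scores; the function names, weights and sample data are all hypothetical, since the real app's logic is not public:

```python
# Hypothetical blending of two per-user similarity scores, in the spirit of
# Likemind: one score from a personality-analysis service, one from an
# interest/keyword-extraction service. The 0.6 weighting is illustrative only.
def match_score(personality_sim: float, interest_sim: float,
                w_personality: float = 0.6) -> float:
    """Weighted blend of two similarity scores, each assumed in [0, 1]."""
    return w_personality * personality_sim + (1 - w_personality) * interest_sim

def rank_nearby_users(candidates: dict[str, tuple[float, float]]) -> list[str]:
    """Order candidate users by blended match score, best first."""
    return sorted(candidates,
                  key=lambda user: match_score(*candidates[user]),
                  reverse=True)

nearby = {"alice": (0.9, 0.4), "bob": (0.5, 0.95), "carol": (0.2, 0.3)}
print(rank_nearby_users(nearby))  # → ['alice', 'bob', 'carol']
```

With the 0.6 weighting, alice (0.70) edges out bob (0.68) despite bob's much stronger interest match, showing how the choice of weights, not the individual API scores, drives the final ranking.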
Key tools and documentation (e.g. webinars, a starter kit) were provided to the attendees in advance, in order to give them all the means to arrive fully prepared at the hack and make sure they got the most from the event.
Almost a year after IBM's inaugural World of Watson hackathon, a second hackathon was organized for May 21-22, 2016, in a premier state-of-the-art location, New York City's Pier 36. However, due to the overwhelming interest in the World of Watson conference and the hackathon, and to ensure every interested person could join, the event was moved to IBM Insight at the World of Watson in Mandalay Bay, Las Vegas, on October 23-27, 2016.
Other similar opportunities for programmers were the Watson Developer Challenges. There have been many events of this kind since the launch of Watson back in January 2014, each with a specific development focus. The first-of-its-kind global competition took place at the beginning of 2014 and lasted three months. Called the Watson Mobile Developer Challenge, it encouraged developers and entrepreneurs to create consumer and business apps built on IBM's Watson cognitive computing capabilities. The teams, spanning 18 industries from 43 countries, submitted more than 400 business concepts addressing a wide range of issues. The winners each received support from IBM to bring their concepts to market by joining the IBM Watson Ecosystem Program; they worked with IBM Interactive Experience, IBM's global consulting practice launched in 2014, which provides design consulting and business support from IBM experts to further develop and deploy their apps commercially. The winners of the challenge were:
• Genie MD: empowers individuals to take a more active role in managing their health by delivering a holistic view, making health data actionable and shareable. It can provide highly relevant and personalized medical recommendations, enabling family caregivers to access relevant patient data and be more effective and efficient, as well as facilitating better communication between individuals and their healthcare providers;
• Majesyk: F.A.N.G. (Friendly Anthropomorphic Networked Genome) provides an adaptive
educational relationship with a child and their parents. The first iteration is a cognitive, cuddly
plush companion, using Watson to provide a customized educational experience assisting
each child to develop through a series of contextual interactions;
• Red Ant: A retail sales trainer that lets employees easily identify individual customers’ buying
preferences by analysing demographics, purchase history and wish lists, as well as product
information, local pricing, customer reviews and tech specs. The app provides customized
selling points unique to that customer onscreen or via text-to-speech on an earpiece.
In 2016, the first Watson Developer Challenge took place from January 4th to February 12th, with the aim of focusing on the next generation of cognitive apps that leverage the power of the Watson Developer Cloud. The prize for this competition was a VR bundle (Samsung Gear VR + Galaxy S6), corresponding to a total amount of $3,000. In order to qualify, teams had to use a combination of two or more Watson APIs to build an original hack that the panel of judges found new and innovative. The winner of this first event was Giflì, a team that created an engaging visual display for oral presentations with no preparation needed.
In the second Watson Developer Challenge, developers had the possibility to build a conversational interface powered by Watson APIs, with the chance to win up to $6,750 in prizes. The Challenge started on February 29 and ended on April 15. The winners of this event were: Watson Dinner, which finds recipes by cuisine and ingredients through a chat with a conversational agent; Fetch, a food delivery service built on a conversational interface; and AssistMe, which gives access to several bots providing the user with everything from shopping to local news in a familiar and easy-to-use chat interface.
The XPrize competition is a slightly different event compared to the other hackathons IBM hosted. It is organised by the XPrize Foundation, a non-profit organisation whose purpose is to incentivise radical technology innovations in multiple fields. Its coordinators define it as an innovation engine, a facilitator of exponential change and a catalyst for the benefit of humanity. It is a big event, lasting many months, sometimes years, with a huge monetary prize. The level of the proposals is expected to be higher than usual, and both skilled professionals and students are expected to participate with concrete, ready-to-sell concepts. Large companies with ambitious goals in mind can contact the XPrize Foundation to sponsor a competition built around a compelling challenge. And this is what IBM did in 2016, when it started to organise its own XPrize competition.
The IBM Watson AI XPRIZE is a $5 million AI and cognitive computing competition challenging teams globally to develop and demonstrate how humans can collaborate with powerful AI technologies to tackle the world's grand challenges. The prize focuses on creating advanced and scalable applications that benefit consumers and businesses across a multitude of disciplines. The solutions will contribute to the enrichment of the tools and data sets available to innovators everywhere. The goal is also to accelerate the understanding and adoption of AI's most promising breakthroughs. The IBM Watson AI XPRIZE is a four-year competition with annual milestone competitions in 2017 and 2018, where the teams will be evaluated for the opportunity to advance to the next round. The top three finalists will compete for the Grand Prize at TED 2020, where they will take the stage to deliver jaw-dropping, awe-inspiring TED Talks demonstrating what they have achieved. The teams will also have the option to compete for two milestone prizes. Differently from the majority of XPrizes, where few teams join the competition, the IBM Watson AI XPRIZE mustered 142 teams (683 pre-registered). The teams, selected by independent judges, come from 21 different countries around the world; 10 of them are Italian start-ups. On the website it is possible to follow the progress of every team through direct links to their websites.
Cognitive Build is an initiative IBM set up to involve its employees in the innovation process through enterprise crowdfunding. As Roberto Villa said during the interview, the idea was born as a solution to the compelling need of teaching around 400,000 employees around the world what a cognitive system is, how it works and what it does. As Villa explained, the technological pace does not allow traditional teaching mechanisms to fulfil this task and, moreover, they are hardly implementable with such a number of geographically dispersed people. With the aim of making its employees expert Watson users, capable of selling the technology to clients, the IBM team in charge of people enablement organised the Cognitive Build. Roberto Villa defined it as a mix of gaming, design thinking, open innovation and contamination. The valuable applications that resulted from the competition can be considered a sort of positive externality of the process: "Applications development has been the consequence, not the premise of the initiative. The trick was to create the right conditions for the magic to happen, instead of forcing people to do something they might dislike, or impose a top-down lesson in class rooms." (R. Villa).
The resulting enterprise crowdfunding is a type of crowdfunding in which participation is restricted to the firm's employees. This initiative has its foundation in a theory called the wisdom of the crowd, whose main assumption is that a large, diverse group of individuals, all with unique experiences and knowledge, will on average make a better decision than an individual expert. The theory is made even stronger by the fact that a firm's employees are those who know the products and their features best and, consequently, they get a head start in developing innovation concepts.
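The wisdom-of-the-crowd assumption can be illustrated with a small simulation (a hypothetical sketch, not part of the thesis or of IBM's platform): when many independent, noisy estimates are averaged, the aggregate tends to land far closer to the true value than a typical individual estimate.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_VALUE = 100.0  # the quantity being estimated (illustrative)
N_PEOPLE = 1000     # size of the "crowd"

# Each person produces an independent, noisy estimate around the true value.
estimates = [random.gauss(TRUE_VALUE, 20.0) for _ in range(N_PEOPLE)]

# Error of the crowd's average vs. the error of a typical individual.
crowd_error = abs(sum(estimates) / len(estimates) - TRUE_VALUE)
avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / len(estimates)

print(f"Crowd average error:      {crowd_error:.2f}")
print(f"Typical individual error: {avg_individual_error:.2f}")
```

With independent, unbiased errors, the error of the average shrinks roughly as 1/√N, which is why the crowd beats the typical individual; the effect weakens when estimates are correlated, for example when employees share the same biases.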
Each employee who had an idea for improving company performance or for tackling business challenges could submit a project and had the opportunity to share and explain how the idea would help the company. Each employee was actively involved in the process, and consequently in its success, as they received some money to invest in the projects they preferred and were asked for feedback to improve the chosen proposals. The projects with the biggest support from the investors got full funding, with a double benefit: the employees felt valued, and the company was able to exploit its internal resources. Cognitive Build was a way to flip the traditional approach to innovation: instead of giving a substantial sum of money to a small group of experts to conduct market research and identify new ideas, the company was able to gather many ideas from employees and put the best ones immediately into action.
In addition to all the benefits mentioned above, Cognitive Build had the effect of getting C-level managers directly involved in the Watson transformation.
The IBM Cognitive Build steps were:
1. The process takes place on the Cognitive Build website, where employees have to log in with their IBM account to access the contents. Employees can submit a proposal, become investors or both.
Note: any IBMer who has logged in to Cognitive Build or completed the Advancing Cognitive Build learning on Think Academy can be an investor and will automatically receive funds to invest in projects.
2. Once a project has been uploaded and approved on Cognitive Build, it is moved to the IBM ifundIT platform (registered in 2016), where the team starts raising funds. Each investor is given an amount of $2,000 that can be split among the different projects they want to support. The maximum amount one can spend per project is $1,000, the minimum is $1. It is not possible to invest in one's own project, and it is not allowed to contribute to a project with extra private funding.
3. At the end of the funding campaign, the ideas that receive the most support move to the Outthink Challenge phase. During this phase the team might continue the work itself or pass it on to another team to turn it into a product or public service. That depends on the idea, the team's skills, and on whether an executive inside or outside the team's department picks up the project; for this reason, it varies on a case-by-case basis.
4. The teams are then given three weeks to create a prototype of their idea and a business plan.
5. Following the prototype phase, there is an in-person demo to IBM executives in a selected venue. A maximum of two members from each team, coming from all around the world, travel to the demo and pitch in person. Before the final pitch, the teams have the possibility to receive hints and suggestions from coaches and mentors.
6. The IBM executives select the winners.
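The funding rules described in step 2 (a $2,000 budget per investor, between $1 and $1,000 per project, and no investing in one's own project) can be sketched as a small validation routine. The function and field names below are illustrative assumptions, not IBM's actual ifundIT implementation:

```python
# Investment rules from step 2; the dollar constants come from the text,
# while names and data structures are hypothetical.
TOTAL_BUDGET = 2000
MAX_PER_PROJECT = 1000
MIN_PER_PROJECT = 1

def can_invest(amount, investor, project, already_spent, already_in_project):
    """Return True if `investor` may put `amount` dollars into `project`."""
    if investor in project["team"]:
        return False  # rule: no investing in one's own project
    if amount < MIN_PER_PROJECT:
        return False  # rule: at least $1 per investment
    if already_in_project + amount > MAX_PER_PROJECT:
        return False  # rule: at most $1,000 in any single project
    if already_spent + amount > TOTAL_BUDGET:
        return False  # rule: overall budget of $2,000 per investor
    return True

# Example: an investor outside the team, with $1,000 already spent overall
# and $500 already in this project, adds another $400.
project = {"name": "example-project", "team": {"alice", "bob"}}
print(can_invest(400, "carol", project,
                 already_spent=1000, already_in_project=500))  # True
```

The rule that a project cannot be topped up with extra private funding is enforced here implicitly: only budgeted virtual dollars ever pass through the check.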
Looking at the Cognitive Build challenge as a whole, it is possible to identify four main phases, represented in the graphic below:
Cognitive Build features:
• A Cognitive Build Help Forum is available to help employees understand the Cognitive Build logic.
• Cognitive Build is available only to IBMers, but the funding process is supported by the IBM ifundIT platform, which is also available to external clients.
• Each team is required to add an image and a team biography, and has the option to add a video to better present the idea to investors. The promoting phase, in fact, is entirely up to the team.
• The ownership of each project belongs to the team, which is expected to be actively engaged in the entire process.
• There is no target goal for projects to reach in order to access the Outthink Challenge.
• While considering the different possibilities in which to invest, it is possible to "like" projects to save them in a personal list of favourites.
• The objective of the investors should be to make projects come to life. At the end of the process, the company might make money thanks to a project, but the investor will not make any financial profit.
• As many projects are similar or very complementary to each other, teams can consider reaching out to other teams and working together, gaining additional contacts, expertise, resources and exposure.
Figure 7.4 – Cognitive Build Phases

The first Cognitive Build edition dates back to 2016 and was a huge success. In Roberto Villa's opinion, this was a one-of-a-kind event that will not be replicated in the future, as its main purpose, educating employees about the Watson technology, has already been fulfilled. However, the success of the event suggests organising new internal contests, using the same structures and
tools adopted for Cognitive Build, which are still available to all the employees. Moreover, managers
said they expect to see the Cognitive Build turned into an offering in the near future to help other
companies drive innovation and engage employees.
Below are the facts and figures IBM chose to share on its website regarding the first Cognitive Build:
• Objective: turn IBM into the world's first Cognitive Business by driving a culture of innovation.
• Participants: over 275,000 IBMers from every business unit and every country, divided into 8,361 teams.
• Number of cognitive ideas proposed: 2,704.
• Examples of tackled subjects: data security, air quality monitoring, anti-bullying, social banking.
• Adopted approach: a combination of IBM Design Thinking and agile sprints, following the "Think.Prepare.Rehearse." principle.
• Invested amount: 291 million dollars through ifundIT.
• Number of finalists who undertook the Outthink Challenge: 50.
• Selected venue and date for the global Outthink Challenge: 2-3 May, Austin, Texas.
• Number of teams the judges chose for the final showcase: 8.
The three teams that were awarded prizes were:
1. IBM Music Machine: helping record labels and artists decide when to release which songs
based on probability and myriad data sources;
2. Terminuter: a cognitive meeting minutes application which can categorize items and interact
with participants to boost productivity;
3. Pino: addressing medical conditions such as autism with cognitive-powered and personalized
verbal prompts.
Cognitive Build had the effect of unifying all the employees, at all hierarchy levels, around the same purpose. Philipp Gerbert argues that technology-enhanced strategy can be realised only in the context of an integrated strategy machine: a collection of resources, both technological and human, that act in concert to develop and execute business strategy.
8. Case Study Analysis: Details and Dynamics of Watson's Innovation Process
8.1. Introduction
The body of this research is articulated around a case study, which, through its mechanisms, can help
unveil the intricate dynamics and rule that companies nowadays has to follow while developing and
launching a new, disruptive General Purpose Technology. In fact, Watson is classifiable in the group
of General Purpose Technology as it perfectly meets the GPT definition given by Bresnahan and
Trajtenberg in 1996. A General Purpose Technology should possess the following characteristics:
1. Pervasiveness: the GPT should spread to most sectors, and Watson has already proved its applicability to a high number of sectors;
2. Improvement: the GPT should get better over time and, hence, should keep lowering the costs of its users. Watson's capabilities are improving over time, and low-cost applications have already been developed;
3. Innovation spawning: the GPT should make it easier to invent and produce new products or processes. The Watson technology is providing enormous possibilities of innovation in every sector, granting companies access to improved performance and processes. The resort to the ecosystem and to the open Bluemix platform is making it easier to invent and produce new products or processes.
Watson has, in addition, the peculiarity of being a digital technology, a characteristic that opens up a vast series of managerial and strategic implications.
One key feature in the success of a new technology is its degree of diffusion: the higher the diffusion, the higher the success. Throughout the years, many different strategies have been undertaken by companies trying to get their products into the market; however, quite frequently, technological innovation comes with the need for business model innovation. Companies need to deeply understand both the external context and the technology features to identify the strategy that best supports its development and commercialisation.
The case study analysis has the objective of identifying the distinguishing traits of the IBM Watson development and commercialisation strategy, in addition to a series of factors that have enabled the strategy's implementation, in order to trace guidelines for other practitioners dealing with the diffusion of General Purpose Technologies.
8.2. Technology Steering
In the second chapter, the discussion of the theory of design-driven innovation allowed the introduction of the technology steering concept. Technology steering has been defined as the process through which a company can identify all the different applications a technology can enable. General Purpose Technologies inherently have many different applications and, for this reason, technology steering is a natural and inevitable process for all of them. However, there is an important aspect to remark: the level of awareness and the degree of guidance a company puts into the steering process. Almost all the General Purpose Technologies that have disrupted the economy in the past, from the discovery of iron to electricity, achieved their success through the independent actions of many individual market players. Frequently, the real potential of the technology remained unknown: the companies adopting it were looking to solve specific tasks and did not even try to understand what other possibilities the technology could have unveiled. A first level of awareness has been experienced with the more recent wave of General Purpose Technology licensing. However, as explained in the fourth chapter, the process that led to this routine followed an inverse path. Companies developing molecules for drugs understood that very specific products were bringing them too little revenue from licensing, as few licensees would pay for the exact same product. For this reason, they adapted their product to the business model, developing more general molecules that licensees could employ in many different ways. The comparison between the technology licensing business model and the technology steering model paves the way to important literature implications concerning the commercialisation of General Purpose Technologies. In fact, it is possible to highlight one remarkable difference between the two. This difference lies in the fact that, in the first case, the involvement of the company that owns the technology ends with the development phase, while with technology steering the owner company participates actively in the integration of the technology into marketable products. With the licensing business model, everything from the concept to the commercialisation of a technology application is the responsibility of the licensee, and the owner company has little leverage, and little interest, in deciding what kinds of applications its technology is suitable for. Once the development phase of a certain technology comes to an end, the company undertakes a passive role, benefiting from the revenue stream brought by licensing and, occasionally, investing in the development of other technologies. On the contrary, technology steering sees the owner of the technology on the front line of the commercialisation process, striving to identify the most profitable applications, to discover new market possibilities and to integrate the technology into the best-performing products.
The behaviour of a company trying to steer a technology has not been deeply investigated until now, and the Watson case study made it possible, for the first time, to analyse the complex dynamics of the technology steering process from the point of view of a single company striving to identify all the possible applications the technology can embed. The steering model offers many valuable hints suggesting it could take over as the preferred commercialisation strategy for General Purpose Technologies, surpassing the current licensing model.
An important aspect of the diffusion strategy is the level of disclosure the company agrees to have with the external world. In practice, when a new technology is developed, the owner company has to decide whether to keep the mechanisms and functions of the new technology secret or to disclose them to the general public. The choice of the best strategy should depend on the degree of appropriability of a certain invention: the lower the degree of appropriability, the lower the openness related to the technology. As already discussed in the literature chapters, when a technology has a general purpose, meaning it is adaptable to many different fields and applications, the appropriability constraint is less tight, because the company has many possibilities to generate revenue streams. This is one of the main reasons why, in the Watson analysis, it is possible to highlight many different situations throughout the innovation process in which the company chose a high level of disclosure, giving users easy access to all the information related to the technology's features and functionalities.
Figure 8.8 – Company’s Involvement in the Technology Licensing and Technology Steering Process
(adapted from Verganti, 2009)
Regarding the innovation process the Watson technology went through, it is interesting to note that it followed a longer and symmetric path compared to the traditional innovation funnel. Traditionally, companies used to test different options before choosing the most promising one, the one from which they could extract the highest revenue stream. It is a convergent process. In practice, in the initial stages of development, a set of ideas is collected and goes through a refinement process. After refining, a few ideas are left which the company implements; the ideas that remain are combined to come up with a new concept. This is the end of the traditional innovation process: the ultimate goal is to create a new concept. In the case of the "market for technology", the process converges even before the birth of a new concept; it ends with the final development of the new technology. This is what happens with technology licensing.
However, in the case of technology steering, the objective is to come up with a myriad of new concepts, new products and services enabled by the new technology. For this reason, the innovation process does not end with the development of the technology; instead, that development represents the milestone marking the beginning of a brand-new phase of the process. This new stage has, as already said, an explorative aim: the company is looking to find all the different applications the new technology can give birth to. Every new application developed, whether in an already penetrated field or in a new one, has the effect of enlarging the company's portfolio. This process of adding new applications over time can be represented as an inverted funnel; it has a divergent nature. A direct measure of the divergence can be found in the analysis of the ecosystem growth: in a single year, from 2016 to 2017, the number of applications populating the ecosystem almost quadrupled, increasing from around 680 to more than 2,400.

Figure 8.9 – Innovation Funnel (adapted from Philips J., 2011)
Considering Watson's history, the company went through an initial phase of technology development in which researchers struggled to identify, among a set of technologies, which one would best provide the desired performance. In fact, as described in the sixth chapter, in the early stages of the research, at least until 2006, the team's efforts failed to produce promising results. Only after having investigated many unsuitable alternatives had the team, by the end of 2007, gradually made a set of promising choices, like adopting the DeepQA architecture, which was an important milestone of the convergent process. The identification of the most appropriate technology and the development of Watson's core functionalities mark the end of the traditional convergence phase. In a second phase, IBM started to define the marketable options Watson could make possible. As Roberto Villa said during the interview, the exploration of Watson's possibilities started from the health industry, because it presents some characteristics that perfectly match the technology's capabilities:
“The health industry has a multitude of data to analyse (medical records, radiographies, etc.), these
data are typically out of control (patients’ medical history is often unknown), new data and discoveries
are continuously generated and it is impossible for doctors to keep up with all the new material, they
are not able to offer their patients the most advanced techniques because they do not know them. The
result is that there are very few excellences in the world, the majority of health services are mediocre.
Thanks to Watson, it is possible to take the excellence and, through the technology, make it available
to anyone”.
Even though some industries possess characteristics that are particularly suitable for the application of Watson, its potential has not been limited to these few, obvious market options. Watson is a technology that can be applied to almost every existing field; its generality of purpose makes it suitable for integration into an almost unlimited number of products and services, and this is mirrored by the strong divergence of its explorative phase.
8.2.1. Dynamics of the Diverging Phase
As with the traditional converging phase, the divergence level also depends on a time factor: the quicker the company is able to find valuable applications for the technology, the wider the divergence. In the Watson case, the first phase of the divergence process saw the company focusing its efforts on the most promising markets, the natural options for the application of the Watson technology. As already said in the seventh chapter, during the interview Roberto Villa explained how Watson took its first steps into the market: IBM looked for conditions that could be leveraged to create value. In the Watson case, these conditions are the availability of a large data set (big data) and the need to manage critical situations. With these premises, the natural candidate for Watson's first commercialisation attempt was the health sector. In 2011 the company announced its first commercial partnership. In this early stage of market exploration, IBM was building partnerships through licensing contracts. The development effort was still extremely intense, and the number of partners grew slowly in the first three years after the Jeopardy! victory. After this first period of technology consolidation, the company started to explore new possibilities and to enlarge its market horizon.
the market horizon. They decided to revolutionize the business model associated to Watson, adopting
a cloud platform. Differently from the licensing, the platform was a tool anybody could easily have
Figure 8.10 – Convergent and Divergent Phases of the Innovation Process
174
access to, even with a small budget at their disposal. The company also decided to create an ecosystem
where users can share their knowledge, supporting the diffusion process enabled by the platform. The
development of the platform signed a fundamental milestone of the divergence process. From that
moment on, the divergence process took a different pace, it accelerates bringing to the quickly
creation of many new Watson applications.
Through the analysis of more than 300 sources, including websites, online articles and press releases, it was possible to reconstruct the chronological evolution of the technology's market expansion. Among all the analysed sources, around 130 were successfully employed to build Watson's timeline. These sources, used during the analysis, are mainly ascribable to the following four categories: IBM press releases, official company websites, sectoral business and technology websites (e.g. Forbes, edTechMagazine) and general press (e.g. Corriere della Sera).
Through a careful analysis of the publication dates, or the dates reported in the articles' bodies, it was possible to create a chronological history of Watson's evolution, which is reported in the timeline below. In less than six years from Jeopardy!, Watson was able to establish its presence in more than 30 industries, continuing to inject new applications into every penetrated market almost every year.
Figure 8.11 – Sources for Watson Chronicle Evolution
Figure 8.12 – Watson Steering: Application Development in Many Different Fields
It is interesting to analyse the strategy IBM implemented to steer the Watson technology. Apart from the health sector, which is the first industry Watson explored and the most permeated, with many fully developed applications, the other markets have only been approached. This means that the majority of the investments IBM put in place were devoted to the investigation of new, unconquered industries. Instead of developing new applications for the industries already tackled, to obtain a deeper impact on them, the company diverted its efforts towards new potential markets. This does not imply completely neglecting the markets already approached, which still see the development of at least a few new applications almost every year. As the evolution timeline above shows, Watson accomplished a quite astonishing result in a very short time. This leads to a conjecture about the strategy the company is pursuing. The company's strategy of approaching many markets instead of focusing on a few is probably intended to last for the early stages of the technology's commercialisation. Once the expansion effort is over, the company will focus its investments on actually penetrating one or more of the most promising approached industries. Even though the validity of this conjecture will only be proven once the evolution comes to an end, it is possible to identify some clues in its favour. It seems reasonable that the reason why IBM is moving so fast to approach many markets in the early phase relies upon the nature of the Watson technology itself
and on the business model adopted. In fact, two prominent phenomena are triggered by these factors and fit well with the above-mentioned strategy:
1. Interdisciplinarity: as already said in the fourth chapter, the arrival of a new GPT does not immediately translate into higher productivity; instead, the initial impact of a GPT on overall productivity growth is typically minimal, generating a slowdown phase. Helpman and Trajtenberg (1998) argue that the slowdown is caused by an initial lack of complementary inputs. During the interview, Roberto Villa highlighted a new element characterising today's businesses that is important to take into account when considering the development of complementary inputs: interdisciplinarity. As he stated, until a few years ago industries followed silo dynamics, while nowadays businesses are driven by mutual contamination, and growth often relies on companies' ability to extract value from many different fields. A key advantage of operating across markets is obtaining expertise and applying new and best business practices across industries, which can be considered interdisciplinary positive externalities. By learning from best practices and sharing ideas across different markets and business cultures, the company can create synergies among different industries that lead to a faster development of the technology's complementary inputs. An interesting example of this synergic effect is offered by the Yoox case, which Roberto Villa described during the interview. Yoox needed to develop a cognitive system able to independently process a large number of clothing images and classify them into pre-assigned sales categories. The company had the possibility to exploit an API for visual recognition developed by a team working on oncological research. The interdisciplinarity enabled by Watson allowed a quick answer to the customer's needs, speeding up the innovation process. Villa said: "The learning context originates from the API's practical usage";
2. Ecosystem effect: the ecosystem, supported by the IBM business model, is able to nourish
itself, connecting other business to the technology. It is possible to find many references to
this effect in literature. Back in the 1996, Moore’s defined business ecosystem as an “extended
system of mutually supportive organizations”. IBM knew that by relying on traditional
internal processes and practices for R&D and innovation, they could not realize the fullest
potential of Watson breakthrough, they could not create and capture a significant portion of
that value. Advances in technology, especially digital technology, and the increasing role of
software in products and services, are demanding that large, successful organizations increase
their pace of innovation and make greater use of resources outside their boundaries (B. Power,
2014). This means internal R&D activities must increasingly shift towards becoming
177
crowdsourced, taking advantage of the wider ecosystem of customers, suppliers, and
entrepreneurs. The idea of a business ecosystem is thought to help understand how to thrive
in this rapidly changing environment. The ecosystem objective is the creation of a platform
where many actors are connected by the use of a certain IBM technology that is key for their
business success. In this way IBM creates a network of companies to whom easily provide
support and education, a network where companies can help each other in the transition to the
new technology through forum, a community that is easy to identify for other potential
members, as well as for customers. It is a sort of window for the world, as anyone can become
aware of its existence and can become part of it. Moreover, according to Iansiti and Levien
(2004), a critical success factor of a business ecosystem is its ability to create niches and
opportunities for new firms. The fact that the ecosystem acts and grows with a certain level
of independence, without the continuous intervention of the technology provider, makes this
model particularly suitable for early-stage expansion. It is logical to assume that every
time a new application field enters into contact with Watson, it is then affected by ecosystem
effects. Even though in the course of this research it was not possible to define the exact
evolution of the Watson ecosystem, from the data available on the website, it was possible to
highlight a considerable increase in the number of companies belonging to the ecosystem:
from 2016 to 2017, the number of applications has grown from around 685 to more than 2400.
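To put that increase in perspective, the two application counts quoted above can be turned into a simple growth calculation (a rough sketch; the figures are the approximate website snapshots cited in the text, not an official IBM breakdown):

```python
# Rough growth estimate for the Watson ecosystem, using the two
# approximate application counts reported above.
apps_2016 = 685
apps_2017 = 2400

growth_factor = apps_2017 / apps_2016   # how many times larger
growth_pct = (growth_factor - 1) * 100  # year-over-year percent growth

print(f"growth factor: {growth_factor:.1f}x")        # about 3.5x
print(f"year-over-year growth: {growth_pct:.0f}%")   # about 250%
```

Even as a back-of-the-envelope figure, a roughly 3.5-fold expansion in a single year supports the claim that the ecosystem nourishes itself without continuous intervention by the technology provider.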
Summing up what has already been said about technology steering: to steer a technology, a company
has to face a divergence phase which follows the traditional convergent innovation funnel. Moreover,
it is possible to highlight that the strategy IBM is pursuing in the early stages of the technology
commercialisation has a mainly explorative character, meaning that the company is tackling many
different industries instead of focusing on a specific one to exploit. Note that, as discussed above,
the explorative character of the present IBM strategy does not entail neglecting the markets already
approached, but rather that their exploitation is not of an intensive nature. It is interesting, though, to take a
look at how the divergence phase has been pursued until now. To explore the market, four different
activities, whose details have already been discussed in the seventh chapter, have been put in place:
research projects, partnerships, university programs and hackathons. These four activities have
been adopted by IBM simultaneously, leveraging each of them to support different company
objectives.
From the case analysis, it was possible to assess the different strategies in order to identify the
situations in which they have had the biggest impact. In particular, the analysis focused on identifying
the activities that acted as enablers in the development of the first application in a completely new
industry, or the enablers in the development of the second application in an already approached
industry.
What emerges from the analysis is that the initiative that has led to most of the application
developments is the partnership, making it the most successful and secure option. However, in the case
of the development of the first application, the contribution of two other initiatives, university
programs and hackathons, equals that of partnerships. This fact has important implications for
the strategy a company should adopt while exploring a technology’s possible applications. In
fact, even though partnerships are certainly the safest option to choose, as both companies
involved in the agreement have a strong interest in achieving their purposes, at the same time
they are the most costly option, as many resources have to be devoted to achieving the expected results.
Moreover, very often, partnerships are established with traditional market players: it is hard to exit
the usual industry boundaries to partner with some unexpected product or service provider. For these
reasons, in order to fully explore the technology’s possibilities while keeping an efficient use of the
company’s resources, it can be useful to flank the traditional partnership approach with other
strategies. Grounding this theory in the case study and looking at Watson’s evolution, partnerships have
been mostly used at the beginning, for the launch of Watson, because there was still a lot of research
needed to be able to create viable Watson products; moreover, partnerships lowered the initial risks.
Hackathons and university programs, instead, were mostly used to get inspiration about new possible
markets to explore. In fact, hackathons and university programs are seen by companies as low-cost,
valuable mechanisms for idea crowdsourcing. The cheapness of these initiatives is a
fundamental aspect of the business model, as the likelihood of identifying truly valuable ideas or
applications increases with the number of events. As explained in the seventh chapter, these events
can carry different levels of constraints, a variable that strongly influences the nature of the outcome.
For creativity’s sake, an open delivery, which attracts many diverse participants, is the most suitable
option. This kind of event can be adopted by a company if it has no clue about where the technology
can aim, or if it is simply open to all possibilities. Constrained deliveries can be used to test the
viability of a specific segment. Hackathons and university programs have been a good solution during
Watson’s commercialisation to explore many different market options, keeping a low budget that
allows the efficient concentration of the company’s resources on the more substantial partnerships. As
for the second developed application, instead, the analysis of the Watson case shows a
complete predominance of partnerships as the preferred way to consolidate the company’s presence
in a market. While startups are usually involved through hackathons, partnerships are frequently
established with big market players. Examples of the most important partnerships in which IBM took
part in the last few years are: Lufthansa, Johnson & Johnson, Hilton, Yoox, Honda, Visa, Macy’s, 20th
Century Fox, Condé Nast, Capgemini, Technogym, Cisco and Toyota. Partnering with big market
players has two main advantages: influencing a large share of customers, and obtaining a strong
marketing echo.
As for the research activities, what emerges from the Watson analysis is that during the
divergence phase, research is almost always exploited within partnership agreements. In particular,
research has been strongly leveraged within the first partnerships established. In fact, these first
collaborations were characterised by the refinement of the rough technology in order to fit it into
marketable applications. Later on, research has been devoted to the development and
improvement of specific APIs, in order to meet customers’ needs.
In conclusion, following the IBM technology steering strategy, it is possible to draw the following
division:
1. Partnerships are the preferred strategy in the early stages of technology steering, usually when
the selected market is a natural candidate for the technology application, or when the
company needs to consolidate its position within an industry;
2. Hackathons and university programs are useful to test unusual and unexplored industry
possibilities. The main difference between these two initiatives is the duration, which is
usually longer in the case of university programs;
3. Research is mainly leveraged at the beginning of the divergence phase, usually, jointly with
partnerships. Its main purpose is to refine the technology and tackle those weaknesses that
can threaten its successful diffusion.
The results just presented about the IBM technology steering strategy are the outcome of the in-depth
analysis conducted on Watson’s chronological diffusion, which, as already said, was built by
investigating more than 130 articles and reports on the web. Thanks to the timeline analysis, it was
possible to obtain useful insights not only regarding the strategy the company adopted to tackle
different application fields, but also to identify what kind of strategy IBM followed within a given
field to subsequently develop many applications. The research carried out, however, was not meant
to be exhaustive of all the developed applications in a given field; instead, it was meant to be a
useful tool to understand how Watson moved its first steps during the market exploration, in every
field. For this reason, every time the research found that a field had been approached for the first time,
meaning a first application had been developed, a focused search was conducted to uncover whether
other applications had been developed every year since the first one. Practically speaking, this
analysis offered the possibility to verify whether IBM maintained the same strategy during the exploration of the
different fields. This question has just been answered in the previous paragraph, where the differences
among the four activities IBM implemented during the exploration have been stated. Moreover, the
chart below highlights the percentage of application fields that were tackled, for the
first time and for the second time, with each of the four activities:
Figure 8.13 – Contribution of Different Activities to the Development of Firsts and Seconds Applications per Field
8.2.2. Characteristics for Implementing Technology Steering
Thanks to the analysis of this case study, it was possible to identify new dynamics of General
Purpose Technologies’ development and, mainly, of their commercialisation (the integration of the
developed technology into marketable products).
Talking about technology commercialisation, it is possible to dig further in order to identify
differences between the technology licensing and technology steering models. Quickly recalling the
“Markets for Technologies” concept, already discussed in the fourth chapter: they are defined as an
innovative practice firms adopt when they sell rights to their intellectual property rather than
directly commercializing products and services based on their knowledge capital themselves. The
dominant business model in markets for technologies is based on the idea of developing a technology
for licensing to downstream specialists. This business model is becoming popular also in
commercializing GPTs, because the fact that they are constructed in a way that can be employed by
different potential downstream licensees makes licensing particularly profitable.
It is possible to identify three main market and technology characteristics to successfully implement
technology licensing:
1. Market fragmentation: the higher the fragmentation (geography, industry, etc.), the higher the
licensor’s profit appropriability;
2. Company size: the bigger the licensor, the higher its bargaining power and the possibility of
making a profit;
3. Technology applicability: the wider the number of the technology’s applications, the higher the
profitability from licensing (overcoming profit stifling due to spillovers).
In the study of the Watson case, it was possible to validate two out of three of these characteristics
also for the technology steering model. Being a General Purpose Technology, Watson can find many
applications (3) in many different industries, and in doing so it obtains the necessary market
fragmentation (1). As for company size, IBM is more than a large company, it is a
colossus of the industry, and it could certainly have leveraged this to increase its bargaining power;
however, in the IBM technology steering model, bargaining power does not play a relevant role,
as one of the principles on which the model is built is to be available to anyone, at a fair price. This
principle is the exact opposite of what licensing does: the licensor wants to find those élite companies
that can afford to pay a high fee in order to access the technology. In the technology steering model,
the owner company wants to make the technology accessible to the highest possible number of users.
For these reasons, it is possible to transform the second characteristic, in the case of technology
steering, into technology accessibility: the more accessible the technology, the higher the number of
users and, consequently, the higher the profit.
Thanks to the study of Watson, it was also possible to identify other characteristics of the technology
that are fundamental to successfully apply a technology steering strategy:
1. Technology scalability: the technology’s ability to change its scale in order to meet growing
demand;
2. Technology adaptability: the technology’s ability to adapt in order to match a specific client’s
needs, being easily adjustable or extendable by adding features.
In the case of Watson, IBM obtained scalability by resorting to the cloud platform: users pay
just for what they are actually using, with the assurance of a quick and easy performance or storage
increase in case of demand peaks. Technology adaptability, instead, is a feature inherent to the
way applications are built. As explained in chapter six, the building blocks of any cognitive
application are the APIs. Users can purchase only the APIs they need, but they can later add new APIs
if they want to modify their offering. Besides, new APIs can be created ad hoc for
any specific need.
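The building-block pattern described above can be illustrated with a minimal sketch. Note that this is purely illustrative: the `Api` class, the pipeline helper and the API names are hypothetical stand-ins, not IBM’s actual Watson SDK.

```python
# Minimal sketch of the "APIs as building blocks" idea: an application
# is a pipeline of small, interchangeable API components. The Api class
# and the API names here are hypothetical, not IBM's real interfaces.

class Api:
    """One purchasable building block: a named transformation."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def __call__(self, data):
        return self.fn(data)

def build_app(apis):
    """Compose purchased APIs into a single application."""
    def app(data):
        for api in apis:       # each block feeds the next
            data = api(data)
        return data
    return app

# Two hypothetical cognitive building blocks.
tokenize = Api("tokenize", lambda text: text.lower().split())
sentiment = Api("sentiment",
                lambda words: "positive" if "great" in words else "neutral")

# A user buys only the blocks she needs and composes them.
app = build_app([tokenize, sentiment])
print(app("Watson is a GREAT platform"))  # -> positive
```

Adding a feature later amounts to appending one more block to the list, which mirrors the “purchase only what you need, extend later” adaptability described above.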
8.3. Enabling Factors
As discussed in the previous paragraph, IBM applied four different activities in order to pursue the
divergence phase. However, these activities alone would not have been sufficient to successfully manage the
divergence process. Through the study of the Watson case, it was possible to identify five enabling
factors which played a fundamental role during the divergence, complementing and strengthening the
main activities. Here below is the list of the enabling factors:
1. Sale strategy: as is known, after an initial phase of licensing, Watson’s offer is currently
provided through a cloud platform. The shift in the sale strategy had an enormous impact on
the diffusion of the technology, which consequently shifted from being an elitist technology,
only available to large, wealthy companies, to being a tool accessible to a large customer
base. The choice IBM made was certainly suggested by the current trend of cloud computing,
which is now evolving like never before, with companies of all shapes and sizes adapting to
this new technology. Industry experts believe that this trend will continue to grow and develop
even further in the coming years. With these premises, it seemed like the only viable
option for Watson’s commercialisation. In fact, if partner companies start using it properly,
working with data in the cloud can vastly benefit both IBM and the application developers.
Listed below are some of the advantages that this technology offers to its users:
a. Cost efficiency: it is probably the most cost-efficient method to use, maintain and
upgrade. For this reason, it can significantly lower a company’s IT expenses
compared to traditional desktop software obtained via licensing fees;
b. Almost unlimited storage: using a cloud platform, the user no longer needs to worry
about running out of storage space or about increasing her current storage availability.
The scalability of the storage is an inherent feature of the solution;
c. Easy access to information: once the user registers on the cloud platform, she can
access the information from anywhere; the only thing needed is an internet connection.
This convenient feature frees the user from time-zone and geographic-location
issues. Moreover, it is an easy way to provide the user with the information she
needs to start developing a new application;
d. Quick development: one of the most important characteristics of cloud platforms
is that they give users the advantage of quick deployment. From registration, the
entire system can be fully functional in a matter of a few minutes, and from that
moment on, the user can “forget” about the IT issues and focus completely on
developing her business application.
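The cost-efficiency argument in point (a) can be sketched with a toy comparison between a flat licensing fee and pay-per-use cloud pricing. All the figures below are invented purely for illustration; they are not IBM’s actual prices.

```python
# Toy comparison of an up-front licensing fee vs pay-per-use cloud
# pricing. All figures are invented for illustration only.

LICENSE_FEE = 120_000     # hypothetical one-off annual license cost
PRICE_PER_CALL = 0.002    # hypothetical cloud price per API call

def cloud_cost(calls_per_year):
    """Annual cost under the pay-per-use model."""
    return calls_per_year * PRICE_PER_CALL

# Break-even volume: below this usage, the cloud model is cheaper.
break_even = LICENSE_FEE / PRICE_PER_CALL
print(f"break-even at {break_even:,.0f} calls/year")     # 60,000,000

# A small start-up making 1M calls a year pays far less in the cloud:
print(f"start-up cloud cost: ${cloud_cost(1_000_000):,.0f}")  # $2,000
```

The point of the sketch is the one made in the text: for the long tail of small users, pay-per-use removes the up-front barrier that licensing imposes, which is exactly what widened Watson’s customer base beyond large, wealthy companies.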
Considering this list of advantages, it is clear that cloud platforms are true enablers for start-
up businesses, to which they offer an essential differentiator. Through the cloud, IBM empowers
anyone with an idea to start a business and get it up and running quickly on an enterprise-grade
IT infrastructure that has the flexibility to accommodate growth (scalability), yet requires
minimal up-front capital expenditure. The success of this method can be easily assessed by
looking at the increasing interest users have demonstrated in hackathon participation:
from the first hackathon, back in 2014, when 32 teams took part in the challenge, to the
ongoing IBM XPrize, with more than 680 teams registered. Another measure of the
importance of the adoption of the cloud platform can be found in the number of new
application fields Watson was able to approach every year during the divergence phase. As
is evident from the peak observable in the diagram below, the introduction of the cloud platform
in 2014 enormously boosted the exploration effort, which had been limited until then.
Figure 8.14 – Timeline of the Watson’s Expansion in Different Application Fields
2. Product improvement: the building blocks of any Watson application are the APIs. Being
by nature small components of a much larger system, the application itself, they
can be arranged in many different ways in order to produce the desired outcome. A user can
select the building blocks she needs to create an application and, if some fundamental feature
is missing, it is possible to develop it from scratch. The development phase is frequently
coordinated by IBM itself, but an independent user can decide to go solo. However the new
API is developed, it undergoes a standardisation phase, which makes it available to any user
on the platform. Nourishing the set of available building blocks allows users to always access
the newest APIs, obtaining augmented potentialities or innovative features that
can improve their applications. The sharing of every newborn API on the platform makes the
diffusion process a lot faster than in the licensing case, where every licensee has to develop
the needed feature from scratch, without benefiting from others’ investments. Phil Westcott,
European Ecosystem Leader at IBM Watson, said: "We now live in an ‘API economy’ where
anyone with a vision can rapidly deploy ground-breaking applications without the need for
huge capital investment in IT infrastructure, nor an army of software developers... This is
allowing small businesses to compete and disrupt in every industry. Entrepreneurs and
developers can rapidly prototype cognitive solutions powered by the Watson API services.
We hope everyone taking part will leave empowered with a new range of skills, knowledge
and passion for building cognitive applications.”
3. Promotion strategy: once the Watson technology had been developed, IBM needed to build a
bridge between the product and the customers. Being a large company whose brand is already
familiar to consumers, IBM had an important advantage, as every action would have a
worldwide echo in many industries. However, the company had to choose, among different
strategies, the best way to reach potential users, and decide what the message should be.
Among the many actions that shaped the promotion strategy, two aspects have
been particularly impactful:
a. Communication: in the attempt to reach potential users dispersed all around the world,
IBM chose to organise a local event with such a huge following that it resonated
worldwide. The event was called World of Watson. As already discussed
in the previous chapter, World of Watson conferences have been growing
tremendously since the first one in 2015, becoming extremely large events, able to
host thousands of people. Moreover, IBM gives the possibility to follow the main
speeches via streaming, all around the world, obtaining an even wider pervasiveness.
Finally, for those who miss the chance of attending the event, there are plenty
of articles and reports from the most important business reviews all around the world,
spreading information regarding Watson and its latest achievements. This level of
capillarity would hardly be achieved with any other communication method,
certainly not on the same budget. The events’ allure depends greatly on the CEO’s
speech, which is usually echoed through social networks for days, and on the fact that the
event brings users, experts and industry leaders together to learn from each other’s
experiences and best practices, inducing the entire system to move forward.
Conferences offer IBM the perfect stage to reinforce its slogans and the image it wants
to project to the world; moreover, they can persuade attendees to think differently about their
data and applications, and to expand their company’s horizons with Watson;
b. Branding: IBM underwent an important re-branding campaign in the last decade,
trying to overcome the traditional, rigid and conservative culture that characterised
its business for many years. The re-branding strategy is twofold: it is constituted
by an internal transformation (5) and a set of external communication tools. One of
these tools is the name of IBM’s newest and most valuable technology, which happens
to be the focal point of this research: Watson. Watson, which is not named after a
Sherlock Holmes character, as someone incautiously declared (Social Capital CEO
and founder Chamath Palihapitiya in May 2017 during an interview on CNBC), was
named after IBM's first CEO, Thomas J. Watson. Even though the choice was not
deliberately made with the intent of attracting customers in a playful way, it
certainly sounds friendlier and catchier than its predecessors (e.g. the IBM
System/360), especially to a non-business audience. IBM knew that the key to
successfully leveraging the new technology was to involve non-traditional partners, to
attract users from distant industries that could bring fresh knowledge to apply to
application development. Using a warm, sympathetic name that is easy to remember surely
helps build the ecosystem base, attracting developers and start-up founders, besides
entering the spoken language;
4. Customers’ education: IBM devoted many resources to teaching potential users and developers
the secrets of mastering cognitive systems. This initiative is not surprising, given that
research shows that the most successful among the new products launched on the market
have clearly benefited from customers’ ease in learning the benefits and applications of those
products quickly, in finding out how the devices actually work, and in sharing their knowledge
with friends (M. Evans, M. Jamal, 2006). In fact, the very success of a tiny percentage of new
products (more than 95 per cent do not succeed) underlines the reality that marketers need to
teach customers and find ways to advance their learning. If not, these marketers risk having
their products languish. By not effectively teaching customers, manufacturers are letting them
learn on their own, at their own pace and with uncertain learning, and purchase, outcomes.
Research suggests that consumers can and must be taught, and offers a method of teaching
that is thorough, fast and effective (G. R. Morrison, 2004). In the last century, product, service
and process innovation was a fairly slow-moving affair. Customers had time to adjust their
attitudes to incorporate the changes companies initiated, an easy-enough thing to do because
the changes were often not transformative. Companies today can no longer rely on a small
percentage of early adopters to create a market and future success by opening the minds of
the next cohort of customers. With short competitive windows and technology cycles, a high
percentage of customers must embrace the technology quickly. In a world where change is
always accelerating, consumers must become actively engaged with innovation rather than
remain passive users. This highlights the importance of companies developing pedagogy to
help customers learn. It also emphasizes the need for marketers to become teachers, setting
learning objectives for their customers and planning customer goals in each phase of the
innovation process and across all channels of customer connection. As detailed in chapter
six, IBM developed a large and differentiated learning offering to address users’ and
developers’ needs. The offering includes online tools, like webinars and manuals, as well as
personal assistance and coaching. To this purpose, IBM heavily leverages the ecosystem
dynamics of sharing and increasing the knowledge, and the know-how, around a certain topic.
Getting back to the technology steering process and the differences between it and technology
licensing, it is possible to highlight a further aspect linked to the divergence phase. In fact, as
already said, technology licensing is deeply rooted in secrecy around the technology’s
mechanisms. Licensors usually strive to release as little information as possible to external
actors: to access the entire body of knowledge, one has to first pay the licensing fee.
On the contrary, any potential user can access an important body of knowledge regarding
Watson before actually entering the ecosystem. This difference between licensing and
steering leads back to the different processes: convergent for the former and divergent for the
latter.
5. Internal transformation: as stated many times, the success gained by Watson would not be
appropriately sustained without the needed internal transformation. The company is gradually
dismissing all the old technologies and focusing its resources entirely on Watson’s development
and commercialisation. To make this shift effective, executives needed to move the entire
organisation, nearly 380,000 employees located in 170 different countries, in the new
direction, aligning objectives, beliefs, incentives and culture. The instrument IBM turned to
in order to accomplish this titanic task has been the renowned Cognitive Build. The need to
involve the entire organisation in the new Watson technology is a direct consequence of
the technology steering strategy. In this kind of strategy, the company keeps ownership over
the technology’s applications. For this reason, when the divergence phase begins, the company
needs to have all the employees actively involved in the innovation process. They should be
able to understand the dynamics of the technology and know how to use it, in order to sell it
and offer consulting services to the users, but, most importantly, they need to embrace the
technology’s values, culture and meanings in order to successfully pursue the company vision.
8.4. Watson Network
Traditionally, inventions were generated by a company’s own researchers, the firm’s engineering
department realized the transition of ideas into commercial products, and the diffusion and exploitation
of innovation was driven by the innovating firm itself. Recently, the strategy of accessing knowledge
resources externally has been emphasized, as knowledge is growing faster and clusters of highly
specialized knowledge are globally dispersed. Opening the firm’s boundaries to external inputs in a
managed way enables companies to realize radically new product innovations. Practically, external
sources of knowledge and innovation have become increasingly relevant (Porter and Stern, 2001).
According to Gassmann (2006), the more an industry’s idiosyncrasies correspond to the following
developments and trends, the more appropriate the open innovation model seems to be:
1. Globalization: globalization has not only lowered entry barriers for new international
competitors by decreasing cost pressure, but also provides the companies that can innovate
faster and are able to adapt better with an opportunity for competitive advantage. Global
industries favour open innovation models because they achieve economies of scale more
swiftly than the traditional closed model and promote more powerful standards and
dominant designs;
2. Technology intensity: in most industries, technology intensity has increased to such a
degree that not even the largest companies can cope with or afford to develop technology
on their own. The reasons are due to the lack of capabilities to cope with all upcoming
technologies and to the lack of financing to exploit them alone;
3. Technology fusion: industry borders are shifting or even disappearing. For example, IBM
itself is ranked eighth in a list of the world’s largest holders of biotechnology patents. The
more interdisciplinary cross-border research is required, the less a single company’s
existing capabilities are sufficient to provide successful innovations;
4. New business model: with the rapid shift of many industry and technology borders, new
business opportunities arise, bringing together firms active in very different sectors. The
main motives for these alliances are the sharing of risks, the pooling of complementary
competencies, and the realization of synergies, which are the main characteristics of
innovative business models;
5. Knowledge leveraging: knowledge has become the most important resource for firms.
Instead of hiring the best engineers internally, companies are forced to act as knowledge
brokers. New capabilities and organizational modes are needed to cope with this outside-
in thinking.
Considering this list of trends and how well they fit the Watson case, it was unavoidable for
IBM to adopt an open innovation model. However, as for the degree of IBM’s involvement
during the innovation process, it is possible to highlight a marked difference in the type of effort
the company has put into the different phases. During the technology development (convergent phase),
before Jeopardy! and in the period right after winning the game, IBM invested heavily in research.
The partnerships were built around research agreements, and IBM invested enormous quantities of both
money and resources.
During the commercialization phase (divergent phase), IBM continued to invest heavily in the
functionalities and business model characteristics that still needed to be improved before hitting the
market. For example, IBM is still developing new or improved APIs. However, it moved to a more
supportive role in the fields where the technology is already able to stand on its own feet. It is possible
to conjecture that the applications developed within partnership agreements involved a
higher IBM engagement also during the divergence phase, leveraging its research facilities, while,
when other application development strategies were employed, like for example hackathons, the
company’s involvement was limited to organising the event and providing support to the teams
during the competition. Again, if a hackathon gave rise to the chance to develop a marketable
application, IBM’s involvement would change its nature from supporting to actual development.
Considering an innovation timeline, it is possible to identify two main phases: a first one, which sees a
higher involvement of IBM’s internal resources in the development effort, and a second one where the
technology and application development is carried on mainly by external actors leveraging IBM’s
support.
8.4.1. Partnerships’ Evolution
As already said, partnerships have had a fundamental role throughout the Watson innovation process,
not only during technology steering and commercialisation.
Innovation alone is a herculean task, but being the upstart pioneer trying to develop the technology,
while at the same time going up against entrenched, powerful competitors with deep industry
knowledge, assets and channels, who have been around for a hundred years or more, seems like an
impossible job. However, this is the challenge that all kinds of disruptors have to face, or, more
precisely, it is the challenge that disruptors should manage. The fact is that going it alone is definitely
not the way to go at all. Collaboration is the essential new secret for start-ups and industry leaders
alike. For true disruption to take hold, old and new must work together, playing to each other’s
strengths.
Based on recent research conducted by the Harvard Business School, where the authors had the possibility
to work with CEOs from large companies, start-up founders and venture capitalists around the world,
it emerges that collaborations are currently outclassing acquisitions and built-from-scratch start-ups.
Partnerships nowadays are not just an exchange of money, or the traditional “equity investment”, but
a real exchange of ideas and means, and the vision to achieve distinct goals.

Figure 8.15 – Phases of the IBM’s Involvement During the Innovation Process

To revolutionize old industries, small and big companies alike must get past competitive worries and embrace their
strengths and weaknesses. Collaborations can take many different shapes, and are conceived as new,
unique partnership models in which corporations bring assets, the ability to rapidly test and scale, and
a deep understanding of the regulatory landscape. On the other side, start-ups, inject new technical
expertise, and venture capitalists offer funding and access to new talent.
IBM has demonstrated considerable wisdom in recognizing what its core strengths are, and that it
cannot be the best at everything. The company acknowledged that there is wisdom and experience
outside its boundaries that can get it to a final solution far faster than if it were to go it alone. For a
corporate giant like IBM, a strategic vision for corporate venture investing is critical, not just for
innovation, but for a new and better way of doing business. Innovation is inherently risky and
unpredictable, but companies can improve their odds by reimagining the entire approach: a clear
strategy, a dedicated team, a diverse portfolio of unique partnership models, and a strong capability
to scale new technologies and business models into the core business. IBM understood that the
traditional model of innovation was no longer adequate; instead, the entire ecosystem of firms and
players from different markets must work together. To foster collaborative behaviour among firms, it
developed an ecosystem based on the use of the cloud platform for developing Watson's applications.
Considering the Watson case in chronological order, external partners have been fundamental actors
in Watson's development since the beginning of the innovation process; they had an even greater
involvement in the project than the company's own employees. In fact, the diffusion of the technology
inside the company came only after a series of external partnerships were already in place. For a long
time, Watson remained locked in research laboratories, both in IBM facilities and in those of partners.
Only after Watson obtained its first results in the market, introducing new applications in different
fields, did all the IBMers get in touch with the new technology and the company become involved as
a whole. The fact that IBM addressed its own employees only at a later stage of the process, at the
beginning of the divergence phase, leads to three considerations.
First of all, they waited until the initial risks connected to the innovation process had been lowered.
Being aware of the huge effort needed to convert the entire company to a new technology, they did
not rush into it before the right time in the innovation process, which was the moment they were
starting to expand their market reach, not when they were deepening their technology understanding.
Secondly, waiting until the last minute to get employees on board resulted in a strict time constraint:
with the market ready to be explored, and already growing in some fields, IBM had to adopt a striking
strategy to quickly involve and update its employees; this was accomplished through the well-known
Cognitive Build. Third, the delay in the involvement of the employees, in favour of external
partnerships, highlights the importance of leveraging external knowledge during the development
process.
Taking advantage of the Design-Driven Innovation theory to understand the starting point of the
partnership network IBM built around Watson, it is possible to find some interesting aspects of the
strategy in the concept of product meaning. Design-Driven Innovation holds that the exploration of
radically new meanings is a process of generative interpretation, leading to the co-generation of new
meanings, which involves many different actors. In fact, interpretations of the meaning of a product
occur through continuous interactions among firms, designers, users and several stakeholders, both
inside and outside a corporation. The theory suggests implementing a strategy and a process that
leverage the rich and multifaceted network of a firm's outsiders, looking beyond customers to
"interpreters", such as scientists, suppliers, intermediaries, designers and artists, who deeply
understand and shape the markets they work in. Bringing the concept of meaning back to the
technology steering process, it is possible to say that the steering of a technology happens through
the identification of all the possible meanings it can embody. During the divergent phase, the
company needs to look for the different technology meanings, steering the technology to fit into a
multitude of diverse applications. To identify valuable as well as unusual meanings, companies
should look for interpreters, and the best interpreters are those who have a different perspective, those
who can bring previously unseen variables into the spotlight. Those interpreters are likely to be
external players, and this is why IBM started to build its external partnership network, and to invest
in creating the Watson ecosystem, even before involving its own employees in the innovation process.
As Roberto Villa said during the interview: "It is necessary to create new sharing mechanisms.
Innovation grows at a faster pace through industries' contamination. Creating partnerships and
collaborations with very different market actors allows us to pool and exploit very different
competences". The image below shows the different phases of the Watson internal and external
network evolution over the innovation process:
Figure 8.9 – Watson Network Evolution
9. Conclusions
9.1. Research Objective and Question
The objective of this research is to identify the specific dynamics underpinning the technology
steering concept, which stems from the Design-Driven Innovation (DDI) literature. From a practical
point of view, the aim is to identify mechanisms and strategies companies can implement in order to
identify all the possible applications a newborn technology can embed. To obtain the knowledge
needed to pursue the research objectives, technology steering being a relatively new topic in the
innovation literature, the thesis has been structured around a case study analysis. The selected case,
the IBM cognitive system Watson, presented the chance to analyse in depth the pattern of steering
activities related to a specific technology type, the General Purpose Technologies.
Current literature lacks a detailed description of the steps of the steering process, as, until now, the
interest of researchers has mainly revolved around the applications' characteristics rather than the
process required to support their identification and development, the so-called integration phase.
Design-Driven Innovation suggests that, to successfully identify radical innovations, companies
should focus on unveiling quiescent meanings. The theory is rooted in the idea that each product holds
a particular meaning for consumers, and only by revealing those quiescent meanings can a company
seize the technology's full value. From a methodological point of view, DDI postulates that, to
effectively identify the less obvious meanings that a new technology can support, managers should
combine research activities on product technologies with socio-cultural analysis, leveraging an
external network that enables a process of interpretation by actively bringing in new perspectives.
To contextualise the Watson case within the DDI stream, it is possible to refer to the image below:
Figure 9.16 – Watson Path to Discover Quiescent Meanings (adapted from Verganti, 2009)
IBM followed the technology screening path, as the firm is trying to envision new possibilities derived
from a technological change generated by Watson. The company is using its breakthrough technology
to challenge the status quo and to propose challenging new paradigms by commercialising
Watson-powered applications.
Besides the considerations linked to the Design-Driven Innovation theory, the analysis of Watson must
also account for the logics connected to the commercialisation of General Purpose Technologies. In
the latest literature (Gambardella and Giarratana, 2013), licensing is presented as the preferred
business model for commercialising General Purpose Technologies. Thanks to their nature, GPTs can
easily find application in many different sectors, giving the innovator the chance to obtain the market
fragmentation needed to benefit from licensing fees: the higher the number of licensees, the higher
the revenues. This successful strategy, however, implies that the company's involvement stops once
the development of the technology is completed. The way the technology is adopted and integrated
into marketable applications is entirely in the hands of the licensees. This aspect differs strongly from
the DDI theory, which sees the involvement of the company throughout the entire innovation process.
Thanks to the analysis of the Watson case, it was possible to study the behaviour of a company
commercialising a General Purpose Technology through the implementation of a technology steering
strategy, and to give an answer to the research question:
“How can companies steer a General Purpose Technology to integrate it into meaningful
application fields?”
The research suggests a series of managerial practices and activities to successfully drive the
innovation process, taking into consideration the different actors involved, and their respective roles.
9.2. Main Research Outcomes
Having reviewed the literature and completed the case study analysis (i.e. IBM Watson), it is possible
to propose a methodological approach to develop and integrate a technology into meaningful
applications. In fact, the case marks an evident change in the General Purpose Technology
commercialisation strategy, bringing important implications for the current literature. Until now, as
already discussed in the literature chapter concerning General Purpose Technologies, the preferred
commercialisation strategy has been licensing. However, the study of Watson paves the way to a new
possible strategy: technology steering, defined as the process through which a company can identify
all the different applications a technology can enable. At a macro level, it is possible to highlight
differences regarding the phases of the commercialisation process and the company's involvement in
these phases. As already said, with licensing the company does not get involved in the integration of
the technology to create commercially viable solutions: its involvement ends with development.
During steering, instead, the company maintains ownership of the technology also during the
commercialisation.
The different involvement levels presented above lead to a fundamental consequence, which
constitutes one of the main differences between the two business models. Compared to technology
licensing, the alternative path of technology steering that emerged through the case study implies an
extended ownership of the technology.
Figure 9.2 – Company's Involvement in the Technology Licensing and Steering Process
(adapted from Verganti, 2009)
In fact, by following the technology's evolution until it lands on the market and is integrated into
meaningful applications, the company can maintain a deep ownership of it, not only at a formal level
but by actually maintaining knowledge about its dynamics and functioning, and about the use that is
made of it in the market. It is a channel to continuously discover the technology's strengths and
weaknesses. Moreover, thanks to the generality of purpose, the integration can take place in very
distant and unrelated markets, increasing the need to continuously learn about and supervise the
technology's evolution. The main consequence of keeping ownership of the technology is that the
company has the possibility to understand how to improve and upgrade it in order to match growing
and changing market demand. Different applications have different objectives, different fields have
different needs, and by being involved in the commercialisation the company can quickly respond to
every rising market request. In particular, IBM has identified the use of APIs as a suitable method to
constantly update the technology. By dividing the entire technology into individual building blocks,
it is possible to easily integrate and enlarge its features.
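The building-block logic described above can be sketched in miniature. The following Python fragment is purely illustrative: the service names, the registry mechanism and the toy heuristics are invented for this example and do not represent IBM's actual Watson APIs. It only shows the architectural idea that each capability sits behind an independent, uniform interface, so that blocks can be added, upgraded or swapped without touching the applications that consume the others.

```python
from typing import Callable, Dict

# Registry of available capabilities (hypothetical names, not real endpoints).
SERVICES: Dict[str, Callable[[str], dict]] = {}

def register(name: str):
    """Register a capability as an independently replaceable building block."""
    def wrap(fn: Callable[[str], dict]) -> Callable[[str], dict]:
        SERVICES[name] = fn
        return fn
    return wrap

@register("language")
def detect_language(text: str) -> dict:
    # Toy heuristic standing in for a real language-identification service.
    return {"language": "it" if " di " in text else "en"}

@register("sentiment")
def sentiment(text: str) -> dict:
    # Toy keyword heuristic standing in for a real sentiment service.
    positive = {"good", "great", "innovative"}
    score = sum(word in positive for word in text.lower().split())
    return {"sentiment": "positive" if score else "neutral"}

def analyse(text: str, capabilities) -> dict:
    """An application composes only the building blocks it needs."""
    result: dict = {}
    for cap in capabilities:
        result.update(SERVICES[cap](text))
    return result

print(analyse("A great and innovative system", ["language", "sentiment"]))
```

Adding a new capability means registering one more function; existing consumers are unaffected, which mirrors the "easily integrate and enlarge" property the thesis attributes to the API-based decomposition.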
Through continuous control over the technology, and a management approach that aims at improving
it by actively participating in the integration phase, the company is able to steer the technology,
identifying the different meanings it can embed and turning them into a growing number of
applications spanning many different fields.
Figure 9.3 – Technology Steering
As shown in the image above, when a company pursues a technology steering strategy with a GPT, it
undergoes a divergent phase, starting from the market launch of the technology (Watson in this case),
that leads to the uncovering of many possible applications. Through the case study, it was possible to
identify the mechanisms needed to obtain the divergent effect. A company that wants to steer its GPT
should implement the following two sets of activities, coherently with the life-cycle phase of the
technology in each application field:
1. Activities aimed at identifying new market opportunities (unveiling new meanings):
a. Research: mainly in the first phases of the commercialisation, or in projects aiming
to upgrade and improve the technology;
b. Partnerships: in the first phases of the commercialisation, in secure markets, and to
consolidate the technology's market presence;
c. University programs: during the exploration phase, after the first applications have
been launched, to engage fresh talents with the technology for a short-to-medium
period (months) in order to unveil development possibilities in complex fields;
d. Hackathons: during the exploration phase, after the first applications have been
launched, to crowdsource many ideas, mostly from non-traditional backgrounds, at a
low cost.
2. Support activities:
a. Sale strategy: unlike licensing, which is extremely costly for licensees and, therefore,
limited to a small number of large, wealthy companies, technology steering should
search for sale strategies with a restrained cost, affordable by anyone. This case,
which focuses on digital technologies, obtains the needed sale strategy characteristics
through the use of a cloud platform. The strategy allows the company to target both
the B2B and the B2C markets and is, moreover, easily scalable;
b. Product improvement: technology steering requires constant updating and
modification of the basic technology to adapt it to the most disparate sectors, and
companies should find an easy and cheap method to obtain the required advancement.
The Watson case proposes, as a possible digital solution, dividing the technology into
individual building blocks, the APIs, which are easier to modify and integrate;
c. Promotion strategies: companies that want to steer a technology should find a way to
obtain a huge echo, spanning many different application fields and going beyond the
traditional industries, in order to catch and unveil new technology meanings. In the
Watson case, worldwide conferences were adopted to permeate many markets;
d. Customers' education: when a completely new technology is developed, potential
customers must learn how to use it, a task that becomes particularly tricky when the
technology might be adopted in unrelated markets. For these reasons, a company
trying to steer a technology should provide easy access to support and learning
methods. In the Watson case, potential users were offered extensive information about
the technology and its functioning, through courses, webinars and personal assistants.
The openness typical of the steering process is in clear opposition to the secrecy of
the licensing model;
e. Internal transformation: the entire company should follow and sustain the technology
during the steering process. For this purpose, the Watson case organised Cognitive
Build, a competition that brought the employees closer to the technology by having
them use it personally to develop a potential application.
Through the careful management of these activities, a company should be able to support the
commercialisation of its newly developed General Purpose Technology by playing a central role
throughout the entire process. As already said concerning Design-Driven Innovation, in order to
identify meaningful applications it is necessary to identify new meanings that the technology can
embed. For this purpose, it is particularly relevant to build a network of interpreters. Looking at the
activities IBM is implementing to discover marketable applications, the role they play in building a
network around the technology is evident. Partners, start-uppers, students and other actors involved
in the activities come from very different backgrounds, and each brings a different point of view, or
interpretation, of the technology that can help it grow and flourish.
As explained in the above list of activities supporting the divergent phase, the divergence should be
supported by different kinds of activities according to the different phases of the innovation process.
In fact, the divergence can be divided into two main phases, the explorative and the exploitative.
These two phases can be seen both at the process level and at the level of a single application field.
In the early phases of the divergence, when there are many unknown markets to investigate, the
explorative character is stronger: companies implement activities to understand what the possibilities
are and how to tackle the market. After the first approach has been made, the company can focus on
exploiting the market through the proliferation of technology applications. As shown by the case,
partnerships are the activities that best support the consolidation or exploitation of a market, while
other activities are useful to explore technology (research) and market (university programs and
hackathons) possibilities.
In the early stage, the divergence phase could have been pursued either by developing many
applications in a single interesting field or by developing few applications in many different fields.
A curious aspect that emerged from the Watson case is the tendency of the company to approach
many different application fields instead of penetrating the most promising ones. At this stage of the
process it is difficult to say whether this is a winning choice; however, the reasons that seem to justify
this behaviour might be the search for interdisciplinary positive externalities and for ecosystem
effects. Interdisciplinary externalities emerge through the mutual contamination of industries, and the
consequent growth often relies on the company's ability to extract value from many different fields.
Ecosystem effects depend on the creation of a platform where many actors are connected: a network
of companies to which support and education can easily be provided, a network where companies
can help each other in the transition to the new technology through forums, a community that is easy
to identify for other potential members as well as for customers. Even though the conjecture about
the strategy IBM is pursuing can only be demonstrated by waiting for the process to advance, the
structure seems to mirror the well-known phases of the GPTs' market disruption theorised by Helpman
and Trajtenberg in 1994: the time to sow, when output and productivity experience negative growth,
and the time to reap, when enough complementary inputs have been developed and there is a spell of
growth and rising output. In a similar way, the strategy IBM is developing by approaching many
different application fields can be seen as the sowing phase, with many resources diverted and
investments surpassing revenues. If the parallelism is correct, in the near future the reaping phase
should begin, in which IBM will be able to exploit the market through Watson, seeing increasing
returns and growth.
Thanks to the study of Watson, it was possible to adapt the main market and technology
characteristics required to successfully implement technology licensing (market fragmentation,
company size and technology applicability) to the technology steering case. The result is a list of five
characteristics that are fundamental to successfully applying a technology steering strategy:
1. Market fragmentation: the higher the fragmentation (geography, industry, etc.), the higher the
licensor's profit appropriability;
2. Technology accessibility: the more accessible the technology, the higher the number of users
and, consequently, the higher the profit;
3. Technology applicability: the wider the range of the technology's applications, the higher the
profitability from licensing (overcoming profit stifling due to spillovers);
4. Technology scalability: the technology's ability to change its scale in order to meet growing
demand;
5. Technology adaptability: the technology's ability to adapt in order to match specific clients'
needs, being easily adjustable or enlargeable by adding features.
9.3. Limits and Follow Up
Having discussed the conclusions and outcomes of the analysis, and how they impact the current
Design-Driven Innovation and General Purpose Technology literature, an analysis of the limitations
and of the further studies needed is due.
The main limitation the research has suffered from is the scarcity of primary sources throughout the
analysis. Even though the interviews made it possible to delineate the direction of the research and
to extract some of the main research outcomes, a degree of incompleteness might jeopardise the
quality of the results, as most of the assumptions have been based on secondary sources. Had accurate
data been available concerning the number and typology of applications actually developed with
Watson, the analysis would certainly have produced more detailed conclusions. Another aspect that
made the research at once engaging and difficult to manage is the fact that the Watson technology is
still in the early stages of its life: everything can still happen and change the course of events. The
individual methodologies IBM adopted to steer the General Purpose Technology have certainly borne
fruit, and they are valuable regardless of the overall success of the technology, which, however, will
be assessed only in the future.
Besides the limitations related to the reliability of the results, the chosen case study strongly narrowed
the research objective, tying it tightly to the field of General Purpose Technologies. Even though there
are no preliminary reasons to suppose that the results should not be generalised to every other type
of technology, there are certainly many areas that still need to be investigated before this hypothesis
can be proven true.
In conclusion, this study offers an alternative to licensing for companies dealing with General Purpose
Technologies. From the point of view of the Design-Driven Innovation literature, the research
represents an initial investigation of the dynamics and methods companies should adopt while
steering a technology. Further research should have the objective of generalising the above results to
any type of technology.
Bibliography
Amer M., Daim T., Jetter A. (2012), A Review of Scenario Planning. Elsevier.
Andergassen R., Nardini F., Ricottilli M. (2017), Innovation Diffusion, General Purpose
Technologies and Economic Growth. Elsevier, Structural Change and Economic Dynamics, Vol. 40,
72-80.
Ardito L., Messeni Petruzzelli A., Albino V. (2016), Investigating the Antecedents of General
Purpose Technologies: A Patent Perspective in the Green Energy Field. Elsevier.
Basu S., Fernald J. (2006), Information and Communication Technology as a General Purpose
Technology: Evidence from U.S Industry Data. Federal Reserve Bank of San Francisco.
Baxter G., Sommerville I. (2011), Socio-Technical Systems: From Design Methods to Systems
Engineering. School of Computer Science, University of St. Andrews, UK.
Bekar C., Carlaw K., Lipsey R. (2016), General Purpose Technologies in Theory, Applications and
Controversy: A Review. Simon Fraser University.
Bjelland O., Chapman Wood R. (2008), An Inside View of IBM’s “Innovation Jam”. MIT Sloan
Management Review.
Bozeman B., Hardin J., Link A. (2008), Barriers to the Diffusion of Nanotechnology. Economics of
Innovation and New Technology.
Bresnahan T., Trajtenberg M. (1995), General Purpose Technologies “Engine of Growth”? Elsevier,
Journal of Econometrics 65.
Bruiyan, N. (2011). A framework for successful new product development. Journal of Industrial
Engineering and Management, 4 (4): 746–770.
Buganza T., Dell’Era C., Pellizzoni E., Trabucchi D., Verganti R. (2015), Unveiling the Potentialities
Provided by New Technologies: A Process to Pursue Technology Epiphanies in the Smartphone App
Industry. John Wiley & Sons Ltd., Creativity and Innovation Management, Vol. 24, 3.
Carlsson B., Stankiewicz R. (1991), On the Nature, Function and Composition of Technological
Systems. Journal of Evolutionary Economics, Vol. 1, 93-118.
Carlsson B., Jacobsson S., Holmen M., Rickne A. (2002), Innovation Systems: Analytical and
Methodological Issues. Elsevier, Research Policy 31, 233-245.
Cecere G., Corrocher N., Gossart C., Ozman M. (2012), Technological Pervasiveness and Variety of
Innovators in Green ICT: A Patent-Based Analysis. Elsevier.
Chiesa V. (2001), R&D Strategy and Organisation: Managing Technical Change in Dynamic
Contexts. Imperial College Press.
Coates J. (2000), Scenario Planning. Elsevier, Technological Forecasting and Social Change 65, 115-
123.
Coats J., Farooque M., Klavans R., Lapid K., Linstone H., Pistorius C., Porter A. (2001), On the
Future of Technological Forecasting. Elsevier, Technological Forecasting and Social Change 67, 1-
17.
Corso M. (2002), From Product Development to Continuous Product Innovation: Mapping the
Routes of Corporate Knowledge. International Journal of Technology Management.
Crafts N. (2003), Steam as a General Purpose Technology: A Growth Accounting Perspective.
Department of Economic History, London School of Economics.
Cusumano M. (2010), Technology Strategy and Management. The Evolution of a Platform.
Communication of the ACM, Vol. 53.
Daim T., Sener N., Galluzzo C. (2009), Linking Technology and New Product Development. 42nd
International Conference on System Science.
Dell’Era C., Marchesi A., Verganti R. (2010), Mastering Technologies in Design-Driven Innovation.
Research-Technology Management.
De Smedt P., Borch K., Fuller T. (2012), Future Scenarios to Inspire Innovation. Elsevier,
Technological Forecasting & Social Change 80, 432-443.
Evans M., Jamal A., Foxall G. (2006), Consumer Behaviour. John Wiley & Sons.
Ferrucci D., Brown E., et al. (2010), Building Watson: An Overview of the DeepQA Project. AI
Magazine.
Gambardella A., Giarratana M. (2011), General Technological Capabilities, Product Market
Fragmentation, and Markets for Technology. Elsevier.
Gambardella A., Mc Gahan A. (2010), Business-Model Innovation: General Purpose Technologies
and their Implications for Industry Structure. Elsevier.
Gawer A., Cusumano M. (2013), Industry Platform and Ecosystem Innovation. The Journal of
Product Innovation Management.
Geels F. (2004), From Sectoral Systems of Innovation to Socio-Technical Systems. Elsevier, Research
Policy 33, 897-920.
Geels F. (2005), Co-Evolution of Technology and Society: The Transition in Water Supply and
Personal Hygiene in the Netherlands (1850-1930)-a Case Study in Multi-Level Perspective. Elsevier,
Technology in Society 27, 363-397.
Geels F. (2007), Transformations of Large Technical Systems: A Multilevel Analysis of the Dutch
Highway System (1950-2000). Sage Publications, Science Technology & Human Value 32, 123-149.
Geels F., Kemp R. (2007), Dynamics in Socio-Technical Systems: Typology of Change Processes and
Contrasting Case Studies. Elsevier, Technology in Society 29, 441-455.
Godet M. (2000), The Art of Scenarios and Strategic Planning: Tools and Pitfalls. Elsevier,
Technological Forecasting and Social Change 65, 3-22.
Green K., Vergragt P. (2002), Towards Sustainable Households: A Methodology for Developing
Sustainable Technological and Social Innovations. Elsevier, Futures 34, 381-400.
Helpman E., Trajtenberg M. (1996), Diffusion of General Purpose Technologies. National Bureau of
Economic Research, Massachusetts.
Hwangbo H. (2014), Engaging Employees for an Innovation Advantage. Pwc, Next in Tech.
Iansiti M. (1995), Technology Development and Integration: An Empirical Study of the Interaction
Between Applied Science and Product Development. IEEE Transactions on Engineering
Management, Vol. 42.
Jovanovic B., Rousseau P. (2005) General Purpose Technologies. Elsevier, Handbook of Economic
Growth, Vol. 1B.
Kinni T. (2016), Is it Time to Build Your Own Platform? MIT Sloan Management Review.
Klein H. (2002), The Social Construction of Technology: Structural Consideration. MIT Press.
Koberg C., Detienne D., Heppard K. (2003), An Empirical Test of Environmental, Organisational
and Process Factors Affecting Incremental and Radical Innovation. Journal of High Technology
Management Research 14, 21-45.
Kokshagina O., Gillier T., Cogez P., Le Masson P., Weil B. (2016), Using Innovation Contests to
Promote the Development of Generic Technologies. Elsevier.
Kootstra G. (2009), The Incorporation of Design Management in Today’s Business Practices. Centre
for Brand, Reputation and Design Management, Rotterdam The Netherlands.
Kostoff R., Schaller R. (2001), Science and Technology Roadmaps. IEEE Transactions on
Engineering Management, Vol. 48.
Krippendorff K. (1989), On the Essential Contexts of Artifacts or on the Proposition that “Design is
Making Sense (of Things)”. MIT, Design Issues, Vol. 5, 9-38.
Leifer R., McDermott C., O'Connor G. (2000), Radical Innovation: How Mature Companies Can
Outsmart Upstarts. Harvard Business Review.
Leonard-Barton D. (1988), Implementation as a Mutual Adaptation of Technology and Organization.
Harvard Graduate School of Business.
Magistretti S., Dell’Era C., Verganti R. (2017), Technology Steering: Managing Technology
Development to Unveil the Quiescent Meanings. Politecnico di Milano.
Markard J., Truffer B. (2008), Technological Innovation Systems and the Multi-Level Perspective:
Towards an Integrated Framework. Elsevier, Res Policy.
Martino J. (2002), A Review of Selected Recent Advances in Technological Forecasting. Elsevier,
Technological Forecasting and Social Change 70, 719-733.
Morillo M., Dell’Era C., Pisanelli P., Verganti R. (2015), Envisioning New Futures by Discovering
Quiescent Meanings in Technologies. Politecnico di Milano.
Moriwaki N., Akitomi T., Kudo F., Mine R. (2016) Achieving General Purpose AI that Can Learn
and Make Decisions for Itself. Hitachi Review, Vol. 65.
Morrison G.R., Ross S., Kemp J. (2004) Designing Effective Instruction. John Wiley & Sons.
Muegge S. (2013), Platforms, Communities, and Business Ecosystems: Lessons Learned about
Technology Entrepreneurship in an Interconnected World. The Journal of Technology Innovation
Management.
Mumford E. (2006), The Story of Socio-Technical Design: Reflections on its Successes, Failures and
Potential. Information System Journal 16, 317-342.
Norman D., Verganti R. (2013), Incremental and Radical Innovation: Design Research vs.
Technology and Meaning Change. MIT, Design Issues, Vol. 30.
Oudshoorn N., Pinch T. (2005), How Users and Non-Users Matter. MIT Sloan Management Review.
Peltoniemi M., Vuori E. (2006), Business Ecosystem as the New Approach to Complex Adaptive
Business Environment. Tampere University of Technology.
Pisanelli P., Dell’Era C. (2015), Discovering Quiescent Meanings in Technologies: Exploring the
Design Management Practices that Support the Development of Technology Epiphany. Politecnico
di Milano.
Porter A. (2004), Technology Futures Analysis: Toward Integration of the Field and New Methods.
Elsevier, Technological Forecasting and Social Change 71, 287-303.
Power B. (2014), How Watson Changed IBM. Harvard Business Review.
Pyle D., San José C. (2015), An Executive’s Guide to Machine Learning. McKinsey Quarterly.
Qiu R., Cantwell J. (2015), Revisit the Classification of General Purpose Technologies (GPTs) in
Corporate Innovation Research Using Patent and Patent Citation Data. Journal of International
Technology and Information Management.
Quist J., Vergragt P. (2006), Past and Future of Backcasting: The Shift to Stakeholder Participation
and a Proposal for a Methodological Framework. Elsevier, Futures 38, 1027-1045.
Ratcliffe J. (2005), Challenges for Corporate Foresight: Towards Strategic Perspective through
Scenario Thinking. Dublin Institute of Technology.
Reeves M., Deimler M. (2011), Adaptability: The New Competitive Advantage. Harvard Business
Review.
Reeves M., Ueda D., Gerbert P., Dreischmeier R. (2016), The Integrated Strategy Machine: Using
AI to Create Advantage. Bcg.Perspectives.
Shea C. (2011), Nanotechnology as General Purpose Technology: Empirical Evidence and
Implications. Technology Analysis & Strategic Management, Vol. 23.
Simoudis E., Power B. (2016), The 5 Things IBM Needs to Do to Win at AI. Harvard Business Review.
Snow C., Fjeldstad D., Lettl C., Miles R. (2011), Organising Continuous Product Development and
Commercialisation: The Collaborative Community of Firms Model. Journal of Product Innovation
Management.
Thomke S. (1998), Managing Experimentation in the Design of New Products. Management Science
44(6): 743-762.
Thomke S., Von Hippel E., Franke R. (1998), Modes of Experimentation: An Innovation Process-
and Competitive-Variable. Elsevier.
Vecchiato R. (2012), Strategic Foresight and Environmental Uncertainty: A Research Agenda.
Foresight, Vol. 14.
Verganti R. (2003), Design as Brokering of Languages: The Role of Designers in the Innovation
Strategy of Italian Firms. Design Management Journal, Vol. 3, 34-42.
Verganti R. (2006), Innovating Through Design. Harvard Business Review, December, 114-122.
Verganti R. (2008), Design, Meanings and Radical Innovation: A Metamodel and a Research
Agenda. Journal of Product Innovation Management.
Verganti R. (2009), Design Driven Innovation – Changing the Rules of Competition by Radically
Innovating What Things Mean. Harvard Business Press.
Verganti R., Oberg A. (2012), Interpreting and Envisioning – A Hermeneutic Framework to Look at
Radical Innovation of Meanings. Elsevier, Industrial Marketing Management.
Youtie J., Iacopetta M., Graham S. (2007), Assessing the Nature of Nanotechnology: Can We Uncover
an Emerging General Purpose Technology? Springer, Journal of Technology Transfer 33, 315-329.
Zurlo F., Cagliano R., Simonelli G., Verganti R. (2002), Innovare con il Design: Il Caso
dell'Illuminazione in Italia. Il Sole 24 Ore.
Sitography
• IBM
o http://www.ibm.com
o http://www.ibm.it
o https://en.wikipedia.org/wiki/IBM
o http://apennings.com/how-it-came-to-rule-the-world/microsoft-and-the-ibm-pc-case-study-the-deal-of-the-century/
• IBM Watson Activities
o Research
▪ http://research.ibm.com/
▪ www.ibmthinklab.com
▪ http://researcher.watson.ibm.com/researcher/view_group_people.php?grp=2099
o Business Partner
▪ https://www-03.ibm.com/press/us/en/pressrelease/33726.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/35402.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/37029.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/40233.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/40335.wss
▪ http://www-03.ibm.com/press/us/en/pressrelease/41122.wss
▪ https://www.ibm.com/blogs/think/2015/10/whats-next-for-cognitive-computing-personalized-health-on-an-apple-watch/
▪ https://developer.ibm.com/iotplatform/2014/04/09/hello-world/
▪ http://www.genesys.com/about/newsroom/news/ibm-watson-and-genesys-partner-to-power-smarter-customer-experiences
▪ http://www.rollingstone.com/features/ibm-alex-da-kid-talk-collaboration-and-not-easy
▪ https://www-03.ibm.com/press/us/en/pressrelease/47184.wss
▪ https://www.ice.edu/press/press-releases/ibm-and-the-institute-of-culinary-education-publis
▪ https://www-03.ibm.com/press/us/en/pressrelease/44697.wss
▪ http://www-03.ibm.com/press/it/it/pressrelease/45177.ws
▪ https://www-03.ibm.com/press/us/en/pressrelease/45278.wss
▪ https://techcrunch.com/2014/10/29/twitter-partners-with-ibm-to-bring-social-data-to-the-enterprise/
▪ http://www.lifelearn.com/2014/10/06/ibm-watson-lifelearn-sofie/?cm_mc_uid=17849747610514825005839&cm_mc_sid_50200000=1507291135
▪ http://www.reflexisinc.com/ibm-reflexis-tap-power-watson-transform-retail/
▪ https://www-03.ibm.com/press/uk/en/pressrelease/45447.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/45424.wss
▪ https://developer.ibm.com/watson/blog/2015/06/10/disruptive-cognitive-applications-developed-in-48-hours/
▪ https://www.ibm.com/blogs/watson/2016/07/omniearth-uses-ibm-watson-combat-california-drought/
▪ https://www-03.ibm.com/press/us/en/pressrelease/46753.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/45861.wss
▪ https://www.youtube.com/watch?v=Ld5u8y1mgIc
▪ http://www.citigroup.com/citi/news/2015/150224b.htm
▪ https://www-03.ibm.com/press/us/en/pressrelease/46045.wss
▪ https://edtechmagazine.com/higher/article/2015/04/ibm-takes-watson-university-competition-overseas
▪ https://blog.ibm.jobs/2015/03/31/watson-first-international-university-competition/
▪ https://developer.ibm.com/watson/blog/2015/05/07/ibm-watson-hackathon-winners-announced/
▪ https://www.ibm.com/communities/analytics/watson-analytics-blog/watson-analytics-polar-opposites-use-case-insights-from-marine-animal-data/
▪ https://www-03.ibm.com/press/us/en/pressrelease/47632.wss
▪ http://hitconsultant.net/2015/11/12/welltok-rolls-out-watson-powered-cafewell-platform/
▪ https://www-03.ibm.com/press/us/en/pressrelease/47957.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/48109.wss
▪ http://www-03.ibm.com/press/us/en/pressrelease/48255.wss
▪ https://www.ibm.com/watson/advantage-reports/ai-social-good-social-services.html
▪ https://www-03.ibm.com/press/us/en/pressrelease/48764.wss
▪ http://www.corriere.it/tecnologia/cyber-cultura/cards/giocattolo-che-cresce-bambini-tutte-magie-watson-super-computer-ibm/ross-avvocato-canadese.shtml
▪ http://qb3at953.com/media/coverage-startup/ibm-watson-adds-tiatros-patient-centered-social-network-to-system/
▪ http://www-03.ibm.com/press/us/en/pressrelease/49355.wss
▪ http://www-03.ibm.com/press/it/it/pressrelease/49301.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/49307.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/49527.wss
▪ https://www.slideshare.net/dominopoint/ortocloud-lapplicazione-per-fare-orto-su-bluemix
▪ http://www.telecomitalia.com/tit/it/innovazione/news-mondo-innovazione/cognitive-computing-era.html
▪ http://ecc.ibm.com/case-study/us-en/ECCF-ASC12405USEN
▪ https://www.ibm.com/blogs/research/2016/06/artificial-intelligence-driven-discovery-chemical-synthesis/
▪ https://www.linkedin.com/pulse/climate-change-environment-impacts-using-ibm-watson-brown-pmp/
▪ https://developer.ibm.com/tv/hail-damage-insurance-analytics-with-drones-ibm-watson/
▪ https://www.ibm.com/internet-of-things/partners/capgemini/
▪ https://www.ibm.com/blogs/think/2016/08/cognitive-movie-trailer/
▪ https://www.ibm.com/blogs/think/2016/08/cognitive-fashion/
▪ https://www.ibm.com/watson/stories/ejgallo-with-watson.html
▪ https://techpoint.ng/2016/09/08/cognihack-lagos-2016/
▪ https://www-03.ibm.com/press/us/en/pressrelease/50842.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/50688.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/50838.wss
▪ https://www.ibm.com/blogs/watson/2016/11/watson-developers-driving-force-transforming-industries-society/
▪ https://www.computerworlduk.com/data/thomsons-creates-travel-inspiration-chatbot-with-ibm-watson-3649635/
▪ http://fortune.com/2017/01/06/japan-artificial-intelligence-insurance-company/
▪ http://www.4-traders.com/INTERNATIONAL-BUSINESS-MA-4828/news/International-Business-Machines-Algar-Telecom-implements-IBM-Watson-for-customer-service-23811521/
▪ https://rocketfuel.com/rocket-fuel-expands-partnership-with-ibm/
▪ https://thenextweb.com/artificial-intelligence/2017/06/29/ibm-watson-supercomputer-ai-film-story/
▪ http://ecc.ibm.com/case-study/us-en/ECCF-WUC12574USEN
▪ https://www-03.ibm.com/press/us/en/pressrelease/52120.wss
▪ https://www.ibm.com/social-business/us-en/announce/ibm-cisco/
▪ https://techcrunch.com/2017/05/14/robowaiter-wants-to-make-american-restaurants-great-again-with-robots/
▪ http://www-03.ibm.com/press/us/en/pressrelease/52405.wss
▪ http://www-03.ibm.com/press/us/en/pressrelease/52605.wss
▪ http://www-03.ibm.com/press/us/en/pressrelease/52603.wss
▪ https://www-03.ibm.com/press/us/en/pressrelease/52530.wss
▪ https://www.ibm.com/blogs/watson/2017/07/ibm-watsons-ai-is-powering-wimbledon-highlights-analytics-and-a-fan-experiences/
▪ https://www.ibm.com/blogs/watson/2017/09/hiring-heroes-woodside-energy-works-ibm-watson/
▪ https://mediacenter.ibm.com/media/DroTek+revolutionizes+the+agriculture+industry+with+cognitive+solutions+from+IBM/0_fl3sxy5p
o University Program
▪ http://www.tgmcindia.com/
▪ https://www.nyu.edu/about/news-publications/news/2013/october/nyu-teams-up-with-ibm-other-universities-to-advance-research-in-cognitive-systems.html
▪ https://www.computerworlduk.com/it-vendors/imperial-college-london-using-ibm-watson-predict-crime-3585241/
▪ http://www.counciloftextileandfashion.com/council-of-textile-fashion-blog/2016/04/04/tfia-member-spotlight-the-academy-of-design-australia
▪ http://mashable.com/2017/06/07/ibm-science-for-social-good/#OxhdcVnxLSqc
o Ecosystems
▪ http://www-03.ibm.com/press/us/en/pressrelease/43309.wss
▪ http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ST&infotype=SA&htmlfid=WW912345USEN&attachment=WW912345USEN.PDF
▪ http://uk.businessinsider.com/iot-ecosystem-internet-of-things-forecasts-and-business-opportunities-2016-2?r=US&IR=T
▪ http://www.forbes.com/sites/janakirammsv/2016/10/04/ibm-aims-to-bring-cognitive-computing-closer-to-internet-of-things/#4f5d889276ed
▪ http://www-03.ibm.com/press/us/en/pressrelease/44057.wss
▪ https://cognitoys.com/
o Press Conference
▪ http://www.bizjournals.com/newyork/news/2015/05/05/ibm-world-of-watson-conference-hackathon.html
▪ http://www.computerworld.com/article/3135852/artificial-intelligence/ibm-in-5-years-watson-ai-will-be-behind-your-every-decision.html
▪ http://www-01.ibm.com/software/events/wow/registration/
o Challenges
▪ https://www.ibm.com/blogs/watson/2016/03/world-of-watson-2016-hackathon-now-open/
▪ https://developer.ibm.com/watson/blog/2015/05/07/ibm-watson-hackathon-winners-announced/
▪ https://ibmwatsonhackathon.devpost.com/submissions?cm_mc_uid=42947421067914765260948&cm_mc_sid_50200000=1477749089
▪ http://www.ibm.com/analytics/insight/
▪ https://www.dal.ca/news/2016/06/30/mba-team-makes-top-three-in-the-world-in-ibm-competition.html
▪ https://www.ibm.com/blogs/watson/2016/05/3-characteristics-winning-hackathon-applications/
▪ http://www.h2ohackathon.org/459-2/
▪ https://www.ibm.com/blogs/bluemix/2015/05/bluemix-hackathon-florida-international-university/
▪ https://developer.ibm.com/tv/presentations-winners-industrial-iot-hackathon/
▪ https://www.ibm.com/blogs/watson/2016/10/building-18-powerful-bots-24-hours-ibm-watson-hackathon/
▪ https://medium.com/cognitivebusiness/cognitive-build-hits-world-of-watson-7bbc1b6a34ed
▪ https://www-03.ibm.com/press/uk/en/pressrelease/49971.wss
▪ https://news.bitcoin.com/hackathon-blockchain-energy-solutions/
▪ https://devpost.com/software/suicide-king
▪ https://www.ibm.com/blogs/bluemix/2017/02/announcing-winners-future-finance-challenge/
▪ https://medium.com/@jeancarlbisson/inside-identifyai-winner-of-best-use-of-ibm-watson-at-at-t-shape-hackathon-8e14373ffdb8
▪ https://www.ibm.com/blogs/watson-talent/2017/10/congratulations-2017-candidate-experience-awards-winners/
o Watson Healthcare
▪ http://www-03.ibm.com/press/us/en/pressrelease/49132.wss
▪ http://fortune.com/ibm-watson-health-business-strategy/
▪ http://fortune.com/2016/02/18/ibm-truven-health-acquisition/
▪ https://www-03.ibm.com/press/us/en/pressrelease/51777.wss
o Education
▪ https://ibm.biz/IBMThinkAcademy
▪ http://watsondesign.guide/
▪ http://www.ibm.com/developerworks/?lnk=mdev_dw&lnk2=learn
▪ https://www.ibm.com/marketplace/learning-lab/us/en-us
▪ https://www-03.ibm.com/press/us/en/pressrelease/48443.wss
o Geographical Expansion
▪ http://fortune.com/2015/07/14/ibm-watson-home-middle-east/
▪ https://www.crunchbase.com/product/ibm-watson/timeline#/timeline/index
▪ https://www.ibm.com/think/marketing/how-watson-learns/
▪ http://www-03.ibm.com/press/us/en/pressrelease/46045.wss
▪ https://www.ibm.com/blogs/watson/2016/05/watson-learns-understand-korean-life-language/
o Additional Initiatives
▪ https://www.wired.com/2015/06/hungry-let-supercomputer-chef-watson-tell-cook/
▪ https://www.ibm.com/blogs/watson/2016/01/chef-watson-has-arrived-and-is-ready-to-help-you-cook/
▪ http://www.businessinsider.com/sc/ibm-dress-met-gala-2016-5?IR=T
▪ http://www.ibm.com/watson/music/?cm_mmc=Earned-_-9.1+MO+Mktg+Plan+Unknown_CA+Brand+Initiatives-_-WW_WW-_-cognitive+music+vanity&cm_mmca1=000005IT&cm_mmca2=10002562&
▪ http://www.ibm.com/cognitive/uk-en/outthink/stories/cognitive-sports/
▪ http://ecc.ibm.com/case-study/us-en/ECCF-WWC12371USEN
▪ http://www.businessinsider.com/sc/watson-improves-us-open-fan-experience-2016-9?IR=T
• IBM Future Challenges
o http://www.wsj.com/news/articles/SB10001424052702303754404579308981809586194
o http://fortune.com/ibm-watson-health-business-strategy/