This research note is restricted to the personal use of [email protected]

G00264126

Hype Cycle for Emerging Technologies, 2014

Published: 28 July 2014

Analyst(s): Hung LeHong, Jackie Fenn, Rand Leeb-du Toit

This Hype Cycle brings together the most significant technologies from across Gartner's research areas. It provides insight into emerging technologies that have broad, cross-industry relevance, and are transformational and high-impact in potential.

Table of Contents

Analysis.......................................................................................................................................3
    What You Need to Know...........................................................................................................3
    The Hype Cycle........................................................................................................................4
        New on the 2014 Hype Cycle for Emerging Technologies....................................................7
        Major Changes.....................................................................................................................7
    The Priority Matrix....................................................................................................................9
    Off the Hype Cycle.................................................................................................................10
    On the Rise............................................................................................................................11
        Bioacoustic Sensing...........................................................................................................11
        Digital Security...................................................................................................................12
        Virtual Personal Assistants.................................................................................................14
        Smart Workspace...............................................................................................................16
        Connected Home...............................................................................................................17
        Quantified Self...................................................................................................................19
        Brain-Computer Interface...................................................................................................21
        Human Augmentation........................................................................................................22
        Quantum Computing..........................................................................................................24
        Software-Defined Anything.................................................................................................26
        Volumetric and Holographic Displays.................................................................................28
        3D Bioprinting Systems......................................................................................................30
        Smart Robots.....................................................................................................................32
        Affective Computing...........................................................................................................33
Figure 1. The Journey to Digital Business............................................................................................... 5
Figure 2. Hype Cycle for Emerging Technologies, 2014..........................................................................8
Figure 3. Priority Matrix for Emerging Technologies, 2014.................................................................... 10
Figure 4. Hype Cycle for Emerging Technologies, 2013........................................................................95
Analysis
What You Need to Know
This is the 20th anniversary of the Gartner Hype Cycle. The Emerging Technologies Hype Cycle was
the first Hype Cycle. It is now complemented by more than 120 Hype Cycles. As in other years, the
Hype Cycle for Emerging Technologies contains a representative set of technologies that attract
strong interest from our clients, along with technologies that Gartner feels are significant and should be
monitored. This Hype Cycle targets business strategists, chief innovation officers, R&D leaders,
entrepreneurs, global market developers and emerging technology teams by highlighting a set of
technologies that will have a broad-ranging impact across the enterprise. It is the broadest
aggregate Gartner Hype Cycle, selecting from the more than 2,000 technologies featured in
"Gartner's Hype Cycle Special Report for 2014." For information on interpreting and using Gartner's
Hype Cycles, see "Understanding Gartner's Hype Cycles."
Gartner recommends that enterprises do at least an annual scan of the technologies on this Hype
Cycle to assess whether each technology could deliver significant value to customers or the enterprise. As always, the scanning exercise should be extended to understand how others in your industry
may leverage these technologies. This year, we encourage enterprises to scan beyond the bounds
of their industry. One of the more prominent parts of a digital business strategy is the competitive
opportunity/threat section that identifies how industry dynamics and competition may change
because of digital technologies. For example, the popularity of wearables is forcing convergence in
the areas of health, fitness and consumer electronics. What were traditionally sporting equipment
Gartner, Inc. | G00264126 Page 3 of 98
Since the Hype Cycle for Emerging Technologies is purposely focused on more emerging
technologies, it mostly supports the last three of these stages: digital marketing, digital business
and autonomous. Let's take a look at each of these three stages in detail, and the corresponding
technologies:
■ Digital Marketing (Stage 4): The digital marketing stage sees the emergence of the Nexus of
Forces (mobile, social, cloud and information). Enterprises in this stage focus on new and more
sophisticated ways to reach consumers, who are more willing to participate in marketing efforts
to gain greater social connection, or product and service value. Buyers of products and services
have more brand influence than previously, and they see their mobile devices and social
networks as preferred gateways; enterprises at this stage grapple with tapping into buyer
influence to grow their business. Enterprises that are seeking to reach this stage should
consider the following technologies on the Hype Cycle:
■ Software-defined anything, volumetric and holographic displays, neurobusiness, data
science, prescriptive analytics, complex-event processing, big data, in-memory DBMS,
■ Digital Business (Stage 5): Digital business is the first postnexus stage on the road map and
focuses on the convergence of people, business and things. The Internet of Things (IoT) and the
concept of blurring the physical and virtual worlds are strong concepts in this stage. Physical
assets become digitalized and become equal actors in the business value chain alongside
already-digital entities such as systems and apps. 3D printing takes the digitalization of physical
items further and provides opportunities for disruptive change in the supply chain and
manufacturing. The ability to digitalize attributes of people (for example, their health vital signs) is
also part of this stage. Even currency (which is often thought of as digital already) can be
transformed (for example, cryptocurrencies). Enterprises seeking to go past the Nexus of
Forces technologies to become a digital business should look to these additional technologies:
■ Bioacoustic sensing, digital security, smart workspace, connected home, 3D bioprinting
systems, affective computing, speech-to-speech translation, Internet of Things,
cryptocurrencies, wearable user interfaces, consumer 3D printing, machine-to-machine
communication services, mobile health monitoring, enterprise 3D printing, 3D scanners and
consumer telematics
■ Autonomous (Stage 6): Autonomous represents the final postnexus stage. This stage is
defined by an enterprise's ability to leverage technologies that provide humanlike or human-
replacing capabilities. Using autonomous vehicles to move people or products, and using
cognitive systems to write texts or answer customer questions, are all examples that mark the
autonomous stage. Enterprises seeking to reach this stage to gain competitiveness should
consider these technologies on the Hype Cycle:
■ Virtual personal assistants, human augmentation, brain-computer interface, quantum
computing, smart robots, biochips, smart advisors, autonomous vehicles, and natural-
language question and answering
The Priority Matrix
This Hype Cycle has an above-average number of technologies with a benefit rating of
transformational or high. This is a deliberate goal of the selection process. We aim to highlight
technologies that are worth adopting early because of their potentially high impact. However, the
actual benefit often varies significantly across industries. Therefore, planners should ascertain
which opportunities relate most closely to their organizational requirements:
■ Two to five years to mainstream adoption: These technologies are focused on the digital
marketing stage (think Nexus-related) areas such as cloud (cloud computing, hybrid cloud
computing) and information/analytics-related areas (in-memory DBMS and data science). The
only exception is enterprise 3D printing, which is a digital business stage technology.
■ Five to 10 years to mainstream adoption: Here, we find a mix of technologies that span all
three stages on the journey to become a digital business. However, with the exception of big
data and complex-event processing, most of the technologies are centered in the digital
business and autonomous stages (digital security, smart workspace, 3D bioprinting systems,
autonomous vehicles, consumer 3D printing, Internet of Things, cryptocurrencies, machine-to-
machine communication services, smart advisors, virtual personal assistants).
■ More than 10 years to mainstream adoption: Human augmentation is the only technology
area that has been identified in this range. The cultural and ethical acceptance required for
employees, customers and citizens to augment themselves will cause this area to take many
years to reach mainstream adoption.
Definition: Bioacoustic sensing captures natural acoustic conduction properties in the human body
using different sensing technologies. An example of this technology is Skinput, which allows the
skin to be used as a finger input surface. When a finger taps on the skin, the impact creates acoustic
signals that are captured by a bioacoustic sensing device. Variations in bone density, size and the
different filtering effects created by soft tissues and joints create distinct acoustic locations of
signals, which are sensed, processed and classified by software.
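The sense, process and classify pipeline described above can be sketched roughly as follows. This is an illustrative stand-in, not the actual Skinput implementation: the band-energy features, the training data and the nearest-centroid matching are all assumptions made for the sketch.

```python
# Hypothetical sketch of a bioacoustic tap-location classifier: acoustic
# features from a skin tap are matched against per-location profiles.
# Features and training data are invented; real systems use richer models.
import math

def band_energies(signal, bands=4):
    """Split a sampled waveform into equal time bands and return the
    average energy of each band as a crude feature vector."""
    n = len(signal) // bands
    return [sum(x * x for x in signal[i * n:(i + 1) * n]) / n
            for i in range(bands)]

def train_centroids(labeled_features):
    """Average the feature vectors observed for each tap location."""
    centroids = {}
    for label, feats in labeled_features.items():
        dims = len(feats[0])
        centroids[label] = [sum(f[d] for f in feats) / len(feats)
                            for d in range(dims)]
    return centroids

def classify(features, centroids):
    """Return the tap location whose centroid is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))
```

In use, a handful of labeled taps per location would train the centroids, after which new feature vectors are assigned to the nearest location, which is the same template-matching intuition behind distinguishing, say, a wrist tap from a fingertip tap.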
Position and Adoption Speed Justification: This technology is being developed by researchers
from Microsoft and the Human-Computer Interaction Institute of Carnegie Mellon University in
Pittsburgh. In a prototype system, researchers focused on touch inputs on the arm and hand, and
created an armband device for sensing. They evaluated different input locations, such as the
fingertips and along the forearm.
The technology can also be integrated to augment the experience with a pico projector that projects
dynamic graphical interfaces onto the hand or forearm. For example, a telephone keypad can be
projected onto the palm of the hand, allowing real-time dialing without the use of a mobile phone.
Researchers have also developed a scrolling interface for projection onto the forearm. Users tap the
top or bottom of the UI to scroll up or down, or go back one level in the UI hierarchy. Users can
also perform a simple pinching gesture with their thumb and fingers. Accuracy of 95.5% for five input
locations on the whole arm has been demonstrated.
The technology is in the early stages of development, and future efforts will need to improve on the
noninvasiveness of wearable bioacoustic sensor devices. Additionally, the disturbance from
acoustic signals coming from other motions of the body will need to be reduced, particularly in
walking or running scenarios (such as operating an MP3 player while jogging and using Skinput).
The input method is limited to quick skin taps, which in its current form does not permit more
elaborate common gestures like sliding or dragging. Additionally, body mass index fluctuations can
decrease sensing accuracy, and there is a high learning curve in setting up the solution.
services, embedded software and systems, operational technologies, and the Internet of Things
(IoT). Digital security technology is the convergence of information security, IT security, operational
technology (OT) security, IoT security and physical security technologies. It is the result of digital
impacts on security and risk organizations, and on process and technology architecture, and is the
next stage of enterprise security's evolution. Digital security's mission is to mitigate digital risk. The
evolving role of digital business in the enterprise places digital security at the post-Technology
Trigger phase of the Hype Cycle.
Cybersecurity awareness is growing with business leaders and is increasingly considered a required
part of new and existing business designs. Cybersecurity designs involve assets in the physical
world (OT and IoT) connected to new, nontraditional partners beyond the enterprise, adding a level
of technology that creates peer-to-peer relationships among businesses, people and things. Digital
security aims to protect all assets in this new environment and ensures that relationships between
those assets can be trusted. Digital security expands present-day risk and security management
practices. It includes, but is not limited to, cybersecurity practices, and incorporates services from
outside of the business. Digital security is the means business leaders can use to leverage
cybersecurity to its full business advantage and helps extend security leaders' roles in becoming full
business partners.
User Advice: CIOs and enterprise architects should accelerate their efforts to become relevant in
organizations' business plans involving OT and IoT, and should align resources and processes to
foster integrated collaboration with security architecture, planning, management and operations.
Product managers should pursue new partners in security technologies and services to ensure that
business efforts to embrace OT and IoT assets will be accommodated. Strategic planners should
expand their knowledge and awareness of industrial automation and control, physical security, and
embedded system designs to accommodate long-term planning for digital security architecture.
Information security managers should establish organizational responsibilities for selected team
members to coordinate with OT counterparts and business managers to embrace the IoT in their
initiatives. Those managers must reshape enterprise security practices to be more inclusive and
collaborative across business disciplines that include industrial, commercial and consumer
enterprises. Information security managers can ultimately transform themselves into digital security
managers as their responsibilities expand into the digital business.
Business Impact: Digital security will reshape information security, IT security, OT security,
physical security, and related security processes and organizations, and will allow security leaders
to better relate to business processes. This will occur in the following areas:
■ Business scenario planning: Digital security will now be part of the business initiative planning
cycle to counter the expanding complexity of multiple asset relationships, multiple partners and
providers, and technology combinations.
■ Restructuring due to merger, acquisition or divestiture: Digital security practices will offset the
inertia often experienced due to conflicting IT/OT/IoT requirements in different companies by
delivering a security architecture and design layer more adaptive to such changes. This will not
be an overnight realization, but will evolve as digital security maturity improves.
■ Supply chain security: Digital security will be a means to coordinate and enforce security
practices across supply chain partnerships, including those that use cloud-based services to
deliver business solutions. The enforcement will be driven by specific business mandates
relative to trust with those relationships.
■ Security management and operations: Digital security teams will provide more direct, relevantdata to business teams involved in applications and services that use digital security
technologies and services as part of their business intelligence efforts. Cloud-based security
services will transform digital security practices by leveraging scale and capability in coverage.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Recommended Reading: "Agenda Overview for Digital Business, 2014"
"Digital Business Forever Changes How Risk and Security Deliver Value"
"Digital Business: 10 Ways Technology Will Disrupt Existing Business Practices"
Virtual Personal Assistants
Analysis By: Tom Austin; Brian Manusama; Kenneth F. Brant
Definition: A virtual personal assistant (VPA) performs some of the functions of a human personal
assistant. It observes its user's behavior, and builds and maintains data models, with which it draws
inferences about people, content and contexts. It does so to predict its user's behavior and needs,
build trust and, eventually, with permission, act autonomously on its user's behalf. It makes
everyday tasks easier (by prioritizing emails, for example) and its user generally more effective (by
highlighting the most important content and interactions).
Position and Adoption Speed Justification: VPAs represent a "perfect storm": a compelling
vision, a great leap forward in technology, plentiful supply, and significant demand driven by
transformational benefits.
Vision:
Apple's 1987 video "Knowledge Navigator" envisions a VPA.
The head of Microsoft's artificial intelligence (AI) research provides more recent examples in the
video "Making Friends With Artificial Intelligence: Eric Horvitz at TEDxAustin."
Technology:
There are new and better algorithms (such as deep neural nets), much better hardware, and large
bodies of information (big data) with which to train the systems underlying VPAs.
Supply:
There are already scores of VPA precursors, which lack one or more of the defining characteristics
of VPAs. Precursors include virtual assistants in customer service applications (such as Nuance's
Nina), conversational agents (such as Apple's Siri), and contextually aware proactive search
features (such as those emerging in Google Now).
Google's Gmail Priority Inbox, introduced in 2010, is a narrow-scope VPA that organizes the user's
email based on analysis of past behavior and content. Microsoft and IBM are expected to introduce
similar capabilities by the end of 2014. Microsoft's new Outlook feature is code-named "Clutter,"
while IBM's first VPA will appear in "Mail Next."
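The behavior-modeling idea behind a narrow-scope VPA such as a priority inbox can be sketched roughly as below. The reply-rate heuristic, the smoothing constants and the class design are invented for illustration; they are not how Gmail Priority Inbox, Clutter or Mail Next actually work.

```python
# Illustrative sketch of a narrow-scope VPA: rank incoming mail using a
# model of the user's past reply behavior. All weights are assumptions.
from collections import Counter

class PriorityInbox:
    def __init__(self):
        self.replies = Counter()   # how often the user replied to each sender
        self.received = Counter()  # how often each sender appears

    def observe(self, sender, replied):
        """Record one past interaction with a sender."""
        self.received[sender] += 1
        if replied:
            self.replies[sender] += 1

    def score(self, sender):
        """Estimated reply rate, smoothed so unseen senders get a low prior."""
        return (self.replies[sender] + 1) / (self.received[sender] + 4)

    def rank(self, senders):
        """Order incoming mail, most likely to matter first."""
        return sorted(senders, key=self.score, reverse=True)
```

The point of the sketch is the VPA definition given earlier: observe behavior, maintain a data model, and use it to draw inferences that make an everyday task (triaging email) easier.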
We predict that Google, Microsoft and IBM will introduce more fully featured, opt-in VPAs in their
cloud office systems in 2015 and 2016. At the Google I/O conference in 2013, Google outlined its
"Knowledge Graph" efforts. At its SharePoint Conference 2014, Microsoft described its "Office
Graph" and a client code-named "Oslo." Both look like strong precursors to more fully featured,
conversational, opt-in VPAs that are likely to appear in the medium term. IBM has yet to reveal its
plans, but we expect it to have a lot to offer in the same time frame.
Venture capital investments in AI-related businesses are booming, and many startups are being
acquired very early on, leading us to believe that there will be no shortage of supply of VPAs (or
their subsystems and precursors).
Demand:
Initial demand for VPAs is likely to be driven by individual "bring your own" experiments, followed by
more serious investigations by enterprises into whether VPAs can deliver a transformative
advantage. Since the late 20th century, most progress in end-user-facing ad hoc tools has been
disappointing, due to a lack of compelling new user benefits. VPAs may be the first new technology
this century to present a real justification for investing ahead of everyone else (see "The IT Role in
Helping High-Impact Performers Thrive").
This will not be a "winner takes all" segment. There will be many different VPAs for individuals and
enterprises to consider. Individuals may use several VPAs with different specializations, such as
health-related VPAs to help with diet, exercise, the quantified self, relationships and psychological
wellbeing; VPAs to serve as personal shoppers; personal-career development and financial-
management VPAs; and others for office-specific tasks like calendar management, email handling
and external information monitoring.
User Advice: IT leaders should:
■ Encourage experimentation, while creating opportunities for employees to share experiences
and recommendations. Lead by doing.
■ Prepare for mail-centered VPAs first, followed by a blossoming of the full range of capabilities.
■ Recognize that privacy, security and innovation are at odds. Watch cautiously while
encouraging experimentation. Imposing too many controls too soon due to a lack of trust in
your employees could eliminate the opportunity to outflank competitors. Equally, though,
granting your employees too much trust could be self-defeating, unless you keep careful watch.
■ Carefully measure the impact of VPAs on people's behavior and performance. Use an ever-evolving set of metrics, identified by observation and crowdsourcing.
Business Impact: VPAs have the potential to transform the nature of work and the structure of the
workplace. They could upset career structures and enhance workers' performance. But they have
challenges to overcome beyond simply moving from research labs to product portfolios. It is far too
early to determine whether, or how, they will overcome privacy concerns (although opt-in
requirements make sense). Individuals will think long and hard about what they want each VPA to
see and who else might view that information. Similarly, enterprises will be concerned about
employees exposing confidential information via VPAs.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Recommended Reading: "The IT Role in Helping High-Impact Performers Thrive"
"Cool Vendors in Smart Machines, 2014"
"Top 10 Strategic Technologies — The Rise of Smart Machines"
"The Disruptive Era of Smart Machines Is Upon Us"
"Market Insight: Virtual Assistants Will Make Cognizant Computing Functional and Simplify App
Usage"
Smart Workspace
Analysis By: Mike Gotta; Matthew W. Cain; Tom Austin
Definition: A smart workspace brings embedded programmability to the physical work environment
that surrounds employees, such as meeting rooms, cubicles, in-building open spaces, home offices
or mobile settings, whether they are physically and/or virtually together. In the smart workspace,
"objects" (whiteboards, building interfaces, large digital displays, workstations, mobile devices,
wearable interfaces) participate in work activities via communications features that create a network
of "things," which contextually facilitate people's interactions.
Position and Adoption Speed Justification: The Internet of Things (IoT) has gained enormous
attention because of its potential to merge the physical with the digital, resulting in new business
models. There is growing interest in how the enterprise environment can exploit this physical/digital
Connected Home

Definition: A connected home is networked to enable the interconnection of multiple devices,
services and apps, ranging from communications to entertainment, and healthcare to security.
These services and apps are delivered over multiple interlinked devices, providing a connected
experience for the household and enabling its inhabitants to control and monitor it remotely.
Position and Adoption Speed Justification: The connected home is a concept that overarches
several technologies, devices, applications, services and industries. As such, it is defined in this
technology profile to provide a framework for the Hype Cycle of the same name.
The connected home concept has been around for a while. It has evolved from the "smart home"
idea to a much more complex concept that expands, without being exhaustive, to:
■ Media entertainment
■ Home security
■ Monitoring and automation
■ Health and fitness
■ Education
■ Energy-management products and services
Until recently, aspects of the connected home such as home automation systems or wireless audio
systems were viewed as luxury household items. In the past 12 to 18 months, solutions at mass-
market prices have been introduced, placing the idea of the connected home closer to the average
household budget.
The connected home exists today mostly as silos of services and products, and underlying enabling
technologies that sometimes compete with each other. So far, few companies offer a managed,
integrated connected home experience, and the concept is an increasingly complex one. There is
confusion over terms, and overlap among apps, services, devices and connection methods.
Yet the interconnection of home electronics and devices has been simplified enormously in the past
few years, with content and information being distributed throughout the home via a variety of
devices. This is largely the result of several things, including:
■ The maturity of access technologies (such as broadband, Wi-Fi and 4G)
■ The development and standardization of radio technologies, including low-energy networking
standards (such as Bluetooth LE, ZigBee and Z-Wave), which have allowed low-cost wireless
connectivity to be added to any device in the home
■ The simplification of user interfaces
In recent months, the market has seen the introduction of several initiatives to create true
connected home ecosystems. Many of them are being driven by carrier service providers such as
AT&T, Deutsche Telekom and Telefonica; others by technology providers such as Technicolor,
iConnect and Insteon. Yet some of these solutions are focused more around home automation and
energy management than a full connected home solution. More recently, vendors such as
aggregation, social facilitation, observational learning and individualized coaching. Many different
entities will provide these applications.
Position and Adoption Speed Justification: Analysis of this data allows individuals to gain a better
understanding of their experiences and improve their wellbeing. Integration with social media allows
users to connect with peers, share information, gain community support and learn from others. The
quantified self movement has become a catalyst for the socialization of new types of technology
and behavior. However, we now believe it will take five to 10 years before these are adopted by the
mainstream, due to cultural concerns (surveillance), societal acceptance (etiquette) and business
model fluctuations.
Although there are multiple types of applications, the most successful commercial implementations
can be found in sports, fitness and health. There are thousands of fitness and health-related apps in
smartphone app stores. Although application scenarios are broad, the dominant use case focuses
on motion trackers and vital-sign monitoring (blood pressure and heart rate). However, application
scenarios are expanding into areas such as mood monitoring and food/nutrition.
The breadth of devices itself is evolving rapidly as well. Many objects are being turned into sensor-
based devices, including helmets, sneakers, glasses, watches, clothing and jewelry. The popularity
of these devices and the immaturity of the technology can sometimes cause privacy, stability and
quality issues. Proliferation of devices and apps without standards-based interoperability has
created a market opportunity for new entrants to focus on data aggregation and normalization.
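The data aggregation and normalization opportunity just described can be sketched as below. The device names, field layouts and unit conversions are hypothetical; the point is only the mapping of heterogeneous device readings onto one common record format.

```python
# Hedged sketch of quantified-self data normalization: readings from devices
# that report in different shapes and units are mapped to a common record.
# "stepband" and "hrwatch" and their payload fields are invented examples.
from datetime import datetime, timezone

def normalize(reading):
    """Map a device-specific reading dict to a common record:
    (iso_timestamp, metric, value), with heart rate converted to bpm."""
    if reading["device"] == "stepband":   # reports epoch seconds + step count
        ts = datetime.fromtimestamp(reading["epoch"], tz=timezone.utc)
        return (ts.isoformat(), "steps", int(reading["steps"]))
    if reading["device"] == "hrwatch":    # reports ISO time + beats per 10 s
        return (reading["time"], "heart_rate_bpm", reading["beats_10s"] * 6)
    raise ValueError("unknown device: " + reading["device"])

def aggregate(readings):
    """Group normalized values by metric for downstream analysis."""
    totals = {}
    for r in readings:
        _, metric, value = normalize(r)
        totals.setdefault(metric, []).append(value)
    return totals
```

An aggregator built along these lines is what lets a new entrant present one coherent view over devices and apps that otherwise lack standards-based interoperability.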
Quantified self is also beginning to move into the workplace. For example, the inclusion of wearable
devices and self-tracking apps as part of corporate wellness programs is becoming an aspect of
employee engagement and digital workplace initiatives. Strategists are also looking at the potential
of quantified self to improve personal and business productivity.
User Advice: The number and variety of personal devices and self-tracking mobile apps that collect
data and provide feedback to users are increasing. Many different entities such as device makers,
brands, software vendors, health-related firms, and developers of virtual personal assistants and
smart machines will provide these applications.
While a dedicated community of people are interested in quantified self as a life philosophy to
improve their own well-being, there are other populations interested in it to obtain medical insight or
improve more serious health conditions — for themselves or in their caregiver role.
Marketers, innovation teams and community strategists should examine quantified self to help
create a more social and collaborative brand experience, while leveraging personal analytics to
establish greater customer intimacy.
Business Impact: Business strategists should ensure that proper policies and controls are in place
to address user privacy concerns related to sharing personal data gathered via wearable devices,
sensors and mobile apps. Organizations also need to invest in community management processes,
and ensure that the personal participation needs and goals of community members are addressed.
As people connect with peers, build relationships and interact with each other through the use of
wearable devices, sensors and mobile apps, there may be a need for customized applications and
unanticipated integration with other sites or internal systems. There are also behavioral, cultural and
societal factors that come into play that strategists need to address early during design activities.
As more people use mobile and social technologies to collect and assemble data about themselves
and their immediate surroundings, business opportunities emerge to apply insights gained from
personal analytics and community participation to improve brand/customer relationships and product/service innovation. Within the workplace, organizations can create quantified-self incentives or requirements for employees to apply such analytics to measure performance or well-being, or to track employees in hazardous environments for health and safety reasons.
Definition: A brain-computer interface is a type of user interface, whereby the user voluntarily
generates distinct brain patterns that are interpreted by the computer as commands to control an
application or device. The best results are achieved by implanting electrodes into the brain to pick
up signals. Noninvasive techniques are available commercially that use a cap or headband to detect
the signals through external electrodes.
Position and Adoption Speed Justification: Brain-computer interfaces remain at an embryonic
level of maturity, although we continue to advance them slightly along the Hype Cycle to
acknowledge the growing visibility of several game-oriented products (such as those from Emotiv
and NeuroSky) in the emerging field of neurogaming. The major challenge for this technology is
obtaining a sufficient number of distinctly different brain patterns to perform a range of commands
— typically, fewer than five patterns can be distinguished. However, this proves sufficient to play
interactive games and control equipment or even some vehicles. One approach that operates well
within these constraints is to watch for the distinctive brain pattern associated with recognizing a
desired goal — for example, brain-driven typing flashes letters on the screen until the desired letter
is recognized by the user's brain. Further advances are likely to arise from research on activating prosthetic limbs, whereby functional magnetic resonance imaging (fMRI) and other brain-scanning
techniques are being used to identify people's natural brain patterns when performing various
actions (such as closing their hands). fMRI is also proving effective in reading emotions and
determining what type of object a person is looking at or thinking about. The Obama
administration's decade-long Brain Activity Map project will also drive improved interpretation of
brain signals. Several of the commercial systems also recognize facial expressions and eye
movements as additional input.
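The goal-recognition approach described above (brain-driven typing) can be sketched as a selection loop. The detector below is a stand-in stub; a real system would classify an EEG response, such as the P300 event-related potential, rather than compare characters:

```python
import string

def make_stub_detector(intended_letter):
    """Stand-in for an EEG classifier: 'fires' when the flashed letter is the
    one the user is attending to. A real detector would score brain signals."""
    def detect(flashed_letter):
        return flashed_letter == intended_letter
    return detect

def spell_letter(detect, alphabet=string.ascii_uppercase):
    """Flash letters one at a time until the detector reports a recognition
    response, then select that letter."""
    for letter in alphabet:
        if detect(letter):
            return letter
    return None  # no recognition event in this pass

detect = make_stub_detector("G")
selected = spell_letter(detect)
```

The loop makes the key constraint visible: one reliably detectable brain response is enough to select among many options, at the cost of serially flashing candidates.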
Outside of medical uses, such as communication for people with "locked in" syndrome (a condition
in which a patient is aware and awake but cannot move or communicate verbally), other hands-free
approaches, such as speech recognition, gaze tracking or muscle-computer interfaces, offer faster
and more-flexible interaction than brain-computer interfaces. The need to wear a headband to
recognize the signals is also a serious limitation in most consumer or business contexts.
Researchers at Brown University have succeeded in reading brain signals from a low-power
wireless system implanted in animals for more than a year, paving the way for research on human
brain signal implants during the next decade.
User Advice: Treat brain-computer interfaces as a research activity. Some niche gaming and
disability-assistance use cases might become commercially viable for simple controls; however,
these will lack the capabilities needed to generate significant use in mainstream business IT.
Business Impact: Most research is focused on providing severely disabled individuals with the
ability to control their surroundings. Commercialization is centered on novelty game interfaces and
applications that help users become more aware of their own brain state, and thus, they are better
able to relax or focus. As wearable technology becomes more commonplace, applications will benefit from hybrid techniques that combine brain, gaze and muscle tracking to offer hands-free
interaction.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Definition: The field of human augmentation focuses on creating cognitive and physical
improvements as an integral part of the human body. An example is using active control systems to
create limb prosthetics with characteristics that can exceed the highest natural human
performance.
Position and Adoption Speed Justification: Human augmentation moves the world of medicine, wearable devices and implants from techniques to restore normal levels of performance and health
(such as cochlear implants and eye laser surgery) to techniques that take people beyond levels of
human performance currently perceived as "normal." In the broadest sense, technology has long
offered the ability for superhuman performance — from night-vision glasses (or even a simple
flashlight) that help people see in the dark to a financial workstation that lets a trader make split-
second decisions about highly complex data.
Although most techniques and devices are developed to assist people with impaired function,
development of superhuman capabilities has started. Power-assisted exoskeletons provide
increased strength and endurance to soldiers and caregivers. Hearing aids, such as the GN
ReSound LiNX, offer their wearers superior hearing ability through wireless real-time adjustments on a mobile phone app; for example, these may be used to mute music and increase directional focus
in a noisy environment. Researchers are experimenting with creating additional senses for humans,
such as the ability to sense a magnetic field to develop the homing instinct of birds and marine
mammals; and with sensory substitution, such as allowing a blind person to drive a car by
translating visual information into vibrations. Brain stimulation techniques, such as transcranial
direct current stimulation, are proving effective in enhancing concentration and accuracy. To date,
these systems are worn or strapped onto the body, rather than surgically attached or implanted; but
with advances such as thought activation of mechanical limbs, the distinction between "native"
versus augmented capabilities will start to blur.
Increasing specialization and job competition are demanding levels of performance that will drive more people to experiment with enhancing themselves. Augmentation that reliably delivers
moderately improved human capabilities will become a multibillion-dollar market during the next
quarter century. However, the radical nature of the trend will limit it to a small segment of the
population for most of that period. The rate of adoption will vary according to the means of
delivering the augmentation. Drugs are already used extensively for off-label performance
enhancement, such as anabolic steroids for strength and modafinil for alertness and concentration.
Wearable devices are likely to be adopted more rapidly than those involving surgery, although
individuals are already experimenting with implanting technology for purposes such as storage and
listening to music. The huge popularity of cosmetic surgery is an indicator that even surgery is not a
long-term barrier, given the right motivation.
Ethical controversies regarding human augmentation will emerge even before the technology
becomes commonplace. Several states have already passed bills banning employers from requiring
chip implants as a condition of employment. Future legislation will need to tackle topics such as
whether an employer is allowed to prefer a candidate with augmented capabilities over a "natural"
one. Longer term, the potential for genetic and epigenetic manipulation to improve desirable
characteristics will further inflame deep ethical divides.
Position and Adoption Speed Justification: A large number of technologies are being researched
to facilitate quantum computing. These include:
■ Lasers
■ Superconductivity
■ Nuclear magnetic resonance (NMR)
■ Quantum dots
■ Trapped ions
No particular technology has found favor among a majority of researchers, supporting our position
that the topic remains in the relatively early research stage.
Hardware based on these technologies is unconventional, complex and leading-edge, yet most
researchers agree that hardware is not the core problem. Effective quantum computing will require
the development of algorithms (quantum algorithms) that will solve real-world problems while operating in the quantum state. The lack of these algorithms is a significant problem — although a
few have been developed. The output is typically in the form of a probability distribution, requiring
multiple runs to achieve a more accurate result.
One example is Grover's algorithm, designed for searching an unsorted database. Another is Shor's
algorithm, for integer factorization. Many of the research efforts in quantum computing use one of
these algorithms to demonstrate the effectiveness of their solution.
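Grover's algorithm can be simulated classically on a toy scale to show both the amplitude-amplification idea and the probabilistic output mentioned above. This sketch tracks the ideal state vector directly; a real quantum computer would instead sample from the resulting distribution over repeated runs:

```python
import math

def grover_probabilities(n_items, marked, iterations=None):
    """Classically simulate Grover search over n_items entries, one of which
    is marked. Returns the measurement probability for each index."""
    if iterations is None:
        # the optimal iteration count is about (pi/4) * sqrt(N)
        iterations = int(math.pi / 4 * math.sqrt(n_items))
    amps = [1.0 / math.sqrt(n_items)] * n_items  # uniform superposition
    for _ in range(iterations):
        amps[marked] = -amps[marked]             # oracle: flip the marked amplitude
        mean = sum(amps) / n_items               # diffusion: invert about the mean
        amps = [2.0 * mean - a for a in amps]
    return [a * a for a in amps]

probs = grover_probabilities(8, marked=5)
# probs[5] is about 0.945 after only 2 iterations, whereas a classical scan
# of an unsorted list of 8 items needs up to 8 lookups.
```

Measurement still returns the wrong index a few percent of the time, which is why, as noted above, multiple runs are needed for a more accurate result.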
The first execution of Shor's algorithm was carried out in 2001 by IBM and Stanford University.
Since then, the focus has been on increasing the number of qubits available for computation. The
latest published achievement is a factorization of the number 21 at the University of Bristol in 2012. The technique used in that case was to reuse and recycle qubits during the computation process in
order to minimize the required number of qubits. The practical applications indicated by these
examples are clearly very limited in scope, and we expect this situation to continue through the next
10 years or more.
D-Wave Systems has demonstrated various configurations of quantum computers, based on
supercooled chips. These systems focus on the use of quantum techniques for a range of
optimization applications. The technique finds the mathematical minimum in a dataset very quickly.
Lockheed Martin, NASA and Google are making use of D-Wave's products and services for, among
other things, research on machine learning.
To date, D-Wave's demonstrations have involved superposition but have not demonstrated
entanglement in any significant way. Without quantum entanglement, D-Wave computers cannot
attack the major algorithms demonstrated by the smaller quantum computers that do achieve
entanglement.
Most of the research we observe in quantum computers relates to specialized and dedicated
applications. Given the focus and achievements of research in quantum computing, Gartner's view
is that general-purpose quantum computers will never be realized; they will instead be dedicated to
a narrow class of use. This suggests architectures where traditional computers offload specific
calculations to dedicated quantum acceleration engines. A lack of programming tools such as
compilers is another factor that is restricting the broader potential of the technology. Specific
applications include optimization, code breaking, image analysis and encryption.
The technology continues to attract significant funding, and a great deal of research is being carried
out. However, we have not seen any significant progress on the topic over the past year, although
publicity and hype have increased a little.
User Advice: If a quantum computer offering appears, check its usefulness across the range of
applications that you require. It will probably be dedicated to a specific application, and this may be
too narrow to justify a purchase. Check if access is offered as a service. D-Wave has now moved in
this direction, and it may be sufficient at least for occasional computing requirements. Some user
organizations may require internal computing resources, for security or other reasons. In these
cases, use of the computer on a service basis — at least initially — would offer a good foundation
on which to evaluate its capabilities.
Business Impact: Quantum computing could have a huge effect, especially in areas such as
optimization, code breaking, DNA and other forms of molecular modeling, large database access,
encryption, stress analysis for mechanical systems, pattern matching, image analysis and (possibly)
weather forecasting. "Big data" analytics is likely to be a primary driver over the next several years.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: D-Wave Systems; Delft University of Technology; IBM; Stanford University;
University of Bristol; University of Michigan; University of Southern California; Yale University
Software-Defined Anything
Analysis By: Philip Dawson
Definition: Software-defined anything (SDx) is a collective term that encapsulates the growing
market momentum for improved standards for infrastructure programmability and data center
interoperability driven by automation inherent to cloud computing, DevOps and fast infrastructure
provisioning. As a collective, SDx also incorporates various initiatives like OpenStack, OpenFlow, the Open Compute Project and Open Rack, which share similar visions.
Position and Adoption Speed Justification: The trend to use the terminology "software defined"
started with software-defined networking (SDN), which enables a separation of the networking logic
and policies into software, from the individual devices. Because SDN separates the hardware and
software, this potentially decouples the purchasing decision and may allow the adoption of generic
hardware, which would become very disruptive. As SDx matures, the scope to extend this concept
to servers and storage will grow as well. While SDx is cloud-like, it does not generally include
self-selection, metering and chargeback models.
Individual SDx terms range from embryonic to beyond the Peak, although the collective term is
emerging. SDx is achieved through the concept of an infrastructure policy framework and
interoperability through open APIs (although not necessarily standard APIs). Gartner takes this concept one step further: the future of IT infrastructure will be model-based, with business
KPIs (such as throughput, uptime, response time, input/output per second, etc.) driving the
selection of infrastructure to meet the service needs, which in turn fosters repeatable engineering
and a direct connection between business requirements and infrastructure. The goal of SDx is to
abstract conventional, proprietary vendor hardware/software-specific implementations so that users
have less lock-in and more vendor choice over time. Most SD initiatives start life as vendor-led
strategies that encourage the creation of communities around proprietary interfaces, and true
interoperability only follows as the market commoditizes. Relative maturity and the speed of
evolution between different SDx definitions also vary widely.
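The model-based idea, in which business KPIs drive infrastructure selection, can be sketched as a simple policy match. The KPI names, profile names and numbers below are invented for illustration; no particular SDx product works exactly this way:

```python
# Hypothetical infrastructure profiles exposed through an SDx policy framework
# (names and capability figures invented for illustration).
PROFILES = [
    {"name": "commodity-pool", "iops": 5000, "uptime": 99.9, "ms_response": 50},
    {"name": "flash-tier", "iops": 50000, "uptime": 99.95, "ms_response": 10},
    {"name": "ha-cluster", "iops": 20000, "uptime": 99.999, "ms_response": 20},
]

def select_profile(required):
    """Return the first profile whose capabilities meet every business KPI
    in the workload model, or None if nothing satisfies the model."""
    for profile in PROFILES:
        if (profile["iops"] >= required["iops"]
                and profile["uptime"] >= required["uptime"]
                and profile["ms_response"] <= required["ms_response"]):
            return profile["name"]
    return None  # escalate, or provision new capacity

# A workload model: 10,000 IOPS, 99.95% uptime, responses under 25 ms
choice = select_profile({"iops": 10000, "uptime": 99.95, "ms_response": 25})
```

The point of the sketch is the direction of the dependency: the workload's KPIs are declared once, and the framework, not the administrator, picks the infrastructure that satisfies them.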
SDx is seen by vendors as a way of abstracting infrastructure away from the software, management and high availability (HA)/disaster recovery (DR) characteristics of a given workload. Across the
spectrum of SDx definitions, true standards and interoperability are weak, and mechanisms for
defining and policing standards are only slowly emerging. Many vendor differentiation claims focus
on basic infrastructure positioning or, at best, infrastructure and platform delivery. In order to
achieve full potential, SDx messaging that is aimed at transforming hardware deployment must
venture more aggressively into the application and software space. Some SDx definitions are more
naturally suited to workload transformation. OpenStack, for example, defines APIs and functionality
of the infrastructure and is supported by many vendors, thus delivering a standard interoperability
layer that can counter Amazon APIs. An additional benefit of new APIs is that new applications can
be written to bring new value at the automation layer. This will potentially create a whole new
industry segment.
Meanwhile, it is very easy for some vendors to blur the distinction between different SDx definitions,
or between an SDx definition and its "Open" stack counterpart. For instance, SDN and OpenFlow
are closely related initiatives to drive more open networking standards; they are not analogous to
each other. Similarly, OpenStack is an example of an SDx that will be supported by most relevant
vendors, but many of them will seek to create alternative SDx approaches to benefit their own
platform characteristics. Therefore, a vendor may be a nominal member of the OpenStack
community, but in reality prefer its own SDx to OpenStack (to monetize it or enable greater
technology lock-in). SDx definitions may also be thinly disguised variants of existing concepts; for
instance, most of the implementations of software-defined storage (SDS) today are really variants of
the storage resource management (SRM) concept that has been commonplace for a decade.
User Advice: SDx provides a way for vendors to leverage their installed base presence to drive
broader ecosystem acceptance from users and partners in their own domains. In doing so, there is
a danger that this defeats the purpose of greater substitutability and commoditization. Across
domains, standards are patchy (and some will take years to evolve), but SDx represents a powerful
set of trends that will become increasingly tangible over time — especially where they force vendor
collaboration that benefits user choice and heterogeneity. Over time, as SDx matures, the
transparent elements such as LEDs, while some higher-resolution displays use techniques such as
pulsed lasers that are directed by scanning mirrors to create balls of glowing plasma at the location
of each voxel.
Swept and static volumetric displays suffer from the significant dangers of rapidly moving parts or
ionized particles in the vicinity of people, especially because the volumetric nature of the generated image convinces the brain that it is solid and "real" and, therefore, can be touched. In all cases, the
volume of data required to generate a volumetric image is considerable — typically on the order of
1,000 times more to create a 24-bit voxel image (1,024 layers on the z-axis) than the corresponding
2D image. The CPU processing required is greater by a similar factor, compared with creating a 2D image.
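The roughly 1,000-times data figure follows directly from the extra z-axis: a voxel volume is, in effect, the 2D frame replicated across each depth layer. A quick check, with a frame resolution chosen here only for illustration:

```python
width, height = 1920, 1080   # illustrative 2D frame resolution
depth_layers = 1024          # z-axis layers cited in the text
bytes_per_element = 3        # 24-bit color = 3 bytes per pixel or voxel

bytes_2d = width * height * bytes_per_element   # about 6.2 MB per 2D frame
bytes_3d = bytes_2d * depth_layers              # about 6.4 GB per volume
ratio = bytes_3d / bytes_2d                     # 1,024x more data
```

At video rates the gap compounds further, since every one of those volumes must be regenerated each frame.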
Holograms can be deployed as an alternative to a volumetric display, but with a more restricted
viewing angle. It should be noted that the term "holographic display" is frequently (but incorrectly)
applied to any image that creates an appearance of 3D. Some current theatrical and conferencing
displays allow realistic images to appear out of thin air and can, with care, allow individuals to walk
"around" them. However, they are simply 21st-century implementations of the 19th-centuryPepper's ghost illusion, using high-intensity projectors and Mylar display films, and not true
volumetric or holographic displays.
Several companies, including InnoVision Labs, Sony and Realfiction, have demonstrated 3D or holographic images generated from their projectors, but none of these has yet been commercialized.
Competing with volumetric and holographic displays, 3D displays such as those increasingly found
in televisions create a visual impression of depth, but rely on spatially multiplexed images that
deliver different views to each eye and allow the brain to reconstruct a 3D representation. They are
planar displays that simulate depth through visual effects, rather than true volumetric displays that create an image in a display volume with real depth.
User Advice: Outside of specialized areas, where budgets are not significant constraints, this
technology remains firmly in the lab, rather than in commercial applications. Current technologies
limit the size of volumetric space that can be displayed, and the mechanical solutions create
potentially dangerous, rapidly moving parts. Until alternative approaches can be delivered (which
seems unlikely in the near future), volumetric displays will remain an extremely niche product.
Concurrently, the rapid growth and continuing development of 3D televisions in the mainstream
markets threaten to overwhelm the continuing development of volumetric and holographic displays
outside of specialized markets.
Business Impact: General applications are not well-developed for business use. To date, simple marketing applications have been deployed, usually targeted at high-end retail environments, along with some specialized applications for geospatial imaging to enhance 2D maps and for use in architectural rendering. However, most of these needs can be met at much lower cost using other, more-commercialized technologies, such as 3D displays. Potential application areas include medical imaging, consumer entertainment and gaming, and design, but costs will need to fall dramatically before true volumetric displays become viable in these areas.
Benefit Rating: Low
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: HP; Musion; Realfiction; Sony
3D Bioprinting Systems
Analysis By: Vi Shaffer
Definition: 3D bioprinting systems produce tissue and "products" that function like human organs.
The process is directed by medical imaging data and software that specifies the design of living
tissue and organs, plus the printing device to create usable tissue or a functioning human organ
from an individual's own or other cells.
Position and Adoption Speed Justification: This technology profile was previously named 3D
Bioprinting. The change in 2014 to 3D Bioprinting Systems better reflects the nature of this profile.
3D bioprinting for medical patient application is an arena with very complex scientific and adoption
challenges to overcome, and very profound potential impact when they are conquered. We have
nudged this up a bit again, based on another year of tangible breakthroughs. However, it is still
early in the Hype Cycle, requiring substantial further R&D. We combine the tracking of "relatively
easier" tissue delivery for scientific R&D (which has a low barrier for adoption) and "very difficult"
organ generation for human transplantation in this profile. Because of progress toward
commercialization targeted to the pharmaceutical industry, we project Time to Plateau as five to 10
years, noting that development for human transplantation will cover an unpredictable course,
including regulatory and adoption hurdles on top of pure R&D. Players agree that the earlier
("easier") use cases will be coming in the areas of drug testing/screening, and tumor and wound
studies. Thus, research organizations and the pharmaceutical industry are the earliest beneficiaries.
Other early uses of 3D bioprinting have provided interesting life-saving anecdotes — such as custom stents for infants or personalized prosthetics. While important, these are not included as
bioprinting examples in this category.
While significant experimental and scientific informatics hurdles need to be overcome before broad
adoption, even within the earlier R&D/life science market, advances are coming steadily now, from
both academic research centers and commercial companies like Organovo (which calls itself a
"three-dimensional biology company").
So far, 2014 has seen a small amount of progress in this early-stage tissue and organ production effort for patient use. Important milestones in the past year include:
■ The Wyss Institute for Biologically Inspired Engineering at Harvard University announced it had
successfully printed multiple types of cells and blood vessels, a combination it says is
necessary to create more complex tissue. The team addressed this problem by incorporating
blood vessels into a mix of living cells and extracellular matrix.
■ Organovo (still operating with slight revenue and substantial grant funding) has made a series of
announcements about progress in its tissue delivery business, such as that it initiated
contracting for toxicity testing using its 3D human liver tissue for select pharmaceutical
companies for preclinical drug discovery programs. (These are not functioning livers for
transplantation into humans, but are a sign of milestones met in the commercial arena.) The
company also announced new collaborative agreements with the U.S. National Institutes of
Health to develop eye tissues, integrate 3D bioprinting with traditional drug screening
technologies and develop more clinically predictive tissue models using its NovoGen MMX
Bioprinter.
■ A new competitor, Regenovo, exhibited internationally for the first time at the 2014 International
CES. The company was founded to commercialize technology developed out of Zhejiang
University of Science and Technology in Hangzhou, the capital of Zhejiang Province in eastern
China. The company announced it has printed an ear cartilage sample made from real tissue,
and has been dubbed "the Chinese Organovo."
■ Further fueling interest, in October 2013, the EuroStemCell organization, a collaboration of more than 90 European stem cell and regenerative medicine research labs funded by the European Commission's Seventh Framework Programme, held an event on "Opportunities and Challenges in 3D Bioprinting" in Cambridge, England.
User Advice: Life science companies and academic medical centers that lead in the investigation of
such potential breakthroughs will be participating in, or closely following, approaches to tissue
engineering. Although this area falls more into the realm of major emerging technologies and life
science or biomedical developments, as opposed to "classic" healthcare IT, it illustrates the
continuing significance of IT's application to the transformation of medicine. Uses like this are still
far in the future.
HDO CIOs are getting closer both to the core clinical processes of healthcare and to biomedical device and clinical engineering departments. Tracking technology advances such as this one reminds CIOs of the constant potential for dramatic medical innovation, and enabling technologies like 3D bioprinting underscore the weighty changes in the landscape of medical technologies.
In addition, the detailed organ design, bioprinter device used, and organ production and placement
data will no doubt need to be incorporated into the EHR system of the future, and custom organs
would be one more type of computerized order set. This is yet another example of how the volume
and variety of data to incorporate into EHR systems and enterprise data warehouses will continue to
explode in years to come.
Business Impact: 3D bioprinting is one approach to realizing a difficult dream for tissue engineers — to fulfill engineering designs and market demand for tissues, functioning human organs, arteries and
the like. This is one of the most dramatic examples of the potential breakthroughs that the future
fusion of medicine, engineering and IT may hold. The impact of successful commercialization on the
business of healthcare — and on its definition of services offered — will be profound, creating an
unprecedented demand for new, custom production services of replacement organs. It would
change the business fundamentals of currently lucrative transplant centers, and offer an intriguing
service line for medical tourism centers. Moreover, it would create new dilemmas with regard to
The University of Iowa; Wake Forest Institute for Regenerative Medicine
Recommended Reading: "Predicts 2014: 3D Printing at the Inflection Point"
"Technology Overview for Material Extrusion 3D Printing"
"Market Trends: 3D Printing, Worldwide, 2013"
Smart Robots
Analysis By: Kenneth F. Brant
Definition: Smart robots are literally smart machines that have a physical form factor — unlike
virtual personal assistants and smart advisors — and that can work autonomously in the physical
world and learn from their experiences. Smart robots sense conditions in their local environments,
recognize and solve basic problems, and learn how to improve. Some have a functional form, such
as warehouse robots from Amazon's Kiva subsidiary, while others have humanoid appearances,
such as Baxter from Rethink Robotics. They may work alongside humans or replace human labor.
Position and Adoption Speed Justification: While industrial robots have been around for a long
time and are certainly more advanced in their life cycles, the subset of smart robots is much newer
and has had significantly less adoption to date. That is why smart robots are positioned at the
midpoint between the Technology Trigger and the Peak of Inflated Expectations. Hype and
expectations will continue to build around these smart robots over the next few years as a dynamic
set of large and small suppliers develops more solutions across the wide spectrum of generic and
industry-specific use cases. Several recent key events have expedited the adoption speed we now
expect to see in this category: (1) the acquisition of Kiva Systems by Amazon and Amazon's
subsequent plans to deploy 10,000 Kiva robots to fill customer orders by the end of 2014; (2)
Google's acquisition of Boston Dynamics and seven other robotics companies within a six-month
span in the second half of 2013 and its ability to incorporate machine learning in these acquired robot assets; (3) Rethink Robotics' launch of Baxter, which can work alongside human employees
to perform simple assembly line tasks by being shown what to do (rather than requiring
programming), at prices starting around $25,000; and (4) the transfer of military technology to
commercial and consumer robotics from companies like iRobot. These events will create a
competitive race on the supply side of the market to build scale in this category, now that we have
witnessed initial pilots and limited trials on the demand side of the market. Users, too, will race to
find competitive advantage after leaders in their industry segment have begun their journey with
smart robots.
involved. It has to be inexpensively available to students, who use their personal devices, before education institutions can deploy affective computing software. However, products such as
Affectiva's Affdex or ThirdSight's EmoVision are promising because they enable relatively low-cost,
packaged access to affective computing functionality, even if these particular products are geared
toward testing media/advertising impact on consumers. Another industry, the automotive industry,
is more advanced. Here, the technology has not yet found its way into mainstream vehicle
production, but lightweight emotion detection — for example, being tired behind the wheel — is an
option in trucks on the market today. Addressing issues such as driver distraction and driving while
tired creates more awareness for mood sensing in a practical and ubiquitous product — the car.
The leading research lab in this field is MIT's Affective Computing Research Group, which has many
projects and is working on sensors, such as wristband electrodermal activity sensors connected by
Bluetooth to a smartphone, and software, such as the MIT Mood Meter, that assesses the mood on
campus based on frequency of smiles as captured by ordinary webcams. Developments like these
can speed up the application of affective computing in education, but the road ahead still seems
long due to complexity. It is possible that there needs to be a breakthrough in a more consumer-
oriented area such as gaming before affective computing can be applied at a larger scale. One thing
that might jump-start implementation would be if facial recognition services for identification and
proctoring in online learning, from companies such as Smowl and KeyLemon, were implemented
more often and if affective computing were sold as an add-on to that kind of service. An interesting
and more specialized branch of affective computing involves robots such as the emote project. This
"artificial tutor" approach has many interesting possibilities. It uses a robot's movements to
strengthen affective feedback with the student, but it has the drawback of needing a physical robot.
The latter is likely to make this approach more costly for education institutions and delay
implementation.
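The smile-frequency approach behind the MIT Mood Meter reduces, at the aggregation step, to a simple ratio. The sketch below shows only that step, on invented detection counts; detecting smiles in webcam frames is the hard part and is omitted.

```python
# Aggregation step of a Mood Meter-style smile-frequency score.
# The per-frame detection counts are made up for illustration;
# actual smile detection from webcam video is omitted here.

def mood_score(detections):
    """detections: list of (faces_seen, smiles_seen) tuples, one per
    webcam frame. Returns the overall fraction of smiling faces."""
    faces = sum(f for f, _ in detections)
    smiles = sum(s for _, s in detections)
    return smiles / faces if faces else 0.0

frames = [(10, 4), (8, 2), (12, 6)]  # hypothetical per-frame counts
print(f"campus mood: {mood_score(frames):.2f}")  # -> campus mood: 0.40
```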
Successful affective computing will most likely involve a complex architecture in order to combine
sensor input and provide an accurate response in real time. Mobile learning via cloud services and handheld devices, such as smartphones and tablets, is likely to play a key role in the first few
generations, with a larger market penetration due to the relatively controlled ecosystem it provides
(high-capacity computing combined with a discrete device with many sensors). As content (for
example, textbooks) becomes more digitized and is consumed on devices that have several
additional sensors (for example, tablets with cameras and accelerometers), interesting opportunities
will arise to mash up the capabilities of, for example, Knewton's Adaptive Learning Platform and
ThirdSight's EmoVision, making affective computing for untutored learning more accessible. This
could potentially increase the number of data points available for statistically based adaptive
learning.
Altogether, this merits a position that is still in the trigger phase, with at least 10 years until it reaches the Plateau of Productivity.
User Advice: Most institutions should only continue to follow the research and development of
affective computing in education and other industries. However, in order to be prepared for the
strategic tipping point of implementation, institutions should start estimating the potential impact in
terms of possible pedagogical gains and financial impact, such as increased retention for online
learning. Institutions with a large online presence, or that want to exploit the hype for brand
precision in influencing attitudes, actions and behavior. For example, neurobusiness can offer
insight into perception, reasoning, reward responses and people's sense of belonging. Further
insights will be gained from advances in brain-scanning techniques, such as functional magnetic
resonance imaging (fMRI), which detects patterns of neural activity based on blood flow in various
regions of the brain. However, early enthusiasm will inevitably lead to exaggerated claims, and we
expect that at least a decade of research and experimentation will be needed before neurobusiness
achieves its full potential.
User Advice: Larger companies with discretionary R&D budgets and a desire to lead their industries
should start experimenting with neurobusiness techniques. Consumer brands are most likely to be
first in engaging and realizing benefits. The purpose should be to test the saliency of the techniques
and gradually build knowledge and competency in applying them. Ideas for initial projects might
include a neuromarketing project for brand insights, redesigning the customer experience with Web
and call center interactions, management training on decision bias, or gamifying a corporate
innovation program. Business software intended to support any kind of decision could take
advantage by being designed to counterbalance human "irrationality" — for example, in risk
management. However, organizations must be aware of potential backlash from a "creepiness"
factor and concerns over privacy.
Business Impact: Neurobusiness has the potential to deliver a broad impact across industries and
across many areas of organizational activity. Specific areas of focus will be:
■ Marketing. Because marketing is all about engagement and influence, marketing professionals
have long been early adopters of psychology and behavioral science. They have also been the
first to adopt neuroscience lessons in a formal way in neurometric research, which aims to
understand customers' brain responses to marketing stimuli.
■ Customer Experience. Organizations with customer-facing opportunities are continually trying to increase the number of "moments of delight" and the psychological addiction to a product or
service. Engagement techniques such as gamification and emotional design are increasingly
being applied to the customer experience, and the next frontier will be more direct and precise
application of the neuroscience of engagement.
■ Employee Performance and Decision Support. A productive area for neurobusiness is
enhancing employee creativity, productivity and decision making — for example, by addressing
challenges such as unconscious decision biases or adopting practices such as mindfulness
training. The insights can be effectively delivered as training and coaching or embedded into
software and website design. This will drive personal and team performance improvements for top executives and professionals.
■ Human Capital Management. Many human resources professionals in large organizations are
already deeply involved in tracking and applying social, behavioral and organizational
psychology and change management, and they are adding neuroscience to the list of relevant
research disciplines that inform their programs. Targets for behavioral change might include
innovation, creativity, ethical awareness or productivity.
Recommended Reading: "Your Brain at Work: Strategies for Overcoming Distraction, Regaining Focus, and Working Smarter All Day Long," David Rock, HarperCollins, 2009
"Thinking, Fast and Slow," Daniel Kahneman, Farrar, Straus and Giroux, 2012
"Predictably Irrational: The Hidden Forces That Shape Our Decisions," Dan Ariely, HarperCollins,
2010
Prescriptive Analytics
Analysis By: Lisa Kart; Alexander Linden
Definition: The term "prescriptive analytics" describes a set of analytical capabilities that specify a
preferred course of action. The most common examples are optimization methods such as linear
programming, decision analysis methods such as influence diagrams, and predictive analytics
working in combination with rules.
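The optimization methods named in the definition can be made concrete with a toy example: the Python sketch below prescribes the production plan that maximizes profit under two resource constraints. The products, rates and limits are hypothetical, and the brute-force search is a stand-in for a real linear-programming solver.

```python
# Toy prescriptive-analytics sketch: enumerate integer production plans
# and prescribe the one that maximizes profit under two resource
# constraints. All figures are hypothetical; a real system would use a
# linear-programming solver rather than exhaustive search.

def best_production_mix(max_units=100):
    """Return ((units_a, units_b), profit) for the most profitable
    feasible plan."""
    best_plan, best_profit = None, float("-inf")
    for a in range(max_units + 1):
        for b in range(max_units + 1):
            if 2 * a + b > 100:        # machine-hours available
                continue
            if a + 3 * b > 90:         # labor-hours available
                continue
            profit = 30 * a + 20 * b   # profit per unit of A / B
            if profit > best_profit:
                best_plan, best_profit = (a, b), profit
    return best_plan, best_profit

plan, profit = best_production_mix()
print(f"prescribed plan: {plan}, profit: {profit}")  # -> (42, 16), 1580
```

Note that the output is a decision (a recommended plan), not a forecast, which is what distinguishes prescriptive from predictive analytics in the text above.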
Position and Adoption Speed Justification: Although the concepts of optimization and decision
analytics have existed for decades, they are now re-emerging along with greater awareness of
advanced analytics and hype around big data. Decision management is a newer concept,
materializing over the last decade, that recognizes the value of using analytics (such as predictive
models) together with rules to make mainly operational decisions. Prescriptive analytics differs from descriptive, diagnostic, and predictive analytics in that its output is a decision. This recommended
decision can be delivered to a human in a decision support environment, or can be coded into a
system for decision automation.
Some use cases are very mature — such as optimization in supply chain and logistics, cross-
selling, database marketing and churn management — but many new use cases are emerging with
as yet unknown potential. Therefore, it is still early days for broad adoption and awareness.
increasingly, analytics and data science — degrees). The alternative is to work with an experienced
service provider that can help you avoid pitfalls, demonstrate some initial success and learn about
the process. For organizations heavily using predictive analytics, decision management solutions
such as those from FICO, IBM and SAS are an easy way to get started.
Business Impact: Prescriptive analytics has extremely wide applicability in business and society. It
can apply to strategic, tactical and operational decisions to reduce risk, maximize profits, minimize costs, or more efficiently allocate resources. Significant business benefits are common, obtained by
improving the quality of decisions, reducing costs or risks, and increasing efficiency or profits. An
important part of the approach is geared to making trade-offs among multiple objectives and
constraints. A critical success factor is having senior business leaders closely involved such that
decisions about trade-offs are aligned with organizational objectives.
statistics and statistical learning, signal processing and pattern recognition, operations research,
machine learning and decision science.
Position and Adoption Speed Justification: Data science is, to some extent, a replacement term
for data mining, but is also much more: data science is the unification of several quantitative disciplines (statistics, machine learning, operations research, computational linguistics, and others). For the first time, computer scientists, operations researchers, statisticians and others are all willing to unite behind the banner of "data science" — which is a very profound development.
During the past year, this notion of data science has become much better understood as the
quantitative set of skills and methodologies in the analytics range of capabilities (see "Extend Your
Portfolio of Analytics Capabilities"). Just the fact that many highly acknowledged academic
institutions now offer data science courses, and often even degrees, means that this term has been
generally accepted. In addition, organizations hiring data scientists and building data science teams
are on the rise. Gartner expects that, within a few years, the term "data science" will gain
widespread recognition as an umbrella term for many forms of sophisticated analytics.
User Advice: Organizations that want to increase the maturity of their analytics and extend their
portfolio of analytics capabilities need to develop data science skills to leverage new big data
sources and demonstrate business value using predictive and prescriptive (and often diagnostic)
capabilities. However, organizations must recognize that data scientists are in very short supply —
recruiting them internally may be difficult, but not impossible.
Business Impact: Data science drives a vast array of use cases across all industries: customer
relationship management, optimization and automation of diverse production processes, drug
research, quality and risk management, smart cities, smart systems and many more.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Recommended Reading: "An Eight-Question Decision Framework for Buying, Building and
Outsourcing Data Science Solutions"
"Extend Your Portfolio of Analytics Capabilities"
"Who's Who in Advanced Analytics"
"Magic Quadrant for Advanced Analytics Platforms"
"Organizational Principles for Placing Advanced Analytics and Data Science Teams"
Smart Advisors
Analysis By: Kenneth F. Brant
Definition: Smart advisors are a class of smart machines that deliver the best answers to users'
questions based on their analysis of large bodies of ingested content and knowledge of the users'
needs. Therefore, natural-language processing is essential for content and context matching.
Curating the right bodies of content (including real-time accessions), along with training and testing,
is necessary for smart advisors to excel at making probabilistic determinations. Smart advisors get
better at performing their role as they work with their users.
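To make the content-and-context matching idea concrete, the toy sketch below scores each document in a small, invented corpus by word overlap with the user's question and returns the best match. Production smart advisors use natural-language processing and probabilistic models, not bag-of-words overlap; everything here, including the corpus, is illustrative.

```python
# Toy sketch of a smart advisor's content matching: score each document
# in a tiny invented corpus by word overlap with the question and return
# the best-matching topic. Real systems use NLP and probabilistic models.

CORPUS = {
    "flu": "fever cough fatigue treat with rest and fluids",
    "cold": "runny nose sneezing mild cough treat with rest",
    "allergy": "sneezing itchy eyes treat with antihistamines",
}

def best_answer(question, corpus=CORPUS):
    """Return (topic, overlap_score) for the highest-overlap document."""
    q_words = set(question.lower().split())
    scores = {topic: len(q_words & set(text.split()))
              for topic, text in corpus.items()}
    top = max(scores, key=scores.get)
    return top, scores[top]

topic, score = best_answer("I have a fever and a cough")
print(topic, score)  # -> flu 3
```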
Position and Adoption Speed Justification: Although many current offerings are being beta-
tested while other offerings are still in early trials, smart advisors like IBM's Deep Blue and Watson
have had much well-publicized success in making the right determinations about chess moves and
quiz show answers. Therefore, this technology is positioned nearer the Peak of Inflated
Expectations than we would expect based on its limited adoption to date. Tests and trials will
determine how quickly the next generation of technology will evolve to provide a wider swath of
user acceptance. Smart advisors are being developed both by megavendors like IBM and by
startups like Engage3 and Lumiata, so there is a small but interesting mix of players on the supply
side. A sign that this market is accelerating will be the appearance of several new entrants in 2014
and 2015 to create more competition and accelerate innovation and best practices in marketing and
deployment. Programs announced by IBM Watson Group to expand the ecosystem and licensing of
smart advisor technology to industry and role-based experts have the potential to expedite
adoption speed, but much remains to be seen with regard to how effective these programs will be.
Industry leaders in healthcare (Memorial Sloan Kettering Cancer Center and MD Anderson),
insurance (ANZ and USAA), and media (Nielsen) have already deployed smart advisors; Infosys'
recent partnership with IPsoft is a sign that the value proposition of smart advisors is gaining
traction in the IT services segment, as well.
User Advice: Enterprises in healthcare, retail, financial services and any other service industry with
relatively high labor costs, big, dynamic and largely unstructured datasets, and needs for highly
individualized consumer advice should explore smart advisors within the next 12 months and assess their readiness for pilot projects. Pilots should not only establish the capability of the
technology to reduce costs and improve service levels but also explore their ability to monetize
previously unexplored commercial avenues in big data. Therefore, design and review of feasibility
tests should include not only general managers of current business units (to vet possible cost and
productivity improvements in current operations) but also the chief data officers, the chief digital
officers and the visionaries of big data monetization and digital business architecture at your firm.
Business Impact: Smart advisors will impact the industries where the presence of big, dynamic
and largely unstructured data is compounded by the need for highly individualized
recommendations, like medical science findings at the intersection of personal medical records or
promotional offers at particular retail stores at the intersection of shoppers' needs. Theoretically, the smart advisor can advise across a spectrum of use cases, including enterprise users (such as
healthcare service providers), ancillary service providers (such as healthcare payers) or the end
consumers of products and services (such as patients). However, the economics of developing the
smart advisor plus channels and workable business models put the price and complexity of highly
personalized smart advisors out of the reach of the consumer end user today. This may change, but
first we expect enterprises will purchase smart advisors for their own use and for their clients' use.
For the enterprise deploying smart advisors, the business impacts can be lower costs and greater
reliability (substituting for human labor), differentiated service and brand enhancement (often in
tandem with human labor) or a combination of both. By deploying smart machines, healthcare
providers and payers can potentially improve service outcomes and reduce waste in the healthcare
system by improving medical diagnoses and treatments; retailers and consumer goods
manufacturers can improve customer satisfaction and competitive positioning while optimizing
trade promotion expenses. IT services providers can potentially improve reliability of service, reduce
employee recruitment and training costs, and improve service outcomes.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: AlchemyAPI; Digital Reasoning; Engage3; Fluid; IBM Watson Group; IPsoft;
Lumiata
Autonomous Vehicles
Analysis By: Thilo Koslowski
Definition: An autonomous vehicle is one that can drive itself from a starting point to a
predetermined destination in "autopilot" mode using various in-vehicle technologies and sensors,
such as lasers, radars and cameras, as well as advanced driver assistance systems, software, map
data, GPS and wireless data communication.
Position and Adoption Speed Justification: Advancements in sensor, positioning, imaging,
guidance, artificial intelligence (AI), mapping and communications technologies, combined with
advanced software and cloud computing, are gaining in precision to bring the autonomous vehicle closer to reality. However, complexity challenges remain before autonomous vehicles can achieve
the reliability levels needed for actual consumer use cases. The development of autonomous
vehicles largely depends on sensor and map data technologies. Sensor data needs high-speed data
buses and very high-performance computing processors to provide real-time route guidance,
navigation and obstacle detection and analysis. The introduction of self-driving vehicles will occur in
three major phases: from automated, to autonomous, to driverless vehicles. Each phase will require
more-sophisticated and reliable capabilities that rely less on human driving intervention.
First applications of autonomous vehicles will occur during this decade, and early examples might
be limited to specific road and driving scenarios (for example, only on highways and not in snow
conditions). During 2013, several automakers, including Nissan and Daimler, announced plans to launch self-driving vehicle offerings by 2020. Alongside these automotive efforts, continued efforts by technology companies (such as Google, Here [Nokia], QNX, Intel
and Nvidia) are helping to achieve critical advances in autonomous driving, and to educate
consumers on the benefits and maturity of the technology. Further, autonomous machine efforts in
other industries (including the defense and transportation sector, as well as law enforcement and
entertainment) are also accelerating progress in key technologies needed for self-driving vehicles,
such as AI.
During 2013, autonomous vehicle efforts were prominently featured by mainstream media, which is leading to unrealistic and inflated expectations. Key challenges for the realization of
autonomous vehicles continue to be centered on cost reductions for the technology, but they
increasingly include legal and ethical considerations, such as liability and driver-related aspects. For
example, can an intoxicated driver use an autonomous vehicle? Can children use a self-driving
vehicle? How should a self-driving vehicle behave when it has to decide between running over a pet
versus causing property damage? While legal requirements are beginning to be addressed on an
international level (for example, changes to the Vienna Convention on Road Traffic), the pace of
technology innovations and individual country and state legislation will likely initially result in
specific, limited-use cases for self-driving vehicles.
User Advice: Automotive companies, service providers, governments and technology vendors (for
example, software, hardware, sensor, map data and network providers) should collaborate to share
the cost and complexity of experimentation with the required technologies, carefully balancing
accuracy objectives with user benefits.
Consumer education is critical to ensure that demand meets expectations once autonomous vehicle technology is ready for broad deployment. For example, drivers will need to be educated on
how to take over manually in case an autonomous vehicle disengages due to technical error or to
changing environmental conditions. Specific focus needs to be applied to the transitional phase of
implementing autonomous or partial-autonomous vehicles with an existing older fleet of nonenabled
vehicles. This will have implications for driver training, licensing and liability (as in, insurance).
Business Impact: Automotive and technology companies will be able to market autonomous
vehicles as having innovative driver assistance, safety and convenience features, as well as an
option to reduce vehicle fuel consumption and to improve traffic management. The interest of
nonautomotive companies highlights the opportunity to turn self-driving cars into mobile-computing platforms, ideal for the consumption and creation of digital content, including location-based services and vehicle-centric information and communication technologies.
Autonomous vehicles are also a part of mobility innovations and new transportation services that
have the potential to disrupt established business models. For example, autonomous vehicles will
eventually lead to new offerings that highlight mobility-on-demand access over vehicle ownership,
by having driverless vehicles pick up occupants when needed. Societal benefits from reduced
accidents, injuries and fatalities and improved traffic management can be significant, and could
even slow down or potentially reverse other macroeconomic trends. For example, if people can be
productive while being driven in an autonomous vehicle, living near a city center to be close to work
won't be as critical, which could slow down the process of urbanization.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Recommended Reading: "Industry Convergence — The Digital Industrial Revolution"
"U.S. Government Must Clarify Its Terms to Boost V2V Technology Adoption"
"Predicts 2014: Automotive Companies' Technology Leadership Will Determine the Industry's
Future"
"German Consumer Vehicle ICT Study: Demand for In-Vehicle Technologies Continues to Evolve"
"Google Moves Autonomous Cars Closer to Reality"
Speech-to-Speech Translation
Analysis By: Adib Carl Ghubril
Definition: Speech-to-speech translation involves translating one spoken language into another. It
combines speech recognition, machine translation and text-to-speech technology.
Position and Adoption Speed Justification: Speech-to-speech translation entails three steps:
converting speech into text; translating text; and finally, converting text to speech. In effect,
anything that may be converted to text may be translated. Microsoft's Bing platform, running on
Windows 8.1, and Google Translate offer speech translation; furthermore, their optical character
recognition middleware allows the user to touch or select an on-screen character, from a graphic or
photo, and listen to the description or translation of that resulting text in any of the supported
languages.
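The three-step pipeline can be sketched end to end as follows. Every stage below is a stand-in (the recognizer is handed text in place of audio, and the translator uses an invented three-word English-to-Spanish lexicon), since a real system would slot in speech recognition, machine translation and text-to-speech engines at each step.

```python
# Sketch of the three-step speech-to-speech pipeline: speech recognition,
# then machine translation, then text-to-speech. All three stages are toy
# stand-ins; real ASR, MT and TTS engines would replace them.

TOY_LEXICON = {"good": "buenos", "morning": "dias", "hello": "hola"}

def recognize(audio):
    """Stand-in for speech recognition: treat the payload as a transcript."""
    return audio.lower().strip()

def translate(text, lexicon=TOY_LEXICON):
    """Word-by-word lookup; unknown words pass through unchanged, which
    mirrors the out-of-vocabulary problem noted in the text."""
    return " ".join(lexicon.get(word, word) for word in text.split())

def synthesize(text):
    """Stand-in for text-to-speech: return a tagged payload, not audio."""
    return f"<audio:{text}>"

def speech_to_speech(audio):
    return synthesize(translate(recognize(audio)))

print(speech_to_speech("Good morning"))  # -> <audio:buenos dias>
```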
While there has been little adoption of the technology by enterprises to date, due to accuracy
limitations and response times, the availability of low-cost mobile consumer products may drive
interest and progress for higher-end applications. We continue to anticipate rising hype and capabilities during the next two years, and a growing breadth of applicability during the next five
years. In August 2013, Facebook purchased Mobile Technologies, makers of the speech-to-speech
application "Jibbigo," and this acquisition is a reflection of Facebook's ambition to enable online
interaction.
Vendors can build on their speech recognition know-how (such as what Apple has done with Siri),
to create a translation system that can be used to support dialogue. Meanwhile, platform-specific
applications from independent developers — like SayHi Translate — continue to expand the
breadth of users' options. Also, experiments are being conducted with a multimodal approach, in
which information from gestures and facial expressions is used to execute translation in context with dialogue. IBM is tackling out-of-vocabulary words by devising a machine that interacts with the user to ascertain where the linguistic mistake was made.
User Advice: Do not view automated translation as a replacement for human translation but, rather,
see it as a way to deliver approximate translations for limited dialogues in which no human
translation capability is available. Evaluate whether low-cost consumer products can help during
business travel or first-responder situations. Leading-edge organizations can work with vendors and
labs to develop custom systems for constrained tasks.
Internet of Things
Definition: The Internet of Things (IoT) is the network of physical objects that contain embedded
technology to communicate and sense or interact with their internal states or the external
environment.
Position and Adoption Speed Justification: Enterprises vary widely in their progress with the IoT.
At a simple level, adoption can be classified into three categories. But even within an enterprise,
there can be groups that have different levels of progress with the IoT — therefore, the enterprise
would exhibit a combination of these categories:
■ Enterprises that already have connected things but want to explore moving to an IoT —
These enterprises are no strangers to the benefits and management of connected things/
assets. They are experienced in operational technology, which is an industrial/business internal
form of digital modernization. However, they are unfamiliar with the new Internet-based, big-
data-based, mobile-app-based world. They can be equally optimistic and hesitant to move their assets (and to add new connected assets) to this unfamiliar Internet world.
■ Enterprises that are unfamiliar with the IoT, but are exploring and piloting use cases —
Most of these enterprises are focused on finding the best areas to implement the IoT while
trying to understand the technology.
■ Product manufacturers that are exploring connecting their products to provide new value
and functionality to their customers — It seems that every week there is a new story about a
consumer or industrial product that is now connected. However, the large enterprises often wait
and see how the startups are doing before moving forward.
Standardization (data standards, wireless protocols, technologies) is still a challenge to more-rapid
adoption of the IoT. A wide number of consortiums, standards bodies, associations and
government/region policies around the globe are tackling the standards issues. Ironically, with so many entities each working on their own interests, we expect the lack of standards to remain a
problem over the next three to five years.
In contrast, dropping costs of technology, a larger selection of IoT-capable technology vendors and
the ease of experimenting continue to push trials, business cases and implementations forward.
Technology architecture for the IoT is evolving from one where the thing/asset contains most of the
computing resources and data storage to an architecture in which the thing/asset relies on the
cloud, smartphone or even the gateway for computing and connectivity capabilities. As the IoT
matures, we expect to see enterprises employ a variety of architectures to meet their needs.
User Advice: Enterprises should pursue these activities to increase their capabilities with the IoT:
■ CIOs and enterprise architects:
■ Work on aligning IT with OT resources, processes and people. Success in enterprise IoT is
founded in having these two areas work collaboratively.
■ Ensure that EA teams are ready to incorporate IoT opportunities and entities at all levels.
■ Look for standards in areas such as wireless protocols and data integration to make better
investments in hardware, software and middleware for the IoT.
■ Product managers:
■ Consider having your major products Internet-enabled. Experiment and work out the
benefits to you and customers in having your products connected.
■ Start talking with your partners and seek out new partners to help your enterprise pursue
IoT opportunities.
■ Strategic planners and innovation leads for enterprises with innovation programs:
■ Experiment and look to other industries as sources for innovative uses of the IoT.
■ Information management:
■ Increase your knowledge and capabilities with big data. The IoT will produce two
challenges with information: volume and velocity. Knowing how to handle large volumes
and/or real-time data cost-effectively is a requirement for the IoT.
■ Information security managers:
■ Assign one or more individuals on your security team to fully understand the magnitude of
how the IoT will need to be managed and controlled. Have them work with their OT
counterparts on security.
Business Impact: The IoT has very broad applications. However, most applications are rooted in
four usage scenarios. The IoT will improve enterprise processes, asset utilization, and products and services in one or a combination of the following ways:
■ Manage — Connected things can be monitored and optimized. For example, sensor data from an
asset can be used to optimize it for maximum performance or for increased yield and uptime.
■ Charge — Connected things can be monetized on a pay-per-use basis. For example,
automobiles can be charged for insurance based on mileage.
■ Operate — Connected things can be remotely operated, avoiding the need to go on-site. For
example, field assets such as valves and actuators can be controlled remotely.
■ Extend — Connected things can be extended with digital services such as content, upgrades
and new functionality. For example, connected healthcare equipment can receive software
upgrades that improve functionality.
These four usage models will provide benefits in the enterprise and consumer markets.
Position and Adoption Speed Justification: The challenges in effective interpretation of idiomatic
interrogative speech, matching it to knowledge bases of potentially infinite scope, and the selection
of a limited number of answers (even just one) remain profoundly difficult. Simple answers such as
the one answer available for a trivia question are far easier than the multivariate, nuanced answers
inherent in real human communication (for example, "Cold or flu? Why not cold AND flu!").
IBM captured the attention of the world in February 2011 when Watson (a Smart Advisor) won the
quiz show Jeopardy, and now the technology is maturing into a variety of sophisticated products. It joins a
long line of immediately fascinating and broadly constrained custom-made knowledge-calculation
devices. This has been followed by the mainstream introduction of simple conversational assistants
from Apple, Google and, most recently, Microsoft. Apple's Siri launched in 2011 as a new way for
users to interact with informational systems. It incorporates speech-to-text technology with natural-
language processing query analysis to wow users (at least some of the time). In 2013, Google
featured such technologies in the keynote for its closely and avidly watched Google I/O. Such
attention defines a peak of hype. Facebook's Graph Search project also allows for semantically rich
queries, albeit with less ambiguity. In April 2014, Microsoft introduced Cortana: a virtual personal
assistant for the Windows Phone operating system. As precursor products, these conversational
assistants should evolve into virtual personal assistants between 2015 and 2017.
Solutions ultimately must discover means of communication with humans that are intuitive,
effective, swift and dialogic. They benefit significantly from context, either detected (as in
geographical location) or explicit (as with products that have a specific goal, such as health
diagnostics). The ability to conduct even a brief conversation, with context, antecedent
development and retention, and relevancy to individual users is in its infancy. However,
nonconversational, information-centered answers are indeed already possible with the right
combination of hardware and software. Also, as in all technology categories, the availability of such
resources can only become cheaper and easier. More than five years will pass before such
capabilities are commonplace in industry, government or any other organizational environment — but they will ultimately be available to leaders in such categories.
User Advice: The computing power required to accomplish a genuinely effective trivia competitor is
expensive, but will become less so with time. Any projects founded on such facility must be
experimental, but in the foreseeable future will include diagnostic applications of many kinds as well
as commercial advice and merchandising, and strategic or tactical decision support.
"Augmentation" of human activity and decision making is the key thought. No decision support
application comes, fully formed, from nothing — it will be expert humans who build it, design the
parameters and develop the interface. Humans will, similarly, evaluate its advice and decide how to
proceed. A good idea is to begin with experimental technologies, such as chatbots, and to work
toward more sophisticated technologies as they become commercially accessible.
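As a concrete starting point of the kind suggested above, here is a minimal rule-based chatbot sketch. It is purely illustrative (the `MiniChatbot` class and its keyword rules are invented for this note), but it shows the two ingredients this section emphasizes: detected context (a location) and antecedent retention (remembering the last topic so a follow-up pronoun like "there" can be resolved).

```python
class MiniChatbot:
    """Toy rule-based assistant that retains the last topic as conversational context."""
    def __init__(self, location: str):
        self.location = location  # "detected" context, e.g. from GPS
        self.last_topic = None    # antecedent retention across turns

    def reply(self, utterance: str) -> str:
        text = utterance.lower()
        if "weather" in text:
            self.last_topic = "weather"
            return f"Checking the weather near {self.location}."
        if "there" in text and self.last_topic == "weather":
            # Resolve the pronoun using the retained topic and location context.
            return f"Still on weather: forecast for {self.location} unchanged."
        self.last_topic = None
        return "Sorry, I can only discuss the weather for now."

bot = MiniChatbot(location="Boston")
r1 = bot.reply("What's the weather like?")
r2 = bot.reply("And tomorrow there?")
print(r1)
print(r2)
```

Real virtual personal assistants replace the keyword rules with speech-to-text and natural-language query analysis, but the state they must carry across turns is exactly what `last_topic` caricatures here.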
NLQA is positioned to be a strong enabler for other technologies such as virtual personal assistants,
cognitive computing and speech recognition. These technologies can serve as two-way
steppingstones toward building an effective NLQA system.
Business Impact: Ultimately, the ability for line workers or unschooled consumers to achieve
effective responses from machines without using expertise in framing queries will generate new
Recommended Reading: "The Nexus of Forces Is Driving the Adoption of Semantic Technologies,
but What Does That Mean?"
"Siri and Watson Will Drive Desire for Deeper and Smarter Search"
"CIO Advisory: Why CIOs Should Be Concerned About Siri and Other Voice-Controlled Assistants"
"Sherpa: The End of Search as You Know It for CRM"
Wearable User Interfaces
Analysis By: Angela McIntyre
Definition: Wearable user interfaces describe the interaction between humans and computing through electronics designed to be worn on the body. They may sense the human body or the
environment around the wearer and transmit the information to a smartphone or to the cloud.
Ideally, wearable user interfaces are unobtrusive, always on and wirelessly connected, and provide
timely information in context. Examples of wearable electronics are smart watches, smart glasses,
smart clothing, fitness monitor wristbands, sensors on the skin and audio headsets.
Position and Adoption Speed Justification: This past year saw the hype on wearable user
interfaces reach the Peak of Inflated Expectations and become tempered with realism about the
value consumers perceive in new wearable devices. Consumers do not view most devices as stylish,
and the data collected from wearable sensors yields insights that are only marginally useful to
wearers. Apps and algorithms that can interpret noisy data from wearable sensors are needed to
increase the usefulness of recommendations. Use cases in which wearable user interfaces are more
convenient than smartphones are limited. Smartphones are gaining sensors and apps that give
them health and fitness tracking capabilities similar to wearables. Nonetheless, apps on
smartphones are enabling new capabilities and insights from wearables.
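The algorithmic gap noted above — turning noisy sensor data into useful recommendations — typically starts with simple smoothing. Here is a sketch using an exponential moving average over hypothetical step-cadence values; the numbers are invented for illustration, not taken from any real device.

```python
def ema(samples, alpha=0.3):
    """Exponential moving average: a cheap first pass at de-noising wearable sensor data."""
    smoothed, prev = [], None
    for s in samples:
        prev = s if prev is None else alpha * s + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

# Spiky, accelerometer-derived steps-per-minute readings (hypothetical).
raw_steps_per_min = [0, 110, 8, 102, 95, 4, 98]
smoothed = ema(raw_steps_per_min)
print([round(v, 1) for v in smoothed])
```

A production fitness app would layer activity classification and personalization on top, but even this one-liner filter suppresses the sample-to-sample spikes that make raw wearable data hard to interpret.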
Yet interest in wearable interfaces remains high, and Google is fostering an ecosystem that is
expected to gain traction. Android Wear will enable a consistent user interface across different
types of wearables. For example, Android-based wearables will have a similar user experience in
the layout of glanceable displays, gestures for navigating among apps, and voice commands
for control and information access. Google and affiliated service providers have potential access
to personal information gathered by other wearable devices using services on an Android Wear
platform.
Data from wearable devices can be combined with data from other devices in the Internet of Things and from other sources on the Internet, adding to big data. Apps, services and virtual personal
assistants (VPAs) will provide increasingly useful insights to wearers as part of cognizant computing
by using personal data collected from wearable electronics. The consumer trends for adoption are
being driven by quantified self, convenience and the desire for immediate alerts, especially
regarding social networks. Tracking data for medical reasons will be a longer-term driver for
wearables.
Over the next 10 years, wearable user interfaces will enable services to become more personalized
to the preferences and needs of the user through contextual information and bio-data gathered
through wearable electronics. Similarly, wearable devices will serve as controllers for other devices
in the Internet of Things. For example, consumers with Nest thermostats can control them remotely through Google Glass. Similarly, the Pebble smartwatch can take a photo with the GoPro camera
and start a car engine remotely. These are early examples of how wearable user interfaces will
become increasingly integrated into daily life.
User Advice: Invest now in deployments or pilots for wearable user interfaces in the enterprise.
Start with wearables for mobile workers who cannot conveniently or safely put aside what they have
in their hands to use a phone or tablet, such as employees using tools or equipment, or who need
to keep their heads up or to hold on for safety.
Engage with software developers now on augmented reality use cases specific to your business
needs. Augmented reality solutions are in development for head-mounted displays (HMDs). However, robust software solutions using augmented reality beyond checklists will take an
additional two to five years of development. The battery life of present HMDs lasts only a couple of
hours for uses such as streaming video. Until at least full-day battery life is available, workers will
find wearables inconvenient or impractical to use.
Where time-motion efficiency is essential to productivity, such as in call centers and logistics
organizations, employers are investigating wearables, such as gaze tracking through audio
headsets and location tracking through badges. Explore solutions that lead to recommendations to
increase worker productivity or to monitor employees in physically demanding work environments.
Encourage the workforce to be healthier by implementing wellness programs that include wearable
fitness trackers and also work with providers on advances in algorithms for fitness trackers. Fitness
trackers in wristbands or other forms are motivating to people who want to be less sedentary. The
general health of the consumer or employee can be measured with wearables, including body
policies around personal privacy and the restrictions around taking pictures in the workplace. Data
security risk will also increase with the rise in content sharing among devices that are interacting
across personal networks.
Business Impact: Early industries to adopt wearable electronics are aerospace and police,
followed by sports, field service, manufacturing, logistics, transportation, oil and gas, retail, and healthcare. The healthcare market stands to benefit from wearable user interfaces that enable
mobile health monitoring, especially for heart conditions. Wearable cameras are ready for
deployment now for use cases such as police/security and inspections. Field service and
manufacturing are using streaming video to an expert who sees what the wearer sees, which is
useful for training or expert assistance. Sports teams are using wearables on players for an "in the
game" perspective and to track athlete performance. Augmented reality solutions on HMDs
promise to increase productivity by providing part labels, checklists, maps and instructions
research or providing OEM capabilities to third parties. Indeed, Xerox has been an original
equipment supplier for 17 years but still does not put its name on a 3D printer.
Gartner has predicted that by 2015, seven of the 50 largest multinational retailers will sell 3D
printers through their physical and online stores. Most of these devices will be in the "consumer"
market although many will no doubt be purchased by enterprises.
For many consumers, however, do-it-yourself kits to build a 3D printer costing a few hundred
dollars are too much trouble, while assembled 3D printers costing up to $2,500 are too expensive,
although that may not be the case for many "makers," a term popularized by Wired magazine's
former editor-in-chief, Chris Anderson. More than hobbyists or early adopters, makers are
enthusiasts and entrepreneurs who create and collaborate on screen, hone their ideas online and in
real-world communities, and employ cutting-edge technologies to produce unexpected results.
What sets makers apart from other consumers is their inquisitive, collaborative approach to problem
solving, coupled with access to hardware and software tools unavailable to earlier generations of
tinkerers and inventors.
However, for the general consumer population, which inevitably compares the cost of a 3D printer
to the $100 to $250 they may spend on a 2D printer, the price is too rich. A wide range of
rudimentary 3D printers that extrude plastic material is on the market, some with price points in the
$500 range. Even these may be too expensive for general consumer use, especially when the cost
to license or purchase 3D creation software tools is factored into the purchase. As a result, the
dominant near-term consumer use of 3D printing will be the purchase of printed items, whether made by an
artist, sold by a consumer goods company or available through an online service bureau.
User Advice: While continuing to keep abreast of 3D print technology developments, physical and
online retailers must explore the use of this technology by experimenting with low-volume
manufacturing of high-margin, custom-designed pieces — for example, fashion jewelry and eyeglass frames — sold through in-store kiosks and Web portals. Market studies must determine
the materials that consumers prefer and the price points they are willing to meet. Retailers also
must test selling 3D printers in their physical and online stores. Successful consumer use of 3D
printers requires an ecosystem — software, materials and printer — that is more complex than that
associated with 2D printing on paper. Retailers need to think carefully about how they will support
this technology with the customer service basics — essential things that a customer expects when
shopping with the retailer, such as stock availability, warranties and postsale service and support.
Business Impact: Consumer 3D printing is a classic example of how the use of an established
technology, in this case, additive manufacturing, transitions over time from one that is prohibitively
expensive for all but manufacturing organizations, to one that has pricing within the grasp of consumers. The hype in the general press has heightened consumer awareness of the technology.
The news about manufacturing guns and other weapons with 3D printers complicates the
opportunity. And, yes, 3D printer costs continue to come down as the printers become more readily
available, enabling consumers to manufacture their own custom-designed items, ranging from
jewelry to weaponry. Retailers selling 3D printers or items produced with 3D printers must
investigate the legal implications of customers using devices sold by them to manufacture
potentially lethal weapons, just as they must take steps to ensure that 3D-printed items made per
their customers' orders comply with local copyright and related laws.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: 3D Systems; Afinia; Beijing TierTime; MagicFirm; Stratasys; XYZprinting
Recommended Reading: "Cool Vendors in 3D Printing, 2013"
"How 3D Printing Disrupts Business and Creates New Opportunities"
"Use the Gartner Business Model Framework to Determine the Impact of 3D Printing"
"3D Photo Booth Will Help Drive Awareness and Momentum for 3D Printing"
Cryptocurrencies
Analysis By: David Furlonger
Definition: Cryptocurrencies are virtual money, created by private entities without the backing of
governments, transacted using digital mediums. Unlike virtual money in the form of coupons or
tokens, they are generated via computer-originated cryptographic mechanisms, often in the form of
puzzle solving, that securely remit digital information through the use of private and public keys.
Mathematical regimes usually limit and control currency production or issuance.
Position and Adoption Speed Justification: Cryptocurrencies have risen to notoriety through the
publicity surrounding Bitcoin. Bitcoins are created via a process of mining — the solving of a
computation puzzle. Coins are awarded for each puzzle solution, and prior Bitcoin transactions are
simultaneously verified. The somewhat opaque nature of its original founding and organization,
together with huge volatility in its value, has prompted significant research into its use as an
an alternative currency. However, it is not the only such currency to be created. Others include
Litecoin, Peercoin (also referred to as PPCoin), Freicoin, Terracoin, Devcoin and Namecoin,
although none have as much circulation. (A recent list can be found at https://en.bitcoin.it/wiki/List_of_alternative_cryptocurrencies.)
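The mining process described above — awarding coins for solving a computational puzzle — can be sketched at toy scale. This is not Bitcoin's actual block format or difficulty-adjustment scheme; it only demonstrates the hash-puzzle idea: search for a nonce whose SHA-256 digest meets a target.

```python
import hashlib

def mine(block_data: str, difficulty: int = 2) -> int:
    """Find a nonce so that sha256(data + nonce) starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# A hypothetical transaction record; real systems hash a structured block header.
nonce = mine("alice->bob:1.5")
digest = hashlib.sha256(f"alice->bob:1.5{nonce}".encode()).hexdigest()
print(nonce, digest[:8])
```

The asymmetry shown here is what makes such currencies "potentially very secure": finding the nonce takes many hash attempts, while anyone can verify the solution with a single hash.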
Although cryptocurrencies are potentially very secure (transactions can be readily verified but very
difficult to defraud), they also carry significant issues that undermine the principles of money as a
store of value, unit of account and medium of exchange:
■ Questionable governance and transparency surrounding the original author — There is a public
log of transactions, but the computer addresses don't identify individuals, only the processor of
■ Discuss with regulators their supervision and monitoring of cryptocurrencies as part of the
global financial marketplace and the impact on business operations.
■ Assess employee use of these currencies to protect against operational risk in the event of
unintended compliance problems.
■ Plan (technology and business road maps) for the potential integration of cryptocurrencies with
mainstream mediums of exchange — for example, review changes to data systems, execution
and risk systems (see "Bitcoin Now on Bloomberg").
Business Impact: It is easy to miss the most important point with all the "noise" surrounding
individual currencies. The most critical issue is that these mediums of exchange hold the potential
to enable a pure person-to-person or entity-to-entity medium to transfer any kind of digital value,
less expensively and faster than traditional mechanisms. Moreover, these currencies are not subject
to the control of a single country or jurisdiction. And they afford users a degree of anonymity and
security, including nonrepudiation of transactions.
Therefore, the currency itself is not as relevant as the mechanisms on which these currencies are based — and that has fundamental ramifications for governments, financial institutions and the
world as a whole.
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Recommended Reading: "Future of Money: Financial Inclusion Requires More Than Traditional
Money"
"Future of Money: Using Cloud Capacity as a Currency"
"The Nexus of Forces Is Reshaping the Future of Money"
"Future of Money: Virtual Money Drives Rapid Growth in Online Gaming Industry"
"Future of Money: Virtual Currency and Gamification Drive Financial Literacy and Revenue
Opportunities"
Complex-Event Processing
Analysis By: W. Roy Schulte; Nick Heudecker; Zarko Sumic
Definition: Complex-event processing (CEP), sometimes called event stream processing, is a
computing technique in which incoming data about what is happening (event data) is processed as
it arrives to generate higher-level, more-useful, summary information (complex events). Complex
events represent patterns in the data, and may signify threats or opportunities that require a
response from the business. One complex event may be the result of calculations performed on a
few or on millions of base events (input) from one or more event sources.
Fraud detection in banking and credit card processing depends on correlating events across
channels and accounts, and this must be carried out in real time to prevent losses before they
occur. CEP is also essential to future Internet of Things applications where streams of sensor data
must be processed in real time.
Conventional architectures are not fast or efficient enough for some applications because they use a "save-and-process" paradigm in which incoming data is stored in databases in memory or on
disk, and then queries are applied. When fast responses are critical, or the volume of incoming
information is very high, application architects instead use a "process-first" CEP paradigm, in which
logic is applied continuously and immediately to the "data in motion" as it arrives. CEP is more
efficient because it computes incrementally, in contrast to conventional architectures that reprocess
large datasets, often repeating the same retrievals and calculations as each new query is submitted.
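The incremental, process-first paradigm can be sketched as follows. The `SlidingWindowCEP` class below is hypothetical, not any vendor's API; it shows how each arriving base event updates an aggregate in constant time, with a "complex event" (an alert) emitted when a pattern — here, a spending burst across the last few events — is detected, and no stored data is ever re-queried.

```python
from collections import deque

class SlidingWindowCEP:
    """Process-first stream logic: each arriving event updates incremental state."""
    def __init__(self, window: int, threshold: float):
        self.events = deque()
        self.window = window
        self.threshold = threshold
        self.total = 0.0
        self.alerts = []

    def on_event(self, amount: float) -> None:
        self.events.append(amount)
        self.total += amount                      # incremental aggregate, O(1)
        if len(self.events) > self.window:
            self.total -= self.events.popleft()   # evict the oldest event, still O(1)
        if self.total > self.threshold:
            # A "complex event": a pattern over many base events, not any single one.
            self.alerts.append(f"burst: {self.total:.2f} over last {len(self.events)} events")

# Hypothetical card transactions arriving in real time.
cep = SlidingWindowCEP(window=3, threshold=250.0)
for amount in (100, 90, 20, 150, 120):
    cep.on_event(amount)
print(cep.alerts)
```

Contrast this with the save-and-process approach, which would re-run a windowed SQL query over stored transactions on every check, repeating the same retrievals and sums each time.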
Two forms of stream processing software have emerged in the past 15 years. The first were CEP
platforms that have built-in analytic functions such as filtering, storing windows of event data,
computing aggregates and detecting patterns. Modern commercial CEP platform products include
adapters to integrate with event sources, development and testing tools, dashboard and alerting tools, and administration tools. More recently, the second form — distributed stream computing
platforms (DSCPs) such as Amazon Web Services Kinesis and open-source offerings including
Apache Samza, Spark and Storm — was developed. DSCPs are general-purpose platforms without
full native CEP analytic functions and associated accessories, but they are highly scalable and
extensible so developers can add the logic to address many kinds of stream processing
applications, including some CEP solutions.
User Advice:
■ Companies should use CEP to enhance their situation awareness and to build "sense-and-respond" behavior into their systems. Situation awareness means understanding what is going on so that you can decide what to do.
■ CEP should be used in operational activities that run continuously and need ongoing
monitoring. This can apply to fraud detection, real-time precision marketing (cross-sell and
upsell), factory floor systems, website monitoring, customer contact center management,
trading systems for capital markets, transportation operation management (for airlines, trains,
shipping and trucking) and other applications. In a utility context, CEP can be used to process a
combination of supervisory control and data acquisition (SCADA) events and "last gasp"
notifications from smart meters to determine the location and severity of a network fault, and
then to trigger appropriate remedial actions.
■ Companies should acquire CEP functionality by using an off-the-shelf application or SaaS
offering that has embedded CEP under the covers, if a product that addresses their particular business requirements is available.
■ When an appropriate off-the-shelf application or SaaS offering is not available, companies
should consider building their own CEP-enabled application on an iBPMS, ESB suite or
operational intelligence platform that has embedded CEP capabilities.
■ For demanding, high-throughput, low-latency applications — or where the event processing
logic is primary to the business problem — companies should build their own CEP-enabled
applications on commercial or open-source CEP platforms (see examples of vendors below) or
DSCPs.
■ In rare cases, when none of the other tactics are practical, developers should write custom CEP logic into their applications using a standard programming language without the use of a
commercial or open-source CEP or DSCP product.
Business Impact: CEP:
■ Improves the quality of decision making by presenting information that would otherwise be
overlooked.
■ Enables faster response to threats and opportunities.
■ Helps shield business people from data overload by eliminating irrelevant information and
presenting only alerts and distilled versions of the most important information.
CEP also adds real-time intelligence to operational technology (OT) and business IT applications.
OT is hardware and software that detects or causes a change through the direct monitoring and/or
control of physical devices, processes and events in the enterprise. For example, utility companies
use CEP as a part of their smart grid initiatives, to analyze electricity consumption and to monitor
the health of equipment and networks.
CEP is one of the key enablers of context-aware computing and intelligent business operations.
Much of the growth in CEP usage during the next 10 years will come from the Internet of Things,
digital business and customer experience management applications.
Recommended Reading: "Use Complex-Event Processing to Keep Up With Real-time Big Data"
"Best Practices for Designing Event Models for Operational Intelligence"
Sliding Into the Trough
Big Data
Analysis By: Mark A. Beyer
Definition: Big data is high-volume, high-velocity and high-variety information assets that demand
cost-effective, innovative forms of information processing for enhanced insight and decision making.
Position and Adoption Speed Justification: Big data has crossed the Peak of Inflated
Expectations. There is considerable debate about this, but when the available choices for a
technology or practice start to be refined, and when winners and losers start to be picked, the worst
of the hype is over.
It is likely that big data management and analysis approaches will be incorporated into a variety of
existing solutions, while simultaneously replacing some of the functionality in existing market
solutions (see "Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending
Through 2016"). The market is settling into a more reasonable approach in which new technologies
and practices are additive to existing solutions and creating hybrid approaches when combined
with traditional solutions.
Big data's passage through the Trough of Disillusionment will be fast and brutal:
■ Tools and techniques are being adopted before expertise is available, and before they are
mature and optimized, which is creating confusion. This will result in the demise of some
solutions and complete revisions of some implementations over the next three years. This is the
very definition of the Trough of Disillusionment.
■ New entrants into this practice area will create new, short-lived surges in hype.
■ A series of standard use cases will continue to emerge. When expectations are set properly, it
becomes easier to measure the success of any practice, but also to identify failure.
Some big data technologies represent a great leap forward in processing management. This is
especially relevant to datasets that are narrow but contain many records, such as those associated
with operational technologies, sensors, medical devices and mobile devices. Big data approaches to analyzing data from these technologies have the potential to enable big data solutions to
overtake existing technology solutions when the demand emerges to access, read, present or
analyze any data. However, inadequate attempts to address other big data assets, such as images,
video, sound and even three-dimensional object models, persist.
The larger context of big data is framed by the wide variety, and extreme size and number, of data
creation venues in the 21st century. Gartner clients have made it clear that big data technologies
must be able to process large volumes of data in streams, as well as in batches, and that they need
an extensible service framework to deploy data processes (or bring data to those processes) that
encompasses more than one variety of asset (for example, not just tabular, streamed or textual
data).
It is important to recognize that different aspects and varieties of big data have been around for more than a decade — it is only recent market hype about legitimate new techniques and solutions
that has created this heightened demand.
Big data technologies can serve as unstructured data parsing tools that prepare data for data
integration efforts that combine big data assets with traditional assets (effectively the first-stage
transformation of unstructured data).
User Advice:
■ Focus on creating a collective skill base. Specifically, skills in business process modeling,
information architecture, statistical theory, data governance and semantic expression are required to obtain full value from big data solutions. These skills can be assembled in a data
science lab or delivered via a highly qualified individual trained in most or all of these areas.
■ Begin using Hadoop connectors in traditional technology and experiment with combining
traditional and big data assets in analytics and business intelligence. Focus on this type of
infrastructure solution, rather than building separate environments that are joined at the level of
analyst user tools.
■ Review existing information assets that were previously beyond analytic or processing
capabilities ("dark data"), and determine if they have untapped value to the business. If they
have, make them the first, or an early, target of a pilot project as part of your big data strategy.
■ Plan on using scalable information management resources, whether public cloud, private cloud
or resource allocation (commissioning and decommissioning of infrastructure), or some other
strategy. Don't forget that this is not just a storage and access issue. Complex, multilevel,
highly correlated information processing will demand elasticity in compute resources, similar to
the elasticity required for storage/persistence.
■ Small and midsize businesses should address variety issues ahead of volume issues when
approaching big data, as variety issues demand more specialized skills and tools.
Business Impact: Use cases have begun to bring focus to big data technology and deployment
practices. Big data technology creates a new cost model that has challenged that of the data
warehouse appliance. It demands a multitiered approach to both analytic processing (many context-related schemas-on-read, depending on the use case) and storage (the movement of "cold"
data out of the warehouse). This resulted in a slowdown in the data warehouse appliance market
while organizations adjusted to the use of newly recovered capacity (suspending further costs on
the warehouse platform) and moving appropriate processing from a schema-on-write approach to a
schema-on-read approach.
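The schema-on-write versus schema-on-read distinction can be sketched in a few lines of Python; all field names and records here are invented for illustration:

```python
import json

# Schema-on-write: structure is fixed when data is loaded; anything that
# does not fit the predefined columns is lost up front.
WAREHOUSE_SCHEMA = ("order_id", "amount")

def load_schema_on_write(record: dict) -> tuple:
    # Fields outside the schema (e.g. "region") are dropped at load time.
    return tuple(record[col] for col in WAREHOUSE_SCHEMA)

# Schema-on-read: raw records are stored as-is; each consumer applies its
# own transformation at read time, so two business users can interpret
# the same source differently without reloading it.
raw = [json.dumps({"order_id": 1, "amount": 120.0, "region": "EMEA"}),
       json.dumps({"order_id": 2, "amount": 80.0, "region": "APAC"})]

def read_for_finance(lines):
    return sum(json.loads(l)["amount"] for l in lines)

def read_for_sales(lines):
    return {json.loads(l)["region"] for l in lines}

print(read_for_finance(raw))  # 200.0
print(read_for_sales(raw))
```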
In essence, the technical term "schema on read" means that if business users disagree about how
an information source should be used, they can have multiple transformations appear right next to
Page 60 of 98 Gartner, Inc. | G00264126
infrastructure. While SAP Hana is leading the charge, with 3,000 customers and hundreds in production, the addition of in-memory capabilities by all major players should further accelerate
production, the addition of in-memory capabilities by all major players should further accelerate
adoption of IMDBMS technology during the next two years.
Many use cases are supported by IMDBMSs. For example, solidDB and TimesTen were originally developed for high-speed processing of streaming data for applications such as fraud detection, with the data then written to a standard DBMS for further processing. Others, such as Altibase,
Aerospike and VoltDB, focus on high-intensity transactional processing. Some IMDBMSs — such
as Exasol, ParStream or Kognitio — are dedicated to in-memory analytical use cases. Finally, the
ability to support both analytical and transactional (aka HTAP) use cases on a single copy of the
data is gaining traction in the market — led by SAP and now Microsoft, along with smaller emerging
players such as Aerospike or MemSQL.
The promise of the IMDBMS is to combine, in a single database, both the transactional and
analytical use cases without having to move the data from one to the other. It enables new business
opportunities that would not have been possible previously, by allowing real-time analysis of
transactional data. One example is in logistics, where business analysts can offer customers rerouting options for potentially delayed shipping proactively rather than after the fact, creating a unique customer experience. Another example comes from online gambling, whereby
computing of the handicap could occur as a match is ongoing. To support such use cases, both the
transactional data and the analytics need to be available in real time. While analytical use cases
have seen strong adoption, for most organizations IMDBMS for HTAP technology remains three
years away.
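A minimal illustration of the single-copy HTAP idea, using SQLite's in-memory mode as a stand-in for an IMDBMS; the table, data and threshold are invented:

```python
import sqlite3

# One in-memory database holds the single copy of the data that serves
# both the transactional writes and the analytical query (HTAP in
# miniature), with no ETL step between the two.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE shipments (id INTEGER, route TEXT, delay_min INTEGER)")

# Transactional side: operational writes as shipments are updated.
db.executemany("INSERT INTO shipments VALUES (?, ?, ?)",
               [(1, "HAM-NYC", 0), (2, "HAM-NYC", 95), (3, "SIN-LAX", 10)])
db.commit()

# Analytical side: a real-time query over the same copy of the data,
# e.g. to spot routes worth proactive rerouting offers.
rows = db.execute("""SELECT route, AVG(delay_min)
                     FROM shipments GROUP BY route
                     HAVING AVG(delay_min) > 30""").fetchall()
print(rows)  # [('HAM-NYC', 47.5)]
```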
User Advice:
■ Continue to use IMDBMS as a DBMS for temporary storage of streaming data where real-time
analysis is necessary, followed by persistence in a disk-based DBMS.
■ IMDBMS for analytic acceleration is an effective way of achieving increased performance.
■ The single most important advancement is HTAP as a basis for new, previously unavailable
applications — taking advantage of real-time data availability, with IMDBMS for increased
performance and reduced maintenance. Organizations should monitor technology maturity and
identify potential business use cases to decide when to leverage this opportunity.
■ Vendor offerings are evolving fast and have various levels of maturity. Compare vendors from
both the technology and pricing perspectives.
Business Impact: These IMDBMSs are rapidly evolving and becoming mature and proven —
especially for reliability and fault tolerance. As the price of memory continues to decrease, the potential for the business is transformational:
■ The speed of the IMDBMS for analytics has the potential to simplify the data warehouse model
by removing development, maintenance and testing of indexes, aggregates, summaries and
cubes. This will lead to savings in terms of administration, improved update performance, and
increased flexibility for meeting diverse workloads.
Recommended Reading: "Who's Who in In-Memory DBMSs"
"Cool Vendors in In-Memory Computing, 2013"
"Taxonomy, Definitions and Vendor Landscape for In-Memory Computing Technologies"
"SAP's Business Suite on Hana Will Significantly Impact SAP Users"
Content Analytics
Analysis By: Carol Rozwell; Rita L. Sallam
Definition: Content analytics is a family of technologies that process content — and the behavior of
users in consuming content — to derive answers to specific questions and find patterns that drive
action. Content types include text of all kinds, such as documents, blogs, news sites, customer
conversations (both audio and text), video, and interactions occurring on the social Web. Analytic
approaches include text analytics, graph analytics, rich media and speech analytics, video analytics,
as well as sentiment, emotional intent and behavioral analytics.
Position and Adoption Speed Justification: The multiplicity of applications and the diverse range
of analytical techniques and vendors indicate that content analytics is still emerging. Some
techniques such as text analytics are relatively mature, while there is a great deal of hype surrounding some deployments of content analytics, such as sentiment analysis, and the use of
other techniques — such as emotional analysis and video analytics — is still very nascent.
Use of both general- and special-purpose content analytics applications continues to grow, whether
they are procured as stand-alone applications or added as extensions to search and content
management applications. The greatest growth comes from generally available content resources,
such as social data, public news feeds and documents, open data, contact center records and
post-sale service accounts. This leads to heavy uptake in CRM. Additionally, open-source
intelligence seeks to use content analytics for better understanding of public and semipublic
sentiment. However, other areas, such as HR, are leveraging content to optimize organizational
efficiency and hiring. Specific industries such as healthcare, life sciences, utilities and transportation
are leveraging insights across content and structured data to optimize processes and business
models. Software as a service (SaaS) vendors are emerging, offering APIs to let snippets of content
be programmatically sent to and analyzed in the cloud. This is an important development and will
help speed up adoption.
Another factor driving the interest in content analytics is the huge volume of information available to
be analyzed and the speed with which it changes.
User Advice: Enterprises should employ content analytics to automate the data preparation
process. It can replace time-consuming, manual and complex human analyses, such as reading,
summarizing and suggesting actionable insight in service records or postings resident in social
media. Look for opportunities to combine these new insights with analysis from traditional,
enterprise and other structured data sources to enhance existing analytics processes or to create new ones. Firms should identify the analytics functions that are most able to simplify and drive new
intelligence into complex business and analytic processes. Users should identify vendors with
specific products that meet their requirements, and should review customer case studies both
within and outside their industries to understand how others have exploited these technologies. An
oversight group can support application sharing, monitor requirements, and understand new
content analytics to identify where they can improve key performance indicators (KPIs), and use content analysis results as input to predictive analytic models. Appropriate groups for such roles
may already exist. They might already be devoted to associated technologies or goals, such as
content management, advanced analytics, social software, people-centered computing, or specific
business application categories such as marketing, CRM, security or worker productivity. Social
networking applications can be used to deliver information, gain access to customers and understand public opinion that may be relevant. New skills and tools (such as in linguistics, natural-language processing, image processing and machine learning) beyond those needed for traditional
business intelligence will be required.
It is important to note that there are risks in assuming that content analytics can effectively substitute for human analysis. In some cases, false signals may end up requiring more human effort to
sort out than more rudimentary monitoring workflows. The best practice is to optimize the balance
between automation and oversight. Until the tools mature, experts in the field of what the tool is
analyzing will be required to provide advice in context.
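The balance between automation and human oversight can be sketched as follows; the lexicon, scoring rule and review routing are illustrative, not taken from any vendor's product:

```python
# Minimal lexicon-based sentiment scorer with a human-review escape hatch:
# records the tool cannot score decisively are routed to an analyst
# instead of being forced into a possibly false signal.
POSITIVE = {"great", "fast", "helpful"}
NEGATIVE = {"broken", "slow", "refund"}

def score(text: str):
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos == neg:  # ambiguous or no signal: route to a human analyst
        return ("needs_review", 0)
    return ("positive", pos - neg) if pos > neg else ("negative", neg - pos)

print(score("great support, very fast"))      # ('positive', 2)
print(score("slow delivery, want a refund"))  # ('negative', 2)
print(score("it arrived"))                    # ('needs_review', 0)
```

The "needs_review" branch is the design choice: automating the easy cases while keeping experts in the loop for ambiguous ones is cheaper than sorting out false signals after the fact.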
Business Impact: Content analytics is used to support and enhance a broad range of analytic functions. It can:
■ Provide new insights into analytic processes to identify high-priority clients, the next best
action, product problems, and customer sentiment and service problems
■ Analyze competitors' activities and consumers' responses to a new product
■ Support security and law enforcement operations by analyzing photographs
■ Relate effective treatments to outcomes in healthcare
■ Detect fraud by analyzing complex behavioral patterns
■ Optimize asset management through preventative and predictive maintenance
Complex results can be represented as visualizations and embedded in analytic applications, making them easier for people to understand and take action.
Recommended Reading: "Use Search and Content Analytics to Increase Sales"
"How to Expand Your Information Infrastructure for Analytics With Content"
"Cool Vendors in Content and Social Analytics, 2014"
"How Crowdsourcing Can Reduce the Reliability of Social Media Analytics"
"Three Ways to Improve Your Content and Social Analytics"
Hybrid Cloud Computing
Analysis By: David W. Cearley; Donna Scott
Definition: Gartner defines hybrid cloud computing as the coordinated use of cloud services across
isolation and provider boundaries among public, private and community service providers, or
between internal and external cloud services. Like a cloud computing service, a hybrid cloud computing service is scalable, has elastic IT-enabled capabilities and self-service interfaces, and is delivered as a shared service to customers using Internet technologies. However, a hybrid cloud
service crosses isolation and provider boundaries.
Position and Adoption Speed Justification: Hybrid cloud computing is the coordinated use of
cloud services across isolation and provider boundaries among public, private and community service providers, or between internal and external cloud services. Hybrid cloud computing does
not refer to using internal systems and external cloud-based services in a disconnected or loosely
connected fashion. Rather, it implies significant integration or coordination between the internal and
external environments at the data, process, management or security layers.
Virtually all enterprises have a desire to augment internal IT systems with those of cloud services for
various reasons, including for capacity, financial optimization and improved service quality. Hybrid
cloud computing can take a number of forms. The following approaches can be used individually or
in combination to support a hybrid cloud computing approach within and across the various layers
— for example, infrastructure as a service (IaaS), platform as a service (PaaS) and software as a
service (SaaS):
■ Joint security and management — Security and/or management processes and tools are applied to the creation and operation of internal systems and external cloud services.
■ Workload/service placement and runtime optimization — Using data center policies to drive
placement decisions to resources located internally or externally, as well as balancing resources
to meet SLAs, such as for availability and response time.
■ Cloudbursting — Dynamically scaling out an application from an internal, private cloud platform
to an external public cloud service based on the need for additional resources.
■ Development/test/release — Coordinating and automating development, testing and release to
production across private, public and community clouds.
■ Availability/disaster recovery (DR)/recovery — Coordinating and automating synchronization,
failover and recovery between IT services running across private, public and/or community
clouds.
■ Cloud service composition — Creating a solution with a portion running on internal systems,
and another delivered from the external cloud environment in which there are ongoing data
exchanges and process coordination between the internal and external environments.
■ Dynamic cloud execution — The most ambitious form of hybrid cloud computing combines joint
security and management, cloudbursting and cloud service compositions. In this model, a
solution is defined as a series of services that can run in whole or in part on an internal private
cloud platform or on a number of external cloud platforms, in which the software execution (internal and external) is dynamically determined based on changing technical (for example,
performance), financial (for example, cost of internal versus external resources) and business
(for example, regulatory requirements and policies) conditions.
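The workload/service placement form described above can be sketched as a toy policy engine; all target names, prices, latencies and policy fields are invented:

```python
# Each workload is matched against internal and external targets using
# technical (latency), business (data sovereignty) and financial (cost)
# conditions, echoing the dynamic-execution decision factors above.
TARGETS = {
    "private": {"cost_per_hour": 0.30, "latency_ms": 5,  "sovereign": True},
    "public":  {"cost_per_hour": 0.12, "latency_ms": 40, "sovereign": False},
}

def place(workload: dict) -> str:
    # Keep only targets that satisfy the workload's policy constraints.
    candidates = {
        name: t for name, t in TARGETS.items()
        if t["latency_ms"] <= workload["max_latency_ms"]
        and (t["sovereign"] or not workload["regulated_data"])
    }
    # Among policy-compliant targets, pick the cheapest.
    return min(candidates, key=lambda n: candidates[n]["cost_per_hour"])

print(place({"max_latency_ms": 100, "regulated_data": True}))   # private
print(place({"max_latency_ms": 100, "regulated_data": False}))  # public
```

A real cloud management platform evaluates far richer policies continuously at runtime, but the shape of the decision is the same.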
We estimate no more than 20% of large enterprises have implemented hybrid cloud computing
beyond simple integration of applications or services. This declines to 10% to 15% for midsize
enterprises, which mostly are implementing the availability/disaster recovery use case. Most
companies will use some form of hybrid cloud computing during the next three years. Some
organizations are implementing cloud management platforms (CMPs) to drive policy-based
placement and management of services internally or externally. A fairly common use case is in the
high availability (HA)/DR arena, where data is synchronized from private to public or public to private for the purposes of resiliency or recovery. A less common but growing use case (due to
complexities of networking and latency) is cloudbursting. The grid computing world already
supports hybrid models executing across internal and external resources, and these are
increasingly being applied to cloud computing. More sophisticated, integrated solutions and
dynamic execution interest users, but are beyond the current state of the art.
Positioning has advanced significantly in a year (from peak to postpeak midpoint) as organizations
leverage and embrace the public cloud into their business processes and internal services, and
Definition: Gamification is the use of game mechanics and experience design to digitally engage
and motivate people to achieve their goals. Gartner has recently redefined gamification; in this new
definition, it is distinguished by its digital engagement model and the focus on motivating players to
achieve their goals (see "Redefine Gamification to Understand Its Opportunities and Limitations").
Position and Adoption Speed Justification: In the 2014 Hype Cycle, gamification has moved from the Peak of Inflated Expectations in 2013 to begin its entry into the Trough of Disillusionment
today. According to Google Trends, over the past year, the hype surrounding gamification overall
has leveled, and the number of critics of gamification is increasing. But client inquiries indicate the
focus for gamification has clearly shifted from being primarily consumer-facing and marketing-
driven, to becoming primarily an enterprise concern with a focus both internal and external to the
organization. Internal to organizations, gamification is being used in recruiting, onboarding, training,
wellness, collaboration, performance, innovation, change management and sustainability. This trend
is set to accelerate as larger vendors, such as salesforce.com, begin to integrate game mechanics
and analytics into their software offerings. In addition to externally focused solutions targeting
customers or communities of interest, there are also an increasing number of gamification solutions
focusing on specific communities of interest, particularly in civic, health and innovation areas.
Gamification leaders such as Nike, Khan Academy and Quirky demonstrate that gamification can
have a huge positive impact on engagement when applied in a suitable context. However,
gamification has significant challenges to overcome before widespread success occurs. Designing
a gamified solution is no easy task — successful solutions are focused on enabling players to
achieve their goals. Player goals and organizational goals must be aligned, and only then can the
organizational goals be achieved as a consequence of players achieving their goals. Successful
gamified solutions design an experience for players that takes them on a journey to achieving their
goals. Designing for engagement (rather than for efficiency) is a new skill, and one that is in short
supply in IT organizations. This will hinder the development of the trend over the next three years.
User Advice: Gamification builds motivation into a digital engagement model, and can be used to
add value to products and to deepen relationships by changing behaviors, developing skills or
driving innovation. The target audiences for gamification are customers, employees and
communities of interest.
Organizations planning to leverage gamification must clearly understand the goals of the target
audience they intend to engage, how those goals align with organizational goals and how success
will be measured.
Gamification technology comes in three forms:
■ General-purpose gamification platforms delivered as SaaS that integrate with custom-
developed and vendor-supplied applications
■ Purpose-built solutions supplied by a vendor to support a specific usage (for example,
innovation management or service desk performance)
■ Purely custom implementations
Organizations must recognize that simply including game mechanics is not enough to realize the
core benefits of gamification. Making gamified solutions sufficiently rewarding requires careful
planning, design and implementation, with ongoing adjustments to keep users interested. Designing
gamified solutions is unlike designing any other IT solution, and it requires a different design
approach. Few people have gamification design skills, which remains a huge barrier to success in
gamified solutions.
Organizations are beginning to use gamification as a means to motivate employees and customers.
Implementing gamification means matching player goals to target business outcomes, in order to
engage people on an emotional level, rather than on a transactional level.
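The goal-alignment principle — players earn progress only for actions that advance their own goals, with the organizational metric derived from that — can be sketched as a toy progress engine. Action names, point values and the badge rule are illustrative:

```python
# Game mechanics for an (invented) employee-training scenario: points,
# levels and a badge, all keyed to actions that serve the player's own
# learning goal rather than a raw business metric.
GOAL_ACTIONS = {"complete_module": 20, "pass_quiz": 30, "help_peer": 10}

def progress(actions: list[str]) -> dict:
    points = sum(GOAL_ACTIONS.get(a, 0) for a in actions)
    level = points // 50  # simple leveling mechanic
    return {"points": points, "level": level,
            "badge": "mentor" if actions.count("help_peer") >= 2 else None}

p = progress(["complete_module", "pass_quiz", "help_peer", "help_peer"])
print(p)  # {'points': 70, 'level': 1, 'badge': 'mentor'}
```

The mechanics are the easy part; as the text notes, designing the experience so these rewards actually motivate players is the scarce skill.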
Business Impact: Gamification can increase the effectiveness of an organization's digital business
strategy. It provides a means of packaging motivation and delivering it digitally to add value to
products and relationships. While many of the concepts in gamification have been around for a long
time, the advantage of a digital engagement model is that it scales to virtually any size, with very
low incremental costs. Its use is relevant, for example, to marketing managers, product designers,
customer service managers, financial managers and HR staff, whose aim is to bring about longer-lasting and more-meaningful interactions with customers, employees or the public.
Although gamification can be beneficial, it's important to design, plan and iterate on its use to avoid
the negative business impacts of unintended consequences, such as behavioral side effects or
gamification fatigue.
User engagement is at the heart of today's "always connected" culture. Incorporating game
mechanics encourages desirable behaviors, which can, with the help of carefully planned scenarios
and product strategies, increase user participation, improve product and brand loyalty, advance
learning and understanding of a complex process, accelerate change adoption, and build lasting
Augmented Reality
Analysis By: Tuong Huy Nguyen; CK Lu
Definition: Augmented reality (AR) is the real-time use of information in the form of text, graphics,
audio and other virtual enhancements integrated with real-world objects and presented using a heads-up display (HUD)-type display or projected graphics overlays. It is this "real world" element
that differentiates AR from virtual reality. AR aims to enhance users' interaction with the
environment, rather than separating them from it.
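A minimal sketch of the location-based flavor of AR: given the device's position and compass heading, compute where a geolocated point of interest should be drawn in the camera view. The field of view, resolution and coordinates are illustrative:

```python
import math

FOV_DEG = 60  # assumed horizontal camera field of view

def bearing(lat1, lon1, lat2, lon2):
    # Initial great-circle bearing from the device to the point of interest.
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(math.radians(lat2))
    x = (math.cos(math.radians(lat1)) * math.sin(math.radians(lat2)) -
         math.sin(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(device_heading, poi_bearing, width_px=1280):
    # Horizontal pixel column for the overlay, or None if out of view.
    offset = (poi_bearing - device_heading + 180) % 360 - 180
    if abs(offset) > FOV_DEG / 2:
        return None
    return round(width_px * (0.5 + offset / FOV_DEG))

b = bearing(48.8584, 2.2945, 48.8606, 2.3376)  # Eiffel Tower -> Louvre
print(screen_x(90.0, b))  # pixel column for the label when facing east
```

A vision-based AR pipeline replaces the bearing computation with image recognition, but the final step of anchoring virtual content to a screen position is the same.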
Position and Adoption Speed Justification: Although verticals such as automotive and military have been using AR for many years, the technology entered the mainstream driven by the interest in, and proliferation of, mobile devices and geolocation services. Recent focus has shifted back to
vision-based identification AR. This technology supplements location-dependent AR and provides
additional use-case scenarios.
A growing number of brands, retailers, manufacturers and companies in various verticals have
shown interest in, or are using, AR to enhance internal and/or external business processes. Hype
around AR has stabilized. This has allowed more companies to look beyond the initial hype to
explore AR's potential to provide business innovation, enhance business processes and provide
high value to external clients. The biggest challenge for external-facing AR is gimmicky
implementations — solutions that provide the consumer no value. This will potentially limit consumer interest in, and adoption of, the technology. Internal-facing implementations have better
potential for adoption to bring business value because they won't be hindered by consumer
preferences. Advancement of heads-up display will further encourage use of AR as an enterprise
tool.
Beyond audience-based challenges, a number of factors will continue to hinder AR adoption.
■ Rigorous device requirements restrict the information that can be conveyed to the end user.
Cloud computing initiatives will alleviate some of this burden.
■ Data costs for always-on connectivity.
■ Privacy concerns for both location and visual identification-based AR.
■ Standardization for browsers' data structure.
User Advice:
■ Communications service providers: Examine whether AR would enhance the user experience
of your existing services. Compile a list of AR developers with which you could partner, rather than building your own AR from the ground up. Provide end-to-end professional services for
specific vertical markets, including schools, healthcare institutions and real estate agencies, in
which AR could offer significant value.
■ Mobile device manufacturers: Recognize that AR provides an innovative interface for your
mobile devices. Open discussions with developers about the possibility of preinstalling
application clients on your devices and document how developers can access device features.
■ AR developers: Take a close look at whether your business model is sustainable, and consider
working with CSPs or device manufacturers to expand your user base, perhaps by offering
white-label versions of your products. Integrate AR with existing tools, such as browsers or
maps, to provide an uninterrupted user experience.
■ Providers of search engines and other Web services: Get into AR as an extension of your search business. AR is a natural way to display search results in many contexts.
■ Mapping vendors: Add AR to your 3D map visualizations.
■ Early adopters: Examine how AR can bring value and ROI to your organization and your
customers by offering branded information overlays. For workers who are mobile (including
factory, warehousing, maintenance, emergency response, queue-busting or medical staff),
identify how AR could deliver context-specific information at the point of need or decision.
■ Brands, marketers and advertisers: Use AR to bridge your physical and digital marketing
assets and drive increased engagement with your user base.
Business Impact: AR is used to bridge the digital and physical world. This has an impact on both
internal- and external-facing solutions. For example, internally, AR can provide value by enhancing
training, maintenance and collaboration efforts. Externally, it offers brands, retailers and marketers the ability to seamlessly combine physical campaigns with their digital assets.
CSPs and their brand partners can leverage AR's ability to enhance the user experience within their
location-based service (LBS) offerings. This can provide revenue via set charges, recurring
subscription fees or advertising. Handset vendors can incorporate AR to enhance UIs, and use it as
a competitive differentiator in their device portfolio. The growing popularity of AR opens up a market
opportunity for application developers, Web services providers and mapping vendors to provide
value and content to partners in the value chain, as well as an opportunity for CSPs, handset
Definition: Managed machine to machine (M2M) communication services encompass integrated
and managed infrastructure, application and IT services to enable enterprises to connect, monitor
and control business assets and related processes over a fixed or mobile connection. Managed
M2M services contribute to existing IT and/or operations technology (OT) processes. M2M
communication services are the connectivity services for many Internet of Things (IoT)
implementations.
Position and Adoption Speed Justification: M2M technology continues to fuel new business
offerings and support a wide range of initiatives, such as smart meters, road tolls, smart cities,
smart buildings and geofencing assets, to name a few.
The key components of an M2M system are:
■ Field-deployed wireless devices with embedded sensors or radio frequency identification (RFID)
technology
■ Wireless and wireline communication networks, including cellular communication, Wi-Fi, ZigBee, WiMAX, generic DSL (xDSL) and fiber to the x (FTTx) networks
■ A back-end network that interprets data and makes decisions (for example, e-health
applications are also M2M applications)
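The sensor-to-back-end loop implied by these components can be sketched as follows; thresholds, device IDs and field names are invented:

```python
# End-to-end M2M loop in miniature: a field sensor reading travels over a
# (simulated) network uplink to a back end that interprets the data and
# issues a control decision, as in the smart-meter use case above.
ALERT_THRESHOLD_KWH = 5.0  # consumption per interval that triggers an alert

def backend_decide(reading: dict) -> dict:
    action = ("notify_operator" if reading["kwh"] > ALERT_THRESHOLD_KWH
              else "store_only")
    return {"device": reading["device"], "action": action}

# Simulated uplink from two field devices.
readings = [{"device": "meter-0017", "kwh": 1.2},
            {"device": "meter-0042", "kwh": 7.9}]
decisions = [backend_decide(r) for r in readings]
print(decisions)
# [{'device': 'meter-0017', 'action': 'store_only'},
#  {'device': 'meter-0042', 'action': 'notify_operator'}]
```

In a deployed system, the transport would be one of the networks listed above and the decision logic would sit in the managed service provider's back end; the control loop itself is this simple.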
There are currently few service providers that can deliver end-to-end M2M services. The value
chain remains fragmented. Service providers are trying to partner with others to create a workable
ecosystem.
M2M services are currently provided by three types of provider:
■ M2M service providers. Mobile virtual network operators and companies associated with an
operator that can piggyback on that operator's roaming agreements (for example, Wyless, Kore
Telematics and Jasper Wireless).
■ Communications service providers (CSPs). Some CSPs, such as Orange in Europe and AT&T
in North America, have quietly supplied M2M services for several years. However, CSPs are
now marketing M2M services more vigorously, and those without a strong M2M presence so far
are treating it more seriously by increasing their marketing or creating dedicated M2M service
divisions (for example, T-Mobile, Telenor and Vodafone).
■ M2M service aggregators. These encompass traditional outsourcers and emerging players
that bundle connectivity into systems resale and integration (such as Modus Group or Integron).
One of the key technology factors that may affect M2M service deployment is the capability to support mobile networks. Early M2M services were smart meters, telematics and e-health monitors,
which are expected to be widely used in the future. In its Release 10, the Third Generation
Partnership Project (3GPP) worked on M2M technology to enhance network systems in order to
offer better support for machine-type communications (MTC) applications. The 3GPP's TS 22.368
specification describes common and specific service requirements for MTC. The main functions
specified in Release 10 are overload and congestion control, and the recently announced Release
11 investigates additional MTC requirements, use cases and functional improvements to existing
specifications. End-to-end real-time security will also become an important factor when more
important vertical applications are brought into cellular networks.
Another key factor on the technology side that may impact mass deployment of M2M
communication services is the level of standardization. Some key M2M technology components
(RFID, location awareness, short-range communication and mobile communication technologies, for example) have been on the market for quite a long time, but there remains a lack of the standardization necessary to make M2M services cost-effective and easy to deploy, and thereby enable this market to take off. M2M standardization may involve many technologies (such as the
Efficient XML Interchange [EXI] standard, Constrained Application Protocol [CoAP] and Internet
Protocol Version 6 over Low-Power Wireless Personal Area Networks [IPv6/6LoWPAN]) and
stakeholders, including CSPs, RFID makers, telecom network equipment vendors and terminal
providers. The European Telecommunications Standards Institute has a group working on the
definition, smart-metering use cases, functional architecture and service requirements for M2M
technology.
We expect that M2M communication services will be in the Trough of Disillusionment in 2015. Procurement teams will perceive that prices are too high and the space unnecessarily complex (for
example, roaming or multi-country implementations) — especially when contrasted to consumer/
wearables IoT that will use the smartphone as a gateway to the Internet.
User Advice: As M2M communications grow in importance, regulators should pay more attention
to standards, prices, terms and conditions. For example, the difficulty of changing operators during
the life of equipment with embedded M2M technology might be seen by regulators as potentially
monopolizing. Regulators in France and Spain already require operators to report on M2M
connections, and we expect to see increased regulatory interest elsewhere.
For the end user, the M2M market is very fragmented because no single end-to-end M2M provider exists. A number of suppliers offer enterprise users monitoring services, hardware development,
wireless access services, hardware interface design and other functions. As a result, an adopter has
to do a lot of work to integrate the many vendors' offerings. On top of this, business processes may
need redefining.
While M2M is usually part of a closed-loop operational technology (OT) environment run by engineering, it could be facilitated and exploited by an aligned IT and OT approach. In some cases, M2M may be deployed and supported by IT departments with adequate skills and understanding.
An enterprise's M2M technology strategy needs to consider the following issues:
■ Scope of deployment
■ System integration method
■ Hardware budget
■ Application development and implementation
■ Wireless service options
Gartner, Inc. | G00264126 Page 73 of 98
8/19/2019 Report Hype Cycle for Emerging Tech 264126
This research note is restricted to the personal use of [email protected]
furthered the cause of device interoperability. Over this past year, we have seen an increased
interest in mobile health monitoring, driven by a number of factors:
■ Smartphone use by adults is at an all-time high.
■ The increased burden of chronic disease in emerging markets, many of which have poor landline coverage and better mobile coverage, is generating interest from government
healthcare agencies in deploying mobile versions of home health monitoring devices.
■ Interest is growing among healthcare delivery organizations (HDOs) in developed and emerging
markets in using mobility to overcome the "location dependence" limitation of home health
monitoring technologies. The use of portable or wearable devices opens the possibility of
monitoring active, mobile patients continually and in real time.
■ The popularity of personal health record (PHR) applications is enabling healthcare consumers to
create Web-based healthcare data repositories that are able to accept data from health and
fitness monitoring devices.
■ The so-called "quantified self" is generating increasing fascination. Sports product
manufacturers, such as Adidas and Nike, are offering motion trackers that help create a better
jogging experience. Professional sports teams use a variety of dedicated sensors and devices
to measure the performance of team players. The widespread adoption of smartphones with
low-cost applications that enable mobile health monitoring is leading to growing interest among
healthcare consumers in self-monitoring.
■ Affordable, wireless gateways connect to standard home health monitors (weight, blood
glucose and blood pressure) and automate data collection and secure transmission.
■ Smartphone manufacturers will begin to incorporate biosensor and monitoring technologies into the devices, making it easier for medical application developers to deploy their functionality and less expensive for the consumer.
■ The number of wearable devices on the market with the potential to help both patients and
clinicians monitor vital signs and symptoms has increased dramatically.
■ Acceptance of cloud-based services by HDOs is increasing.
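Several of the drivers above assume that monitoring devices can hand structured readings to Web-based PHR repositories. The sketch below shows one plausible shape for such a reading; the class, field names and units are illustrative assumptions, not any vendor's actual schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical sketch: the kind of structured observation a connected
# glucometer might upload to a Web-based PHR repository.

@dataclass
class GlucoseReading:
    patient_id: str
    mg_per_dl: int      # blood glucose concentration
    taken_at: str       # ISO 8601 timestamp
    device_id: str

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

reading = GlucoseReading(
    patient_id="p-001",
    mg_per_dl=104,
    taken_at=datetime(2014, 7, 28, 8, 30, tzinfo=timezone.utc).isoformat(),
    device_id="meter-42",
)
payload = reading.to_json()
print(payload)
```

A production service would add transport security, device authentication and clinical coding on top of a payload like this.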
Despite growing interest, most deployments of mobile health monitoring are pilot projects. HDOs,
for the most part, are not yet convinced that the business case for mobile health monitoring is viable
and have not yet shown the organizational commitment to develop sustainable services on a large
scale. In 2012, Telcare (see "Cool Vendors in Healthcare Providers, 2012") began shipping U.S.
Food and Drug Administration (FDA)-cleared glucometers that automatically connect to a cellular data network and integrate with Telcare's own website, payer call centers and the electronic health
records of healthcare providers. The ease of deployment of products such as Telcare's should help
move some pilots to larger-scale, operational programs — in part driven by the fact that consumer
mobile blood glucose monitoring devices (such as iHealth's Wireless Smart Gluco-Monitoring
System) are gaining greater acceptance with consumers and employers. As mobile health
monitoring evolves and its clinical uses become more clearly defined, it will most likely fragment into
certain submarkets focused on particular clinical areas, such as obesity, chronic obstructive
pulmonary disease (COPD), diabetes and cardiac care.
User Advice: Whether mobile health monitoring pilots evolve into operational deployments depends
on the ability of HDOs to overcome certain obstacles, including legal and licensing restrictions,
inconsistent reimbursement by healthcare payers, and the reality that mobile health monitoring will require new staffing and workforce considerations and new business processes for dealing with
remotely generated patient data, as well as new ways of integrating this information into their
business and clinical systems.
HDOs should focus on the process and business issues raised by mobile health monitoring. It is
essential to develop the ability to manage large numbers of mobile devices and remote patients, to
change business and clinical processes to handle remotely generated patient data, and to change
the staffing model to be able to orchestrate time-critical interventions for patients.
HDOs should not rush to replace home health monitoring in favor of mobile health monitoring.
Mobile health monitoring will be used as a supplement or alternative to home health monitoring, and will likely be used to serve certain types of patients (such as younger, more active and more tech-savvy patients).
Business Impact: If deployed appropriately, mobile health monitoring will enable closer monitoring
and faster intervention in the care of certain groups of patients. Mobile health monitoring can
improve patient engagement, enhance the patient experience and increase adherence to care
Definition: Cloud computing is a style of computing in which scalable and elastic IT-enabled
capabilities are delivered as a service using Internet technologies.
Position and Adoption Speed Justification: Cloud computing remains a very visible and hyped
term, but, at this point, it is approaching the Trough of Disillusionment. There are many signs of
fatigue, rampant cloudwashing and disillusionment (for example, highly visible failures). Cloud computing remains a major force in IT and still figures prominently in vendor messaging. Every IT
vendor has a cloud strategy, although many aren't cloud-centric and some of their cloud strategies
are in name only. Users are changing their buying behaviors, and, although they are unlikely to
completely abandon on-premises models or source all complex, mission-critical processes as
services through the cloud in the near future, there is a movement toward consuming services in a
more cost-effective way and toward enabling capabilities not easily done elsewhere. Much of the
focus is on agility, speed and other non-cost-related benefits.
Cloud computing has been, and continues to be, one of the most hyped terms in the history of IT.
Its hype transcends the IT industry and has entered popular culture, which has had the effect of
increasing hype and confusion around the term. In fact, cloud computing hype is literally "off the charts," as Gartner's Hype Cycle does not measure the amplitude of hype (that is, a heavily hyped term such as cloud computing rises no higher on the Hype Cycle than anything else).
Although the hype has long since peaked, there is still a great deal of hype surrounding cloud
computing and its many relatives. Although the Hype Cycle does not measure amplitude, cloud still
has more hype than many other technologies that are actually at or near the Peak of Inflated
Expectations. Variations, such as private cloud computing and hybrid approaches, compound the
hype and reinforce that one dot on a Hype Cycle cannot adequately represent all that is cloud
computing.
The hype around cloud computing continues to evolve as the market matures. Initial hype about cost savings has given way to a focus on the business benefits that organizations can realize from a shift to cloud computing. While some organizations have realized some cost savings, more and
more are focusing on other benefits, such as agility, speed, time to market and innovation.
User Advice: User organizations must demand road maps for the cloud from their vendors. Users
should look at specific usage scenarios and workloads, map their view of the cloud to that of
potential providers and focus more on specifics than on general cloud ideas. Understanding the
service models involved is key.
Vendor organizations must begin to focus their cloud strategies on more specific scenarios and
unify them into high-level messages that encompass the breadth of their offerings. Differentiation in
hybrid cloud strategies must be articulated and will be challenging as all are "talking the talk," but
many are taking advantage of the even broader leeway afforded by the term. Cloudwashing should
be minimized.
Cloud computing involves many components, and some aspects are immature. Care must be taken
to assess maturity and assess the risks of deployment. Tools such as cloud service brokerages can
help.
release, Google enhanced the Android OS to eventually support all three modes specified by the
NFC Forum — another first. Currently, all the major smartphone OS vendors, with the notable
exception of Apple, provide native support for NFC:
■ Android: Acer, Asus, HTC, Huawei, Lenovo, LG, Motorola, Samsung, Sony and ZTE
■ BlackBerry OS: BlackBerry
■ Windows Phone: Nokia and Samsung
By embedding NFC in the smartphone platform, the hardware and software companies hope to
move beyond payments and provide the developer community with another tool to foster innovative
applications. Several smartphone and consumer electronics companies have been particularly
aggressive in exploring new NFC uses:
■ Samsung: It has highlighted several NFC use cases, such as video exchange, in its
commercials. Some of these use cases have been used to distinguish its Galaxy line of
smartphones from the iPhone.
■ Sony: It has introduced a complete line of consumer electronics devices, such as TVs, remote
controls, boomboxes, speakers and headsets, with NFC capabilities built in.
■ LG: It has expanded NFC capabilities into home appliances, such as refrigerators and vacuum
cleaners.
■ Nintendo: The inclusion of NFC in Wii U's GamePad enables NFC functionality for future game play. The company also plans to use it for digital payments.
■ Disney: Its NFC-enabled Disney Infinity line of games and toys and its MyMagic+ wristband for its theme parks met with great commercial success in 2013.
NFC payment, however, which involves multiple parties with differing interests and agendas, remains the most complex and time-consuming application to implement. For the next few years, growth of
NFC will be primarily in smartphones and the surrounding digital ecosystem devices, such as
tablets, PCs, printers and TVs. For NFC to take off in payments, a compelling case must be made
for the merchants and the financial ecosystem to invest in the necessary infrastructure. The recent introduction of Host Card Emulation (HCE) in Android 4.4 (KitKat) may finally help NFC payments get off the ground, at least on the supply side of the equation.
In other markets, NFC has started to get more traction. In transportation, proprietary contactless
technologies (such as NXP Semiconductors' Mifare) have dominated the market. New industry
organizations, such as the Open Standard for Public Transport (OSPT) Alliance, are now looking to promote standards-based NFC for the transportation application. In the enterprise, vendors such as
HID Global are now promoting NFC-based solutions for both physical access (for example, building
entry) and IT access (for example, server login). Finally, NFC is also starting to emerge in automotive
applications, including the following:
■ Bluetooth pairing of mobile devices to in-vehicle infotainment systems
■ Evaluate handheld and camera-based gesture recognition for potential business applications
involving controlling screen displays from a distance (the "lean-back" operating zone).
■ Evaluate wearable devices to see where they may be employed to enable new modes of
interaction.
■ Evaluate the emerging generation of desktop-oriented devices, and consider what role they
may play in the "lean-in" operating zone.
■ Consider how these may be combined with location-based information and augmented-reality
displays.
Even the simplest use of gesture, movement or touch can be introduced to existing products
(especially in the handheld space) to enhance the user experience.
Business Impact: The ability to interact and control without physical contact frees the user and
opens up a range of intuitive interaction opportunities, including the ability to control devices and
large screens from a distance. For smaller desktop, handheld and wearable devices, the ability to control the device without physical contact opens up valuable possibilities in a variety of markets,
but especially in healthcare applications (where physical contact may result in the transfer of
infectious material). Gesture control also benefits the design aesthetics of touch-based devices,
allowing users to avoid unsightly fingerprints on their devices.
In-memory analytics is no longer a fringe technology; it is increasingly becoming the dominant
performance layer for BI and analytic application architectures. The time taken to reach the Plateau
of Productivity was less than two years (previously this technology remained two to five years away
from the plateau for several years).
User Advice: For response-time issues and bottlenecks, IT organizations should consider the performance improvement that in-memory analytics can deliver, especially when run on 64-bit
infrastructure. Users should be careful to use in-memory analytics as a performance layer and not
as a substitute for a data warehouse. In fact, users considering utilizing in-memory analytics should
also be aware of how their requirement for speedier query processing and analysis could be
addressed by the use of in-memory processing in the underlying databases feeding BI or via in-
memory databases or data grids.
BI and analytic leaders need to be aware that in-memory analytics technology has the potential to
subvert enterprise-standard information management efforts through the creation of in-memory
analytic silos. Where it is used in a stand-alone manner, organizations need to ensure they have the
means to govern its usage and that there is an unbroken chain of data lineage from the report to the original source system, particularly for system-of-record reporting.
Finally, it is becoming apparent that, as the scale of in-memory analytics deployments grows, performance tuning is still needed, whether by returning some aggregation to data load, or by managing application design against user concurrency requirements and the sizing of hardware and available RAM.
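The tuning option of returning some aggregation to data load can be sketched as follows: rather than holding every raw fact row in memory, rows are rolled up to the grain that queries actually need as they are loaded. The data and rollup grain below are invented for illustration.

```python
from collections import defaultdict

# Sketch of aggregation at data load: roll raw fact rows up to the
# query grain (here, revenue by region) while loading, so queries hit
# a small in-memory summary instead of the full fact table.

raw_rows = [
    ("EMEA", "2014-Q1", 120.0),
    ("EMEA", "2014-Q2", 135.5),
    ("APAC", "2014-Q1", 80.0),
    ("APAC", "2014-Q2", 95.0),
]

def load_with_aggregation(rows):
    """Build an in-memory aggregate keyed by region at load time."""
    summary = defaultdict(float)
    for region, _quarter, revenue in rows:
        summary[region] += revenue
    return dict(summary)

summary = load_with_aggregation(raw_rows)
print(summary["EMEA"])  # 255.5
```

The trade-off is the one the text describes: the rollup saves RAM and speeds queries, but any question at a finer grain than the rollup must go back to the source.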
Business Impact: BI and analytic programs can benefit broadly from the fast response times
delivered by in-memory computing, and this in turn can improve the end-user adoption of BI and
analytics. The reduced need for database indexing and aggregation enables database
administrators to focus less on the optimization of database performance and more on value-added activities. Additionally, in-memory analytics by itself will enable better self-service analysis because
there will be less dependence on aggregates and cubes built in advance by IT.
However, from an analyst user perspective, faster queries alone are not enough to drive higher
adoption. In-memory analytics is of maximum value to users when coupled with interactive
visualization capabilities or used within data discovery tools for the highly intuitive, unfettered and
Activity Streams
Analysis By: Nikos Drakos
Definition: An activity stream is a publish-and-subscribe notification mechanism and conversation
space typically found in social networking applications. It lists activities or events relevant to a person, group or topic within the application. A participant subscribes to or "follows" entities, such
as other participants or business application objects, to track their related activities (a project
management application may add status information, for example), while a physical object
connected to the Internet may report its state, such as a flight delay.
Position and Adoption Speed Justification: Activity streams are popular in consumer social
networking sites such as Facebook and Twitter, as well as in enterprise social networking
applications. Activity streams aggregate notifications from the system as well as information about
the activities of other "followed" individuals ('likes', comments, shared items, for example). In
business environments, activity streams may also contain information about events that are pushed
into a stream from business applications. Activity streams have the potential to become a general-purpose mechanism for personalized dashboards through which to disseminate and filter
information; a mechanism for connecting groups and communities; and a rich "presence"
mechanism.
Beyond notifications and conversations, it is also possible to use live widgets or gadgets — for
example, a simple browser-based or mobile interactive application — to notify someone about an
event, as well as allow them to interact with that notification. For example, a notification about a
survey may include some data collection, or a notification about an expense report may contain
action buttons that can open the report or allow an authorized user to approve it. Activity streams
populated with automated notifications provide a simple mechanism that can stimulate and focus
conversations around specific events, as well as broaden visibility and participation across different
groups.
Activity streams can be exposed within many contexts, including various places within a social
network site (for example, a profile, group or topic page); or they can be embedded within an
application (for example, an email sidebar or beside a business application record).
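The publish-and-subscribe pattern described above can be sketched in a few lines. This is a toy model of the mechanism, not any product's API; the class and method names are our own.

```python
from collections import defaultdict

# Minimal sketch of an activity stream: participants follow entities
# (people, groups, business application objects), and events published
# about those entities land in each follower's stream.

class ActivityStream:
    def __init__(self):
        self.followers = defaultdict(set)   # entity -> set of followers
        self.streams = defaultdict(list)    # follower -> list of events

    def follow(self, follower, entity):
        self.followers[entity].add(follower)

    def publish(self, entity, event):
        # Fan the event out to everyone following the entity.
        for follower in self.followers[entity]:
            self.streams[follower].append((entity, event))

stream = ActivityStream()
stream.follow("alice", "project-x")
stream.follow("bob", "project-x")
stream.publish("project-x", "status changed to 'at risk'")
print(stream.streams["alice"])
```

The fan-out-on-publish design shown here is only one option; large consumer services often fan out on read instead, trading storage for publish latency.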
User Advice: Tools that help individuals expand their "peripheral vision" with little effort can be
useful. Being able to choose to be notified about the ideas, comments or activities of others on the
basis of who they are or the strength of a relationship is a powerful mechanism for managing
information from an end user's perspective. Unlike email, with which the sender may miss
interested potential recipients or overload uninterested ones, publish-and-subscribe notification
mechanisms such as activity streams enable recipients to fine-tune and more effectively manage the information they receive.
Activity streams should be assessed in terms of their relevance as general-purpose information
access and distribution mechanisms. Most established enterprise software vendors, as well as
many specialist ones, have introduced activity streams in their products, and it is important to be
ready to understand their implications in terms of business value, cost and risk.
Recommended Reading: "Boost Collaboration With 'Social Everywhere' Application Architectures"
Enterprise 3D Printing
Analysis By: Marc Halpern; Zalak Shah
Definition: 3D printing is an additive technique that uses a device to create physical objects from digital models. "Enterprise" refers to private- or public-sector organizations' use of 3D printing for
product design, development and prototyping, as well as educational institutions at all levels.
Enterprise 3D printing also includes the use of 3D printers in a manufacturing process to produce
finished goods.
Position and Adoption Speed Justification: 3D printing technologies have been available for
product prototyping and short-run parts manufacturing for almost 30 years. Yet, enterprise 3D
printing is still an adolescent market, with 5% to 20% market penetration, characterized by evolving
technology capabilities, methodologies and associated infrastructure and ecosystems despite the
age of the technology.
Until recently, 3D printing was used primarily for prototyping new designs. The pace of adoption for a broader range of enterprise activities was slow because the cost of printers was too high and, until recent years, 3D printers could not print parts with structural strength suitable for most mechanical uses.
Today, manufacturers are beginning to seriously consider, and in some cases, already are using, 3D
printing for manufacturing new and replacement parts, as well as the tools, jigs and fixtures used in
the manufacturing or assembly of other finished goods.
Today, while the range of materials that can be 3D printed is narrow and only slowly expanding, enterprises are evaluating the "cross-over" point at which the total cost of long-run, traditionally manufactured parts falls below the total cost of short-run 3D-printed items. While the material range, finished-part quality and total cost factor into the enterprise's decision making, so too does the recognition that some innovative new designs with unusual or complex geometry can be produced with 3D printing and are difficult or impossible to produce with any of the traditional manufacturing technologies.
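The "cross-over" evaluation described above amounts to comparing two cost curves: traditional manufacturing carries a high fixed tooling cost but a low per-unit cost, while 3D printing has little setup cost but a higher per-unit cost. The figures below are invented purely for illustration.

```python
# Toy cost-crossover sketch. All figures are made-up assumptions:
# tooling_cost = fixed cost of traditional tooling, unit costs per part.

def total_traditional(units, tooling_cost=50_000.0, unit_cost=2.0):
    return tooling_cost + unit_cost * units

def total_3d_printed(units, unit_cost=12.0):
    return unit_cost * units

def crossover_units(tooling_cost=50_000.0, trad_unit=2.0, printed_unit=12.0):
    """Run length below which 3D printing is the cheaper option."""
    return tooling_cost / (printed_unit - trad_unit)

print(crossover_units())  # 5000.0 -- shorter runs favor 3D printing
```

With these assumed figures, a 1,000-part run is cheaper to print while a 10,000-part run is cheaper to tool up for, which is exactly the short-run versus long-run distinction the text draws.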
As the technology continues to develop, providers are introducing lower-cost devices with better
functionality and a wider range of materials to choose from. The 3D printers at this end of the
market have become more common, practical office and lab devices that, in some cases, fit on a
desktop.
User Advice:
■ Experts knowledgeable in 3D printing materials must validate that mechanical characteristics of
3D printed parts are suitable for their intended use.
■ Those responsible for managing the manufacturing costs of produced parts must weigh the
trade-offs between printing 3D parts versus employing traditional manufacturing approaches.
■ Enterprises must consider use of 3D printing to create the jigs, fixtures and cutting tools used as part of traditional manufacturing processes. If printing finished parts is cost-prohibitive, use of 3D printing to produce such factory tooling can still make an enterprise's manufacturing operations more cost-effective and responsive.
■ Those responsible for service and repairs should consider use of 3D printing to produce
replacement parts. This can be particularly cost-effective if original parts were very expensive or if, for old equipment, spare parts are no longer available.
■ Those responsible for 3D printing strategy should ensure that users are adequately trained in 3D
modeling techniques needed to produce parts and products via 3D printers.
Business Impact:
■ 3D printing makes creation of unique customer products more scalable across many
manufacturing industries. This is particularly true for dental products and medical devices. It
facilitates co-creation of products with end customers.
■ 3D printing replacement and spare parts can significantly reduce the amount of inventory and
warehouse space that enterprises need to maintain. It would also extend the lifetime of products because replacement parts or parts needed for upgrades could be 3D printed.
■ 3D printing tools, jigs and fixtures can reduce manufacturing costs and make manufacturers more agile, enabling faster delivery to customers.
■ Plummeting prices of the low-end, consumer-focused material extrusion 3D printers producing
plastic items will encourage enterprises to use them in the creation of prototypes by product
development groups. Use of more and cheaper prototypes improves design for
manufacturability and overall product quality.
■ 3D printing could potentially increase concerns about intellectual property theft across
manufacturers.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: 3D Systems; EnvisionTEC; Eos Systems; ExOne; Formlabs; Mcor Technologies;
Stratasys
Recommended Reading: "Cool Vendors in 3D Printing, 2014"
"Use the Gartner Business Model Framework to Determine the Impact of 3D Printing"
"How 3D Printing Disrupts Business and Creates New Opportunities"
"3D Photo Booth Will Help Drive Awareness and Momentum for 3D Printing"
3D Scanners
Analysis By: Marc Halpern
Definition: A 3D scanner is a device used across industrial and consumer enterprises, including
retail, that captures data about the shape and appearance of real-world objects to create 3D models of them. A 3D scanner captures the characteristics of the object, ranging from products and
facilities to human body shapes including bones, teeth, and ears (for example, for fitting hearing
aids), and converts them into digital form.
Position and Adoption Speed Justification: Gartner began seeing the use of 3D scanners among
manufacturers during the late 1990s. The earliest users adopted 3D scanners to reverse engineer
designs, create medical devices such as custom hearing aids, and do quality control of
manufactured parts. "Clouds of points" from scanned parts were compared with 3D models built
with nominal dimensions to see the fit of points on actual parts to the idealized geometry. Software
companies created an innovation called the "soft gauge," which added tolerance zones to the
computer-aided design (CAD) models so that checks could be made that the points lie within the
tolerance zones. Manufacturers have also used scanners to scan factories in order to create 3D
models of those factories. Those models help them plan upgrade construction projects.
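The "soft gauge" idea above, checking that scanned points lie within a tolerance zone around nominal CAD geometry, can be sketched as follows. The spherical nominal surface is a simplifying assumption; real soft gauges check point clouds against full CAD models.

```python
import math

# Sketch of a soft-gauge check: flag scanned points whose deviation from
# the nominal geometry exceeds the tolerance zone. The nominal surface is
# simplified to a sphere of known radius centered at the origin.

def within_tolerance(points, center, nominal_radius, tolerance):
    """Return the points whose radial deviation exceeds the tolerance."""
    out_of_spec = []
    for p in points:
        r = math.dist(p, center)  # Euclidean distance (Python 3.8+)
        if abs(r - nominal_radius) > tolerance:
            out_of_spec.append(p)
    return out_of_spec

scanned = [(10.0, 0.0, 0.0), (0.0, 10.02, 0.0), (0.0, 0.0, 10.3)]
bad = within_tolerance(scanned, (0.0, 0.0, 0.0), nominal_radius=10.0, tolerance=0.05)
print(bad)  # only the point deviating by 0.3 exceeds the 0.05 zone
```

The same deviation test, run over millions of points against triangulated CAD surfaces, is what commercial inspection software performs.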
3D scanners are finding a consumer market by enabling users, who may not have access to CAD or
3D modeling software or who may not be proficient in their use, to more easily create a CAD
drawing of the item by starting with a file that replicates the original. Gaining proficiency in the CAD software tools normally used to create files for 3D printing is difficult for many people. Scanners with
capabilities that are well-suited to consumers and many enterprises are available from $3,000 to as
little as $600. Continued technological advancements, improved functionality and price decreases
in 3D scanners will mean consumers can justify a modest expenditure to try 3D image capture and
3D printing. With the technology becoming less expensive and relatively simple to use, consumers
and enterprises are purchasing more.
User Advice: Users must optimize the density of scanned points: dense enough to capture the detail needed on scanned parts, but not so dense that the point clouds become unwieldy. This is particularly the case for scans of large volumes, as in the case of scanning a factory.
Scanner, camera, and 2D and 3D printer manufacturers must continue research and development
work aimed at improving 3D scanner price, usability and performance. The 3D printer technology
providers, in particular, must ensure scanners enable consumers and enterprises to easily create
the files that can be used to print 3D output on their devices.
Educational institutions must use low-cost 3D scanners not only for engineering and architectural
courses as a complement to traditional design programs, but also in creative arts programs (for instance, to enable students to artistically modify items from nature). Manufacturing enterprises must explore use of 3D scanning technology in product design, rapid prototyping and reverse engineering. Whether in an enterprise or an educational institution, 3D scanners must be used in
conjunction with design and creative programs that employ 3D printers to produce physical output
from CAD software and other similar software.
Business Impact: Practical uses for 3D scanners will continue growing as their features improve
and prices decline. Sales will grow as their use becomes more widespread, driving down purchase
costs and enabling more enterprises and consumers to justify their purchase.
The commercial market for 3D scanning and printing applications will continue expanding into
architectural, education, engineering, geospatial, medical and short-run manufacturing. In the
"maker" and consumer markets, scanners must have a lower cost before they will enjoy widespread
acceptance for artistic endeavors, custom or vanity applications (such as "fabbing" [the
manufacture of one-off parts] and the modeling of children's toys, pets and gamers' avatars).
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
becoming more mature, and vendors have made significant investments in building the expertise,
resources and partnerships that can help companies accelerate their vehicle ICT launches.
Furthermore, vehicle manufacturers and device manufacturers must differentiate between core,
vehicle-centric telematics offerings that are embedded in a vehicle (most safety and security
applications) and personal telematics offerings (primarily information and entertainment services),
which consumers access by integrating portable devices with the vehicle.
To enable device-to-vehicle and service-to-vehicle integration concepts, vehicle manufacturers must collaborate with consumer electronics companies and with service and content providers on interfaces and connectivity solutions. The introduction of electric vehicles (EVs) will give consumer
telematics a boost, because seamless EV ownership experiences will greatly benefit from
connected data services (for example, finding the next charging station and informing drivers of the remaining range).
Automotive companies should consider their choices in growing the connected-vehicle ecosystem
by identifying best-of-breed technology providers, instead of a single-solution approach. Both
options have their benefits and disadvantages; however, with increasing in-house expertise for the connected vehicle, automotive companies can be more selective in their partner choices to better
balance innovation and cost objectives (for example, innovation in connected-vehicle offerings should reside with the automakers).
Business Impact: Consumer telematics provides an opportunity to differentiate product and brand
values (for example, infotainment access and human-machine interface experience) and to excel in
new or complementary customer experiences, to create new revenue sources (for example,
preferred listings for infotainment content), to collect vehicle-related quality and warranty
information via remote diagnostics, and to capture consumer insights.
Entering the Plateau
Speech Recognition
Analysis By: Adib Carl Ghubril
Definition: Speech recognition systems convert human speech into text or machine instructions.
Position and Adoption Speed Justification: Speech recognition has gained the momentum it
needs to move more rapidly toward mainstream adoption as vendors recognize its value in
enriching touch and in-air gesture interactions. Speech is a primary form of human interaction and is
now deemed crucial in enabling the notion of users doing what is "natural."
With the top cloud service providers — IBM, Microsoft, Google, Amazon, Apple and Samsung — all
mobilizing resources in speech recognition systems, the number of applications making use of
speech recognition is rising. Apple's purchase of Novauris signals a plan to improve the
responsiveness of Siri (Apple's speech recognition engine) by bringing some speech processing back from the cloud and onto the local mobile computing platform. Microsoft, Nuance and others
are also tackling dialects and tonal languages.
Dictation, browsing and menu navigation are becoming readily available across PC and mobile
platforms. In fact, vendors are now developing systems that recognize dialects in addition to
language. Indeed, pattern-matching algorithms have given way to stochastic models (for example,
hidden Markov models) that are now about to be replaced by a hierarchical approach of layered neural
networks called "deep neural networks" (DNNs), demonstrating the kind of performance
improvement that could bring speech recognition to the required productivity level.
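The stochastic approach mentioned above can be illustrated with the textbook Viterbi algorithm, the decoding step at the heart of HMM-based recognizers. The phoneme-like states and probability values below are toy assumptions for illustration, not trained model parameters.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence.

    Probabilities are plain dicts; this is the standard dynamic program
    that underlies HMM-based acoustic decoding: at each step, keep the
    best-scoring path ending in each state.
    """
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        v.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at time t.
            prob, prev = max(
                (v[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            v[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: v[-1][s])
    return path[best]
```

A DNN-based system replaces the hand-specified emission probabilities with scores produced by a trained network, but the decoding idea is the same.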
Better noise filtering also has allowed significant improvements in speech recognition in the cabin of the car, and this speech recognition technology is now available in midmarket vehicles.
User Advice: Speech recognition is still very susceptible to the system's immediate surroundings —
environmental noise and distance between the user and the microphone dramatically affect
performance. Furthermore, cloud-based systems hamper response time, affecting transcription
For mobile devices, focus initial applications on selecting from lists of predefined items, such as city
names, company names or musical artists. This is where speech recognition has the strongest
value-add by avoiding scrolling embedded lists while maintaining a high level of accuracy.
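Constrained list selection of this kind can be approximated with simple fuzzy matching over the transcription. This sketch uses Python's standard-library difflib; the 0.6 cutoff is an illustrative assumption, not a recommended setting.

```python
import difflib

def match_from_list(transcript, choices, cutoff=0.6):
    """Map a (possibly imperfect) transcription onto a predefined list.

    Constraining recognition output to known items -- city names,
    artists, contacts -- lets a similarity match absorb transcription
    errors. Returns the best candidate, or None if nothing is close.
    """
    lowered = [c.lower() for c in choices]
    hits = difflib.get_close_matches(transcript.lower(), lowered, n=1, cutoff=cutoff)
    if not hits:
        return None
    # Recover the original casing of the matched choice.
    return choices[lowered.index(hits[0])]
```

For example, a misheard "san fransisco" still resolves to the intended city, while an unrelated utterance is rejected rather than silently mapped to the nearest list entry.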
Business Impact: Speech recognition for telephony and contact center applications enables
enterprises to automate call center functions, such as travel reservations, order status checking, ticketing, stock trading, call routing, directory services, auto attendants and name dialing.
Additionally, it is used to enable workers to access and control communications systems, such as
telephony, voice mail, email and calendaring applications, using their voices. Mobile workers with
hands-busy applications, such as warehousing, can also benefit from speech data entry.
For some users, speech input can provide faster text entry for office, medical and legal dictation,
particularly in applications in which speech shortcuts can be used to insert commonly repeated text
segments (for example, standard contract clauses).
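A speech-shortcut expansion of this sort amounts to phrase substitution over the transcribed text. A minimal sketch follows; the trigger phrase and clause text are hypothetical examples.

```python
import re

def expand_shortcuts(dictated, shortcuts):
    """Replace spoken shortcut phrases with their full boilerplate text.

    `shortcuts` maps trigger phrases (e.g. "insert confidentiality
    clause") to the text they stand for. Matching is case-insensitive;
    a lambda replacement keeps backslashes in the expansion literal.
    """
    result = dictated
    for trigger, expansion in shortcuts.items():
        result = re.sub(
            re.escape(trigger),
            lambda _match: expansion,
            result,
            flags=re.IGNORECASE,
        )
    return result
```

In a dictation workflow, the shortcut table would be maintained per user or per practice area, so a few spoken words expand into a full standard contract clause.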
For mobile devices, applications include name dialing, controlling personal productivity tools,
accessing content (such as MP3 files) and using voice-mail-to-text services. Finally, carmakers supporting the control of infotainment and telemetry systems using speech recognition would be
Hype Cycle Phases, Benefit Ratings and Maturity Levels
Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2014)
Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2014)