Oppenheimer & Co. Inc. does and seeks to do business with companies covered in its research reports. As a result, investors should be aware that the firm may have a conflict of interest that could affect the objectivity of this report. Investors should consider this report as only a single factor in making their investment decision. See the "Important Disclosures and Certifications" section at the end of this report for important disclosures, including potential conflicts of interest. See the "Price Target Calculation" and "Key Risks to Price Target" sections at the end of this report, where applicable.

March 30, 2012

TECHNOLOGY/SEMICONDUCTORS & COMPONENTS

Rick Schafer, 720-554-1119, [email protected]
Shawn Simmons, 212-667-8387, [email protected]
Jason Rechel, 312-360-5685, [email protected]

Cloudy With A Chance Of ARM
What the Microserver Market Means for Semiconductor Vendors

SUMMARY

The world is going mobile as an expanding base of both consumer and enterprise users connect from an increasing number of devices. Remotely accessing localized files is no longer an acceptable solution, and next-generation data centers are being tasked with supporting the migration to the cloud. Further fueling this migration is an ongoing shift from pure compute to data access—moving from heavy computational workloads to millions of relatively smaller workloads. Servers must adapt. x86-based processors have long held a server monopoly, but this changing workload dynamic, compounded by the need for greater efficiencies, is opening the door for alternative processor architectures like ARM. While any material shake-up in the server CPU landscape remains unlikely before 2014, investors should be prepared for change. In this paper, we examine what's behind microserver demand, define its advantages and growth prospects, and identify which semiconductor vendors are poised to benefit.

KEY POINTS

■ A server workload identifies incoming work based on a set of user-defined connection attributes. Where the workload has historically sought to maximize how quickly data might be computed, it now seeks to maximize how quickly data can be accessed. It is a transition from power to speed.

■ The new workload dynamic is best exemplified by Web 2.0 companies, where small, high-volume transactions drive business. These companies are beginning to design and build internal data centers and must quickly and efficiently scale capacity. We forecast Web 2.0 data center spending to grow at a 23% CAGR from 2011 to 2016, increasing from $6.3B to $17.8B.

■ A microserver is inherently less powerful than a traditional server and seeks to maximize operational and space efficiency. Today CPUs account for 1/3 of server/system BoM but 2/3 of power usage. Microservers are able to handle "new" workloads with a less powerful CPU. More tantalizing to operators, our analysis demonstrates a 60-70% reduction in the cost of ownership.

■ We believe the microserver market, driven by the move to the cloud, will grow from <1% of the x86 server market today to 21% in 2016. Microserver CPU TAM will grow at an impressive 95% CAGR, reaching $4.5B over the same period. Microserver growth will also expand overall server CPU TAM by $1.5B in 2016 as CPU BoM jumps from 33% today to roughly 50% in 2016.

■ As the current market share leader, INTC stands to lose the most from the growth of microservers, though Atom and low-power Xeon will undoubtedly capture share. ARM vendors AMCC, NVDA and Calxeda, Tilera with its own architecture, and AMD/SeaMicro are today's new breed and sit poised to benefit.

EQUITY RESEARCH
INDUSTRY UPDATE

Oppenheimer & Co. Inc. 85 Broad Street, New York, NY 10004 Tel: 800-221-5588 Fax: 212-667-8229
Cloudy With a Chance of ARM
The world is going mobile; a rapidly expanding base of both consumer and enterprise
users is increasingly accessing data and applications from a growing number of devices
across an expanding number of access points. Remotely accessing localized files or
applications is no longer an acceptable solution, and next-generation data centers are
being tasked with supporting the transformational migration to the cloud. Compounding
the migration is an ongoing shift from single-thread, heavy workloads to millions of
relatively smaller computational workloads. Servers must adapt, and where x86 has long
held a foothold in the semiconductor server market, the changing workload dynamic and
the rising importance of operational and space efficiency have begun to pave the way for
alternative processor and system architectures. We believe a fundamental shift in the
server market is still in the top of the first inning. In this paper, we seek to examine the
growth and drivers of the microserver market, the advantages of more efficient
architectures and which semiconductor vendors may be poised to benefit.
The Intersection of the Cloud and the Web
It is widely known that the exponential growth of mobile devices, applications and data is
straining today’s network infrastructure. You may be reading on your desktop, notebook,
tablet, smartphone, or even on your connected-TV. Said devices may be plugged in via a
traditional home or business ethernet connection, connected to a public or private WiFi
network, MiFi network, mobile hotspot or to a 3G/4G wireless network. That these varying
devices and connections can access the same data and the same applications presents a
growing problem of complexity. And an expanding number of mobile devices (access
points) and a booming volume of applications and data only compound the problem of
complexity that cloud computing attempts to ease.
Exhibit 1 – Growing Complexity of Connected Devices
Source: Cisco and Oppenheimer & Co.
Exhibit 1 portends a five-fold increase in the total possible combinations (complexity) of
access points and applications between 2010 and 2015. Consider that, according to The
Cisco Global Cloud Index, the percentage of global internet users using more than five
network connected devices will grow from 36% in 2010 to 69% by 2015; and those using
more than ten devices will quadruple from 6% in 2010 to 24% in 2015. Separately, the
Cisco Visual Networking Index forecasts that mobile cloud traffic will grow 28-fold from
2011-2016 and that cloud applications will account for 71% of global mobile data traffic in
2016. Simply put, the data and applications that we have all come to depend upon can no
longer be stored on a localized server and remotely accessed; the number and variety of
connected devices will demand that it all happens in the cloud.
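The "complexity" metric behind Exhibit 1 can be thought of as simple multiplication of device, connection, and application counts. A minimal sketch, with all input figures assumed for illustration (they are not the Cisco data behind the exhibit):

```python
# "Complexity" as the number of possible (device, connection type, application)
# combinations a data center must serve. All inputs below are illustrative
# assumptions, not figures from the Cisco indices cited in the text.

def combinations(devices: int, connection_types: int, applications: int) -> int:
    """Total possible access-point/application pairings."""
    return devices * connection_types * applications

c_2010 = combinations(devices=3, connection_types=4, applications=10)   # 120
c_2015 = combinations(devices=5, connection_types=6, applications=20)   # 600

print(c_2015 / c_2010)  # a five-fold increase, as Exhibit 1 portends
```

Because the factors multiply, modest growth in each dimension compounds into the five-fold jump in total combinations.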
The problem of complexity, however, extends beyond simply moving enterprise
applications onto the cloud. The movement to the cloud will likely drive demand beyond
the traditional data center. Where the web, with its millions of apps, maps, downloads and
streams per day, intersects with the cloud is where the workload changes.
Information Technology / TECHNOLOGY
The Workload Has Changed…
As data, services and applications move toward the cloud, and as the world goes mobile,
Web 2.0 companies are increasingly driving traffic. A server workload identifies incoming
work based on a set of user-defined connection attributes. These attributes are changing, and the
type of server workload is transforming with the relative growth of these applications
because they inherently ask the server to perform different tasks. Large, single-thread
computational-heavy enterprise workloads of the past, while still important and still the
vast majority of workloads, have given way to millions of tiny, fractional, workloads. A
Google search, a social media status update or a high-frequency trade is each a
dramatically different task for the data center than a traditional enterprise-class workload.
Where the workload previously sought to maximize how fast a series of data could be
computed, the workload now seeks to maximize how quickly data can be accessed. It’s a
move from power to speed, and with always-on, always-connected computing, the demand
for access to data is growing at the rate of the cloud.
…And What It Means for the Data Center
In plain English, the computational horsepower needed for a traditional enterprise
workload is akin to the necessity for an 18-wheeler to transport heavy military artillery from
New York to Los Angeles. The process will be time-consuming and expensive in the form
of energy and transportation costs. If one only needs to transport a case of 24 pint-sized
individual beers to the neighborhood barbeque, this same 18-wheeler is stodgy, difficult to
drive and highly energy inefficient. Why drive the 18-wheeler when the hybrid electric
sedan would suffice? Better yet, the fully electric sedan.
This is not a question of, nor a problem that can be easily solved by, virtualization.
Virtualization addresses chronic server underutilization by consolidating workloads onto
individual virtual machines that are hosted on a single physical server. Virtualization
reduces the size, complexity and administrative costs of the data center, but doesn’t
fundamentally address how the CPU handles workloads. Even in a virtualized and fully
utilized environment, the shoe still doesn’t fit for higher performing CPUs. For heavy
workloads, virtualized servers need high horsepower. But for hundreds or millions of
fractional workloads even in a fully utilized data center, a lower power CPU can and
should handle the task. As the most rudimentary element of the data center, the CPU
needs to become more efficient. Where the workload constraints of the past kept it locked
into the highest-performing form, the web and cloud workloads of today permit a move
down the power curve.
As personal applications and services hosted in the cloud converge with Web 2.0 data
centers, fractional workloads are becoming an increasingly large percentage of data
center demand. Because of the rapid growth of the mobile computing era, the data center
must adapt. There must be varying types of servers to meet the demand for varying types
of workloads, and more efficient servers must be able to scale out capacity alongside
today’s infrastructure. It’s not a question of upping compute power, but of scaling out
compute power. And as the market begins to demand low-horsepower, ultra-low power
servers that can accomplish exactly that, the door is opening for alternative architectures
and designs to challenge the traditional high-horsepower x86 incumbents.
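The scale-out argument above can be made numerical with a back-of-envelope sketch. All core counts, request rates, and wattages below are illustrative assumptions, not measured figures from any vendor:

```python
# For millions of small, independent ("fractional") workloads, aggregate
# requests per watt matters more than single-thread horsepower. All core
# counts, request rates, and wattages are illustrative assumptions.

def requests_per_watt(cores: int, requests_per_core: float, watts: float) -> float:
    """Aggregate request throughput per watt for an embarrassingly
    parallel workload (no inter-request dependencies assumed)."""
    return cores * requests_per_core / watts

# A high-horsepower x86-class part vs. a low-power SoC-class part.
brawny = requests_per_watt(cores=8, requests_per_core=2000.0, watts=130.0)
wimpy = requests_per_watt(cores=4, requests_per_core=800.0, watts=5.0)

print(wimpy / brawny)  # the low-power part wins by ~5x on this metric
```

Note that the comparison flips for a single heavy, serial workload, which is exactly why the report argues for varying server types rather than wholesale replacement.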
If validation of the changing workload is required, look no further than AMD’s late-
February acquisition of SeaMicro. SeaMicro was to be termed the “architecturally
ambiguous wild card” in this paper and is, in our view, a landscape-changing acquisition
by AMD. As the first mover at the system level toward microservers, SeaMicro had
developed a proprietary architecture which the company claims reduces total system
power by 75% and can be processor agnostic (e.g., x86, ARM, etc.). By acquiring the
leading low-power system vendor, which can reduce total power to a degree comparable to what a single
ARM SoC could achieve, AMD/SeaMicro has positioned itself well against both Atom and
ARM-based competitors. Let’s now look at how and why the data center will change to
meet the evolving workload demands and what this means for silicon vendors.
Defining the Term: Microserver
Intel officially coined the term microserver in 2009 before such a market really existed, and
though the market is still nascent today, Intel’s definition still stands. As we dissect this
market, it’s important to understand what exactly a microserver is. By Intel’s definition (and
by ours) a microserver is:
- Single socket;
- Lightweight, low power and low cost; and
- Part of a shared infrastructure environment whereby many small servers are
packaged into a larger ecosystem.
We will use the terms microserver, “many-core,” and “ultra-low power server”
interchangeably throughout this report, though there is a nuanced difference between
“many-core” and a microserver. The concept is the same and has always hinged upon
maximizing space and operational efficiency while minimizing capital intensity. However,
within the microserver market, not all CPUs are created equal, and not all will have the
ability to put many cores onto one chip in a cache-coherent manner. Many-core servers
(loosely defined as processors with 64 or more cores) can serve to effectively reduce
software porting costs, something that is likely to be of primary concern when first
evaluating a microserver based on alternative architecture.
What has kept the market nascent to this point, namely software, architectural inhibitors
and a lack of motivation to innovate from the incumbents, has begun to change. We
believe the microserver market is still in the top of the first inning of growth and is poised to
become a meaningful portion of the market over the next 3-5 years.
The Economics: Money and Sense
As the workload changes, the demand pull is happening today. But throughout the supply
chain, one message is clear: systems vendors and end users will not make the switch to
an alternative architecture unless the switch presents a value that is orders of magnitude better.
Porting software on top of existing infrastructure is no small task and the inherent value of
a more efficient CPU must be worth the switching costs, man-hours, and gamble on a new
architecture—we think the message is clear. The supply side of the equation has not
made the economics work…until now.
The analysis in Exhibit 2 examines the Total Cost of Ownership (TCO) of a traditional
server today and a microserver. We have simplified things into round numbers and used a
standard server today and a custom-built microserver based on today’s announced specs.
It is important to note that one many-core processor can replace a standard x86 cluster
with many smaller and more efficient cores, thereby maximizing efficiency at the CPU
level. Alternatively, several highly efficient single-socket servers can replace a single
multi-socket server.
We have done our best to build a microserver that would have enough performance to
scale-out a typical Web 2.0 workload. It’s not a question of whether the horsepower of a
microserver(s) matches the horsepower of a traditional server. It doesn’t, and it won’t, and
that’s the point; it doesn’t need to be as powerful. Only just powerful enough to scale out
simple workloads. And the TCO analysis is meant to compare the merits of a microserver
relative to a traditional server, not any one particular architecture against another. We
used four years as our useful life. Because we’ve held acquisition costs at a constant
level, the argument therefore shouldn’t be made that depreciation would alter the cash
flows of our analysis.
Exhibit 2 – Cost of Ownership Analysis
Source: Tilera, Oppenheimer & Co.
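As a companion to Exhibit 2, the shape of the four-year TCO comparison can be sketched as follows. Every figure below (prices, wattages, electricity cost, PUE, porting cost) is an illustrative assumption chosen to reproduce the general shape of the analysis, not the exhibit's actual inputs:

```python
# Four-year TCO sketch: acquisition + power (with cooling overhead via an
# assumed PUE) + a one-time software porting cost. All figures are
# illustrative assumptions, not the inputs behind Exhibit 2.

HOURS = 4 * 365 * 24          # four-year useful life, per the text
KWH_COST = 0.10               # assumed $/kWh
PUE = 1.8                     # assumed power usage effectiveness (cooling etc.)

def tco(acquisition: float, system_watts: float, porting: float = 0.0) -> float:
    """Total cost of ownership over the assumed useful life."""
    power_cost = system_watts / 1000.0 * HOURS * KWH_COST * PUE
    return acquisition + power_cost + porting

# A 10-node traditional x86 cluster vs. one microserver system assumed to
# scale out the same Web 2.0-style workload (equivalence is an assumption).
traditional = tco(acquisition=10 * 4000.0, system_watts=10 * 350.0)
micro = tco(acquisition=16000.0, system_watts=500.0, porting=5000.0)

savings = 1.0 - micro / traditional
print(f"{savings:.0%}")  # lands in the neighborhood of the ~63% the text cites
```

The point of the sketch is structural: even at a higher acquisition price and with a porting charge, the power line dominates over four years, which is where the microserver wins.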
One factor that we did not include in our initial analysis: software switching costs. Software
porting costs can take substantial man-hours and we believe the initial costs can be
similar to the total amount spent on new servers. We will later see that Web 2.0 data
centers are where microservers will play. Where traditional enterprises may be hesitant to
begin incrementally adopting microservers because of very high software porting costs,
this is not the case with Web 2.0 vendors. These companies, in many cases, use either
internally developed software or public open-source software, meaning that there is a
much smaller barrier to entry, and it is much easier for new architectures to deliver
dramatic cost savings. The lower the software costs, the greater the opportunity for new
architectures to gain initial traction. All this said, software porting costs are highly variable,
and may range from zero in some cases to millions of dollars in others. Software will differ
on a per-case basis, but we have standardized an average cost for this analysis.
As the volume (or in this case total value purchased) of microservers grows, these porting
costs (large or small) will not increase linearly and we believe the initial total cost of a
similarly-equipped microserver would decline from ~2x a standard server to closer to
~1.5x over time. But as Exhibit 3 below demonstrates, even including substantial software
switching costs, the microserver TCO is 63% less than a traditional server.
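The amortization point can be made concrete with a small sketch; the prices and one-time porting cost below are assumptions chosen only to reproduce the ~2x-to-~1.5x shape described above:

```python
# Software porting is largely a one-time cost, so the effective per-server
# cost multiple of microservers vs. standard servers falls with volume.
# All dollar figures are illustrative assumptions.

def effective_multiple(n_servers: int,
                       standard_price: float = 4000.0,
                       micro_price: float = 4000.0,
                       porting_cost: float = 40000.0) -> float:
    """Initial cost of a microserver fleet (hardware + one-time porting)
    relative to an equivalently sized standard-server fleet."""
    micro_total = n_servers * micro_price + porting_cost   # porting paid once
    standard_total = n_servers * standard_price
    return micro_total / standard_total

print(effective_multiple(10))   # 2.0  (porting dominates a small purchase)
print(effective_multiple(20))   # 1.5  (porting amortized over more units)
```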
Exhibit legend: Green = Highly Competitive; Yellow = Moderately Competitive; Red = Not Competitive
New Kids on the Block
Semis
Calxeda – Calxeda was founded in 2008 and two years later received a hefty investment
from both ARM and a mix of VC firms. With a vested interest from ARM, Calxeda has thus
far been the pioneer in the ARM-based server market and, we believe, will enter volume
production with multiple OEMs in 2H12. HP’s “Project Moonshot” is just the beginning, in
our opinion, and we see the company remaining successful as it makes the transition from
32-bit to 64-bit ARM. While the market is somewhat resistant to adopting 32-bit ARM
solutions today, as the first mover in the ARM camp, we believe that Calxeda will be
successful in leveraging its 32-bit position into a 64-bit CPU.
Source: Calxeda
The Calxeda SoC, termed EnergyCore, is designed to dramatically cut space and
power requirements in hyper-scale computing environments. EnergyCore includes the
processor complex, with multiple quad-core ARM processors, L2 cache and integrated
memory and I/O controllers. The on-chip fabric switch and management engine are each
optimized for many-core server clusters. That the company currently has success with 32-
bit is a testament to its design and integration capabilities, and a testament to the market’s
desire for alternative architecture.
EnergyCore is capable of cutting total system power and space by 90% compared with
today’s systems and can scale to thousands of cores. The ECX-1000 Series is capable of
driving total system power below 5W. This is not an apples-to-apples comparison with the
sub-10W Atom, a chip-level figure that is not in the same ballpark once total system power
is taken into account. We believe Calxeda will continue to rack up design wins and will be a
niche player in the microserver market long term. The company has publicly stated its
intent and desire to continue to succeed as a standalone entity.
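The apples-to-oranges caveat above can be illustrated in a few lines. The non-CPU overhead assumed for an Atom-based system is a made-up value for illustration, since only the chip-level figure is quoted:

```python
# Chip-level TDP vs. system-level power. Calxeda's <5W figure is quoted for
# the whole system; the Atom figure is chip-only. The non-CPU overhead for
# the Atom-based system below is an assumed value for illustration.

atom_chip_w = 10.0             # "sub-10W" chip-level figure from the text
atom_system_overhead_w = 20.0  # assumed memory/I/O/board/fabric power
atom_system_w = atom_chip_w + atom_system_overhead_w

calxeda_system_w = 5.0         # ECX-1000 total system power, per the text

# The system-level gap is far wider than the naive 10W-vs-5W chip comparison.
print(atom_system_w / calxeda_system_w)
</```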
Tilera – Founded in 2004, Tilera is a small private company specializing in multicore
embedded processors. Tilera is benefiting from the explosion in data traffic, which is
creating bottlenecks in carrier networks. The company made its first inroads into the
security and networking market, in addition to the multimedia market. In 1Q11, Tilera
started shipping its TILE64Pro into Quanta’s S2Q server, which we believe has been
adopted by multiple top 20 Web 2.0 companies. With its first-mover advantage in 64-bit
processors outside of the x86 incumbents, we believe the company remains well
positioned to continue to take share in the cloud computing server market in the medium
term with its latest generation of multicore processors (TILE-GX). The key differentiator
and value proposition of Tilera is its proprietary architecture and own iMesh on-chip
network interconnect.
Tilera unveiled its first product in 2007, the TILE64. At a total power dissipation of 20W,
the 64-core processor is able to target an array of high-speed embedded applications. A
year later in 2008, the company released its second generation of multicore processors,
the TILE64Pro. Within this generation of processors, the company released both 36- and
64-core versions. Management claimed this line of processors achieves a 1.5x to 2.5x
performance boost over the previous generation, primarily through more efficient caching.
In October 2009, Tilera revealed its third generation of multicore processors, the TILE-GX.
The new product line includes a range of multicore processors, spanning from 16 to 100
cores. Manufactured on TSMC's 40nm process technology, management claims power
consumption will range from 10W to 55W, depending on the number of cores.
Performance for this line of processors achieves roughly a 1.5x to 2.1x boost over the
previous generation. In addition to its core networking and security end-markets, we
believe the new line of multicore processors expands Tilera's addressable market into
cloud computing servers. We believe Tilera has tallied 20+ design wins and is engaged
with over 80 system vendors for the GX. The company is expected to ramp into pre-
production revenues with its 16- and 36-core versions this quarter, primarily in networking
and multimedia applications.
Source: Tilera
We anticipate server-based revenues to come in 2H12. The company’s primary
advantage over ARM-based competitors is its many-core technology and the ability to
scale many cores onto a single chip with cache coherency. This takes the wimpy core
debate largely out of the equation. We see Tilera playing primarily in existing data centers
with open-source software, where the company can evade software compatibility issues.
The trend toward ODM production (the first of which is based on Tilera silicon) will also
positively impact its market opportunity.
System Vendors
SeaMicro (Now a part of AMD) – SeaMicro was founded on the premise that one-size no
longer fits all in the server market. At the system level, SeaMicro set out to become the
pioneer of microservers—and succeeded. In late February, the company agreed to be
purchased by AMD, a move we have applauded.
Source: SeaMicro
SeaMicro had four generations of servers built around 32-bit Atom, two around 64-bit
Atom and a just-announced Xeon partnership (along with Samsung). SeaMicro reduces
total system power by 75% by eliminating "unneeded" components and consolidating the
rest into a custom ASIC. This custom ASIC is then thrown onto a credit-card sized
motherboard with the CPU and DRAM and linked with hundreds of other motherboards by
an ultra-efficient fabric. Where the CPU is generally two-thirds of the total power draw
within a system, SeaMicro has effectively reduced total system power by 75% (or greater
than the CPU draw) by eliminating the unnecessary components and making those
remaining inherently more efficient. Optimizing utilization is key to this part of the equation.
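The claim that a 75% system-level cut exceeds the CPU's entire share of the power budget can be checked with simple arithmetic (baseline normalized to an assumed 100 W for illustration):

```python
# If CPUs draw ~2/3 of system power (per the text), then even removing the
# CPUs entirely would leave ~1/3 of the baseline. A 75% total reduction
# therefore implies the non-CPU components were also made more efficient.

baseline_w = 100.0                      # assumed baseline, for illustration
cpu_share = 2.0 / 3.0                   # CPUs: ~2/3 of power, per the text
non_cpu_floor_w = baseline_w * (1.0 - cpu_share)   # ~33.3 W with no CPUs at all

seamicro_w = baseline_w * (1.0 - 0.75)  # 25.0 W after the claimed 75% cut

print(seamicro_w < non_cpu_floor_w)  # True: the cut exceeds the CPUs' full draw
```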
By being architecturally agnostic, SeaMicro could have effectively succeeded in the
microserver market without tying itself to a single CPU vendor. And by reducing total
system power by 75% even using x86 cores, the company is highly competitive on the
power and efficiency front with the expected specs from ARM SoCs. Further, and perhaps
even its greatest advantage, is that SeaMicro servers are “plug-and-play,” meaning they
need no changes to software operating systems or applications. Now in bed with AMD, we
believe AMD/SeaMicro will continue to take share of the microserver system market in the
near term.
Huawei – We believe Huawei is ramping an engineering design team and is working on a
microserver via its very secretive "Project Borg." A traditional Chinese
networking giant, Huawei has begun to expand its footprint, notably into smartphones.
Without any further knowledge of the company’s server initiatives, we believe Huawei
could immediately and successfully leverage its existing customer/channel relationships
and balance sheet to carve out a chunk of the system-level microserver market.
The Coming ARM Challengers
There are many licensees of 32-bit ARM today, but Calxeda is the only vendor that plays
in the server market. The advantages of 64-bit arrive at both the hardware and software
level. 64-bit is the next step in performance that can pull ARM vendors within the requisite
performance standards to make real economic sense. The equation becomes
substantially more economical because it is far easier to port software from 64-bit to
64-bit than from 64-bit to 32-bit. As we've said, ARM can begin to establish a meaningful
presence in the server market only with the arrival of 64-bit CPUs.
As of the date of publication, there are just three officially announced licensees of ARM 64-bit
technology: AppliedMicro, Microsoft and NVIDIA. We believe that AAPL and QCOM have
also licensed the technology and that several additional ARM vendors could announce 64-
bit licenses in the second-half of 2012 with the intention of developing a CPU for the
server market. These licensees most notably include Marvell and Samsung.
AppliedMicro (AMCC) – AMCC is the only company to date that has demonstrated a 64-
bit ARM CPU (as an FPGA). The company initially started working with ARM in 2009, and
as a result of the collaboration, ARM announced ARMv8, the first 64-bit ARM architecture,
in October 2011. In conjunction with the announcement, AMCC launched (and
demonstrated) its 64-bit ARM CPU, codenamed X-Gene. X-Gene is a 3GHz CPU
designed to be scalable up to 128 cores. Utilizing its past expertise in high-speed
connectivity, AMCC is integrating PCIe, 10/40/100G I/O, and storage on board the SoC.
The company started sampling an FPGA version in 1H12 with several customers and is
expected to have first standard CMOS silicon (40nm at TSMC) by early 2013.
We believe AMCC is already working on a move to 28nm. AMCC has spent roughly $50M
developing X-Gene to date and could spend an incremental $100M before first revenue
from X-Gene in early 2014. Assuming a timely release, we expect AMCC to be the first
company to ramp into production with a 64-bit ARM server CPU starting in early 2014.
With an announced 6-12 month lead over other would-be 64-bit ARM competitors, we
believe AMCC could capture the lion’s share of initial design wins. How this share trends
over time depends upon the yet-to-be-announced products from larger competitors that
remain the wild-cards of the ARM camp.
Marvell Technology (MRVL) – Marvell’s industry leading ARMADA family of application
processors and cellular SoCs have given it an established presence in the wireless
communications market. And as the market share leader in storage-based controllers,
MRVL is well positioned to leverage its world-class design expertise into a 64-bit CPU for
the microserver market. And with the balance sheet of a much-larger company, MRVL is
not a hard sell in the OEM/qualification process. We expect MRVL to announce its 64-bit
license from ARM later this year and to unveil plans for its server CPU within the next 12-
18 months.
NVIDIA (NVDA) – Nvidia’s Tegra family of application processors has built an established
presence in the mobile computing market, including smartphones and tablets. Alongside
TXN and QCOM, we believe NVDA will play in the first Windows-on-ARM notebooks later
this year. As one of the few companies with an announced 64-bit ARM license, we believe
NVDA could look to utilize its market-leading graphics capability to develop a CPU
targeted at the microserver and HPC markets.
NVDA announced Project Denver at CES in 2011. Project Denver is an initiative by NVDA
to fully integrate an ARM-based CPU and GPU on the same chip. While the company has
not given a timeline on product launches, we believe NVDA could announce products
within the next 12 months. By leveraging its pristine balance sheet, design expertise and
existing server OEM relationships, we believe NVDA could emerge as the initial leader in
the 64-bit ARM camp alongside AMCC. Longer term, we believe NVDA could pose one
of the larger threats to INTC’s dominant market share in the server space.
Qualcomm (QCOM) – Qualcomm is the dominant global supplier of mobile chipsets
featured in today’s smartphones and tablets. Alongside NVDA and TXN, QCOM will play
in the first Windows-on-ARM notebooks later this year. With a 64-bit ARM license (which
we believe it unofficially has today), QCOM would seek to widen its advantage in the
mobile market and further penetrate traditional notebooks. Further, we believe the
company could develop a CPU for the microserver market and seek to enable end-to-end
solutions to capitalize on the mobile computing revolution. By leveraging its balance sheet,
current relationship with ARM, design expertise and dominant presence in the mobile
handset market, we believe that QCOM could emerge as a threat in the low-power data
center market.
Samsung – Tech bellwether Samsung is in no sense of the word a “new kid on the
block,” but it would be a new entrant to the server CPU market. We believe Samsung is
developing a new, ultra-low power CPU that would play in microservers and is on a short
list for an ARM 64-bit license. As an 800 lb. gorilla in countless markets across the
electronics food chain, we believe Samsung could immediately flex its muscle in the
microserver market. Above all else, Samsung has one very clear advantage: DRAM. The
SeaMicro system, for example, consolidates all server components into CPU, internal
ASIC and DRAM. An ARM SoC would also sit directly alongside DRAM within a server. As
the overwhelming market share leader in the DRAM market, we believe Samsung could
muscle its way into the CPU market by even further easing memory bandwidth
constraints.
Stock prices of other companies mentioned in this report (as of 3/28/12):
ARM Holdings Plc (ARMH-NASDAQ, $28.42, Not Rated)
Dell Inc. (DELL-NASDAQ, $16.52, Not Rated)
LinkedIn Corp. (LNKD-NASDAQ, $102.08, Not Rated)
Groupon (GRPN-NASDAQ, $17.80, Not Rated)
Hewlett Packard Co. (HPQ-NYSE, $23.58, Not Rated)
Hitachi (HIT-NYSE, $64.67, Not Rated)
Pandora Media Inc. (P-NYSE, $10.17, Not Rated)
Super Micro Computer (SMCI-NASDAQ, $17.30, Not Rated)
Quanta Computer (2382.TW, 72.30 TWD, Not Rated)
Wistron (3231.TW, 44.50 TWD, Not Rated)
Samsung Electronics Co. (005930.KS, 1,280,000.00 KRW, Not Rated)
Zynga Inc. (ZNGA-NASDAQ, $12.66, Not Rated)
Important Disclosures and Certifications
Analyst Certification - The author certifies that this research report accurately states his/her personal views about the
subject securities, which are reflected in the ratings as well as in the substance of this report. The author certifies that no
part of his/her compensation was, is, or will be directly or indirectly related to the specific recommendations or views
contained in this research report.
Potential Conflicts of Interest:
Equity research analysts employed by Oppenheimer & Co. Inc. are compensated from revenues generated by the firm
including the Oppenheimer & Co. Inc. Investment Banking Department. Research analysts do not receive compensation
based upon revenues from specific investment banking transactions. Oppenheimer & Co. Inc. generally prohibits any
research analyst and any member of his or her household from executing trades in the securities of a company that such
research analyst covers. Additionally, Oppenheimer & Co. Inc. generally prohibits any research analyst from serving as an
officer, director or advisory board member of a company that such analyst covers. In addition to 1% ownership positions in
covered companies that are required to be specifically disclosed in this report, Oppenheimer & Co. Inc. may have a long
position of less than 1% or a short position or deal as principal in the securities discussed herein, related securities or in
options, futures or other derivative instruments based thereon. Recipients of this report are advised that any or all of the
foregoing arrangements, as well as more specific disclosures set forth below, may at times give rise to potential conflicts of
interest.
Important Disclosure Footnotes for Companies Mentioned in this Report that Are Covered by Oppenheimer & Co. Inc:
Other Disclosures
This report is issued and approved for distribution by Oppenheimer & Co. Inc. Oppenheimer & Co. Inc. transacts business on all principal
exchanges and is a member of SIPC. This report is provided, for informational purposes only, to institutional and retail investor clients of
Oppenheimer & Co. Inc. and does not constitute an offer or solicitation to buy or sell any securities discussed herein in any jurisdiction
where such offer or solicitation would be prohibited. The securities mentioned in this report may not be suitable for all types of investors.
This report does not take into account the investment objectives, financial situation or specific needs of any particular client of
Oppenheimer & Co. Inc. Recipients should consider this report as only a single factor in making an investment decision and should not
rely solely on investment recommendations contained herein, if any, as a substitution for the exercise of independent judgment of the
merits and risks of investments. The analyst writing the report is not a person or company with actual, implied or apparent authority to
act on behalf of any issuer mentioned in the report. Before making an investment decision with respect to any security recommended in
this report, the recipient should consider whether such recommendation is appropriate given the recipient's particular investment needs,
objectives and financial circumstances. We recommend that investors independently evaluate particular investments and strategies, and
encourage investors to seek the advice of a financial advisor. Oppenheimer & Co. Inc. will not treat non-client recipients as its clients
solely by virtue of their receiving this report. Past performance is not a guarantee of future results, and no representation or warranty,
express or implied, is made regarding future performance of any security mentioned in this report. The price of the securities mentioned
in this report and the income they produce may fluctuate and/or be adversely affected by exchange rates, and investors may realize
losses on investments in such securities, including the loss of investment principal. Oppenheimer & Co. Inc. accepts no liability for any
loss arising from the use of information contained in this report, except to the extent that liability may arise under specific statutes or
regulations applicable to Oppenheimer & Co. Inc. All information, opinions and statistical data contained in this report were obtained or
derived from public sources believed to be reliable, but Oppenheimer & Co. Inc. does not represent that any such information, opinion or
statistical data is accurate or complete (with the exception of information contained in the Important Disclosures section of this report
provided by Oppenheimer & Co. Inc. or individual research analysts), and they should not be relied upon as such. All estimates, opinions
and recommendations expressed herein constitute judgments as of the date of this report and are subject to change without
notice. Nothing in this report constitutes legal, accounting or tax advice. Since the levels and bases of taxation can change, any reference
in this report to the impact of taxation should not be construed as offering tax advice on the tax consequences of investments. As with
any investment having potential tax implications, clients should consult with their own independent tax adviser. This report may provide
addresses of, or contain hyperlinks to, Internet web sites. Oppenheimer & Co. Inc. has not reviewed the linked Internet web site of any
third party and takes no responsibility for the contents thereof. Each such address or hyperlink is provided solely for the recipient's
convenience and information, and the content of linked third party web sites is not in any way incorporated into this document.
Recipients who choose to access such third-party web sites or follow such hyperlinks do so at their own risk.
This report or any portion hereof may not be reprinted, sold, or redistributed without the written consent of Oppenheimer & Co. Inc.