CHAPTER 1 INTRODUCTION TO MICROCOMPUTERS

This paper surveys the current state of the art in microcomputer system design. It begins with the introduction, evolution, and trends of microcomputer design in order to build a proper overall understanding, and then considers the system factors of architectural design, process technology, increasing system complexity, and pricing. A microcomputer is a digital computer whose central
processing unit consists of a microprocessor, a single
semiconductor integrated circuit chip. Microcomputers are the
driving technology behind the growth of personal computers and
workstations. The capabilities of today's microprocessors in
combination with reduced power consumption have created a new
category of microcomputers: hand-held devices. Some of these
devices are actually general-purpose microcomputers: They have a
liquid-crystal-display (LCD) screen and use an operating system
that runs several general-purpose applications. Many others serve a
fixed purpose, such as telephones that provide a display for
receiving text-based pager messages and automobile navigation
systems that use satellite-positioning signals to plot the
vehicle's position. The microprocessor acts as the microcomputer's
central processing unit (CPU), performing all the operations
necessary to execute a program. The various subsystems are
controlled by the central processing unit over system buses; some designs combine the memory bus and the input/output (I/O) bus into a single system bus. The
graphics subsystem may contain optional graphics acceleration
hardware. A memory subsystem uses semiconductor random-access
memory (RAM) for the temporary storage of data or programs. The
memory subsystem may also have a small secondary memory cache that
improves the system's performance by storing frequently used data
objects or sections of program code in special high-speed RAM. The
graphics subsystem consists of hardware that displays information
on a color monitor or LCD screen: a graphics memory buffer stores
the images shown on the screen, digital-to-analog converters (DACs)
generate the signals to create an image on an analog monitor, and
possibly special hardware accelerates the drawing of two- or
three-dimensional graphics. (Since LCD screens are digital devices,
the graphics subsystem sends data to the screen directly rather
than through the DACs.) The storage subsystem uses an internal
hard drive or removable media for the persistent storage of data.
The communications subsystem consists of a high-speed modem or the
electronics necessary to connect the computer to a network.
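As a rough sketch of how software drives the graphics memory buffer described above, the following C program plots a single pixel through the Linux framebuffer device. The device path /dev/fb0 and the 32-bit pixel format are assumptions for illustration; a robust program would also query the driver's row stride (via FBIOGET_FSCREENINFO) rather than computing it from the resolution.

    /* Minimal sketch: write one pixel into a memory-mapped frame buffer. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct fb_var_screeninfo vi;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vi) < 0) { perror("ioctl"); return 1; }

        /* Map the graphics memory buffer into the process's address space. */
        size_t size = (size_t)vi.yres * vi.xres * (vi.bits_per_pixel / 8);
        uint32_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        fb[10 * vi.xres + 10] = 0x00FFFFFF;  /* white pixel near the corner */

        munmap(fb, size);
        close(fd);
        return 0;
    }

Writing to the mapped buffer is all it takes: the display hardware (or the DACs, on an analog monitor) continuously scans this memory out to the screen.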
Microcomputer software is the logic that makes microcomputers
useful. Software consists of programs, which are sets of
instructions that direct the microcomputer through a sequence of
tasks. A startup program in the microcomputer's ROM initializes all
of the devices, loads the operating system software, and starts it.
All microcomputers use an operating system that provides basic
services such as input, simple file operations, and the starting or
termination of programs. While the operating system used to be one
of the major distinctions between personal computers and
workstations, today's personal computer operating systems also
offer advanced services such as multitasking, networking, and
virtual memory. Most microcomputers exploit bit-mapped graphics displays to support windowing operating systems.

CHAPTER 2 FASTER, SMALLER, CHEAPER: EVOLUTION OF MICROCOMPUTERS

A
microcomputer is a computer built around a single-chip
microprocessor. Less powerful than minicomputers and mainframes,
microcomputers have nevertheless evolved into very powerful
machines capable of complex tasks. Technology has progressed so
quickly that state-of-the-art microcomputers are as powerful as
mainframe computers of only a few years ago, at a fraction of the
cost. A major step toward the modern microcomputer (the PC, as it may be
called today) came in the 1960s when a group of researchers at the
Stanford Research Institute (SRI) in California began to explore
ways for people to interact more easily with computers. The SRI
team developed the first computer mouse and other innovations that
would be refined and improved in the 1970s by researchers at the
Xerox Palo Alto Research Center (PARC). The PARC team
developed an experimental PC in 1973 called the Alto, which was
the first computer to have a graphical user interface (GUI).
Two crucial hardware developments would help make the SRI vision of
computers practical. The miniaturization of electronic circuitry as microelectronics, and the invention of the integrated circuit and the microprocessor, enabled computer makers to combine the essential elements of a computer onto tiny silicon chips, thereby increasing computer performance and decreasing cost.
Because a CPU calculates, performs logical operations, contains
operating instructions, and manages data flows, the potential
existed for developing a separate system that could function as a
complete microcomputer. The first such desktop-size system
specifically designed for personal use appeared in 1974; it was
offered by Micro Instrumentation Telemetry Systems (MITS). The
owners of MITS were then encouraged by the editor of Popular
Electronics magazine to create and sell a mail-order computer kit
through the magazine. The Altair 8800 is considered to be the first
commercial PC. The Altair was built from a kit and programmed by
using switches. Information from the computer was displayed by
light-emitting diodes on the front panel of the machine. The Altair
appeared on the cover of Popular Electronics magazine in January
1975 and inspired many computer enthusiasts who would later
establish companies to produce computer hardware and software.
The demand for the microcomputer kit was immediate, unexpected, and
totally overwhelming. Scores of small entrepreneurial companies
responded to this demand by producing computers for the new market.
The first major electronics firm to manufacture and sell personal
computers, Tandy Corporation (Radio Shack), introduced its model in
1977. It quickly dominated the field because of the combination of
two attractive features: a keyboard and a display terminal using a
cathode-ray tube (CRT). It was also popular because it could be
programmed and the user was able to store information by means of
cassette tape. American computer designers Steven Jobs and Stephen
Wozniak created the Apple II in 1977. The Apple II was one of the
first PCs to incorporate a color video display and a keyboard that
made the computer easy to use. Jobs and Wozniak incorporated Apple
Computer Inc. the same year. Some of the new features they
introduced into their own microcomputers were expanded memory, inexpensive disk drives for program and data storage, and color graphics. Apple Computer went on to become the fastest-growing
company in U.S. business history. Its rapid growth inspired a large
number of similar microcomputer manufacturers to enter the field.
Before the end of the decade, the market for personal computers had
become clearly defined. In 1981 IBM introduced its own microcomputer
model, the IBM PC. Although it did not make use of the most recent
computer technology, the IBM PC was a milestone in this burgeoning
field. It proved that the PC industry was more than a current fad,
and that the PC was in fact a necessary tool for the business
community. The PC's use of a 16-bit microprocessor initiated the
development of faster and more powerful microcomputers, and its use
of an operating system that was available to all other computer
makers led to what was effectively a standardization of the
industry. The design of the IBM PC and its clones soon became the
PC standard, and an operating system developed by Microsoft
Corporation became the dominant software running PCs. A graphical user interface (GUI), a visually appealing way to represent computer commands and data on the screen, was first marketed in 1983 when
Apple introduced the Lisa, but the new user interface did not gain
widespread notice until 1984 with the introduction of the Apple
Macintosh. The Macintosh GUI combined icons (pictures that
represent files or programs) with windows (boxes that each contain
an open file or program). A pointing device known as a mouse
controlled information on the screen. Inspired by earlier work of
computer scientists at Xerox Corporation, the Macintosh user
interface made computers easy and fun to use and eliminated the
need to type in complex commands. Beginning in the early 1970s,
computing power doubled about every 18 months due to the creation
of faster microprocessors, the incorporation of multiple
microprocessor designs, and the development of new storage
technologies. A powerful 32-bit computer capable of running
advanced multiuser operating systems at high speeds appeared in the
mid-1980s. This type of PC blurred the distinction between
microcomputers and minicomputers, placing enough computing power on
an office desktop to serve all small businesses and most
medium-size businesses. During the 1990s the price of personal
computers came down at the same time that computer chips became
more powerful. The most important innovations, however, occurred
with PC operating system software. Apple's Macintosh computer
had been the first to provide a graphical user interface, but the
computers remained relatively expensive. Microsoft Corporation's
Windows software came preinstalled on IBM PCs and clones, which
were generally less expensive than Macintosh. Microsoft also
designed its software to allow individual computers to easily
communicate and share files through networks in an office
environment. The introduction of the Windows operating systems,
which had GUIs similar to Apple's, helped make Microsoft the
dominant provider of PC software for business and home use.
PCs in the form of portable notebook computers also emerged in the
1990s. These PCs could be carried in a briefcase or backpack and
could be powered with a battery or plugged in. The first portable
computers had been introduced at the end of the 1980s. The true
laptop computers came in the early 1990s with Apple's PowerBook and IBM's ThinkPad. PCs continue to improve in power and versatility. The
growing use of 64-bit processors and higher-speed chips in PCs in
combination with broadband access to the Internet greatly enhances
media such as motion pictures and video, as well as games and
interactive features. The increasing use of computers to view and
access media may be a further step toward the merger of television
and computer technology that has been predicted by some experts
since the 1990s. In 2012, the Raspberry Pi credit-card-sized
single-board computer was launched, directly inspired by Acorn's
BBC Micro of 1981, and including support for BBC BASIC. Many see the Raspberry Pi as the beginning of a second revolution in low-cost computing.
CHAPTER 3 TRENDS IN MICROCOMPUTER SYSTEM DESIGN

Early
microcomputer designs placed most of their emphasis on CPU
considerations and gave input/output considerations a much lower
priority. Early device designers were not systems oriented and, as
a result, applied most of their efforts to developing a CPU along
the lines of a minicomputer type of architecture. Fortunately,
these designs were general enough so that they could be used in
many applications, but they were certainly not an optimum design
for the types of jobs people want to do with microcomputers. As
these first microcomputers were applied to specific products, the
importance of I/O capability became apparent because of the nature
of the problems being solved. Data was being sensed and input,
limited computations were being made, and the transformed data was
being output. In most cases, the application was a real time
control function with many inputs and outputs such as in a printer
controller or a cash register. We soon learned a fact that the big-computer people had known all along: the CPU is just a small, albeit important, element of an overall computer system; furthermore, the I/O subsystems may be higher in
dollar content than the processor itself and can affect system throughput just as much as, if not more than, the CPU. As a result, a series of
programmable parallel and serial I/O devices were soon developed by
some major microcomputer vendors. These programmable I/O devices
could be configured for various input/output configurations by
means of control words loaded by the CPU; they also performed all
data transfer handshaking protocol. Some vendors, Rockwell being
the leader in this particular area, went a step further and
developed peripheral controllers such as keyboard/display
controllers and various low speed printer controllers which were
functionally independent of the CPU. The CPU controls the transfer
of data and status information to and from these peripheral
controllers; the controllers then perform all detailed peripheral
control functions independent of the CPU. This permits the CPU to
operate in a system-executive mode: tasks are set up and monitored by the CPU, which remains free to execute other work while the detailed task is carried out by the peripheral controller. Thus, the use of distributed processing design
techniques in microcomputer systems began and offered substantial
benefits for the user over the CPU oriented approach. First, the
MOS/LSI peripheral controller represented a significant cost
advantage over peripheral controllers implemented with discrete
logic. The same benefits of MOS/LSI derived in CPU implementations
also apply to peripheral controllers, i.e., low cost, low power,
and functional size reductions. Second, the intelligent peripheral
controller overcame the inherent disadvantage of MOS/LSI, that of
lower speed operation compared to equivalent discrete logic
implementations, by providing parallelism in executing system
functions. Third, the use of intelligent peripheral and I/O
controllers significantly reduced the software complexity of real
time operations. The simultaneous real time control of several
system functions by the CPU can result in very complex software and
greatly complicate the addition of future system functions. In CPU-oriented systems, a practical rule works against the designer: as CPU utilization rises above 60 to 70 percent, the software complexity of real time functions tends to grow roughly exponentially. In microcomputers, it is
extremely important to minimize software complexity since a large
majority of people writing software for them are converted logic
designers, not professional programmers. The software required to
support an intelligent MOS/LSI printer controller, for example,
consists of a simple 5 to 6 instruction loop transferring data to
be printed to that device. In these cases, we are essentially using
dedicated MOS/LSI logic to replace complex software. The next
logical step beyond intelligent controllers in the distributed
processing trend in microcomputers is the use of multiple
microprocessors in the same system. In this approach, multiple
CPUs are used to perform various subassignments in the overall
system design. Much effort has been expended by computer system
designers to solve the general multiprocessor problems of
programming and real time task allocation; this is, of course, a
very difficult problem which requires much additional effort. In
the microcomputer applications, however, the system is designed and
dedicated to a fixed set of tasks. The specific, dedicated tasks
are easily assigned and coded independently, to a large degree; a
solution to the generalized problem is not required in these
specific cases. The multi-processor approach offers a significant
increase in performance over a single microprocessor, and
additionally simplifies overall software requirements since the
multiple CPUs are not loaded nearly as heavily as a single CPU would be in performing the total job. Another aspect of
distributing intelligence throughout the microcomputer system is
the integration of interrupt and DMA handling techniques into the
CPU and various I/O and peripheral controller devices. Again,
methods of dispersing the workload outside of the CPU help prevent
CPU bottlenecks and their attendant problems. In this philosophy, interrupts are self-identifying and require no polling; interrupt requests may be generated by any device, and each device has its own built-in method of prioritization. In this concept, DMA
requests and terminations can also be generated by I/O and
peripheral devices. Because of the low-cost attributes of MOS/LSI, the use of DMA techniques should be carefully re-evaluated.
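The re-evaluation is easy to motivate in code. The first routine below is the kind of short transfer loop described earlier for an intelligent printer controller; the second shows how little CPU work a DMA transfer requires once set up. All of the register addresses and bit assignments (PRN_STATUS, PRN_DATA, DMA_ADDR, DMA_COUNT, DMA_CTRL, the READY and GO bits) are hypothetical, invented for this sketch; a real controller's data sheet defines its own.

    #include <stdint.h>

    /* Hypothetical memory-mapped registers; addresses are illustrative. */
    #define PRN_STATUS (*(volatile uint8_t  *)0xE000)  /* bit 0 = READY      */
    #define PRN_DATA   (*(volatile uint8_t  *)0xE001)  /* byte to be printed */
    #define DMA_ADDR   (*(volatile uint16_t *)0xE010)  /* source address     */
    #define DMA_COUNT  (*(volatile uint16_t *)0xE012)  /* bytes to transfer  */
    #define DMA_CTRL   (*(volatile uint8_t  *)0xE014)  /* bit 0 = GO         */

    /* Programmed I/O: the CPU is tied up for the whole transfer. */
    void print_polled(const uint8_t *buf, uint16_t len)
    {
        while (len--) {
            while ((PRN_STATUS & 0x01) == 0)
                ;                      /* spin until the controller is READY */
            PRN_DATA = *buf++;         /* hand over one byte                 */
        }
    }

    /* DMA: the CPU merely sets up the transfer and is then free for
     * other work; the controller moves the data on its own. */
    void print_dma(const uint8_t *buf, uint16_t len)
    {
        DMA_ADDR  = (uint16_t)(uintptr_t)buf;
        DMA_COUNT = len;
        DMA_CTRL |= 0x01;              /* GO: start the transfer */
    }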
Previously, DMA has been associated only with high speed data
transfers; it has been an expensive option in minicomputers which
has limited its philosophy of usage. Now, in a microcomputer, it is
so inexpensive that it can be considered simply as a means of
reducing system overhead and software complexity. A sizeable
simplification of system software and reduction in CPU overhead can
be achieved even in very slow data transfer situations. Instruction
sets on initial microcomputers were pretty much a scaled down,
basic copy of classical minicomputer instruction sets. As device
functional densities increased, instruction setsgrew under the
influence of two forces. The first force is to add additional
traditional minicomputer instructions; the second force is to add
instructions which more uniquely address specific requirements for
microcomputer types of applications. Typical of these are bit
manipulation instructions, block move instructions, and additional
addressing modes for all instructions. Bit manipulation
instructions become very important in microcomputers because of the
heavy I/O control aspects of most applications. The use of memory-mapped I/O ports with bit set, reset, and bit test instructions provides a new level of capability in bit-oriented I/O control applications.
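For instance, with a memory-mapped output port, these operations reduce to single C statements; the port address and bit assignments below are hypothetical, chosen only to illustrate the pattern.

    #include <stdint.h>

    /* Hypothetical memory-mapped 8-bit port; the address is illustrative. */
    #define PORTA     (*(volatile uint8_t *)0xF000)
    #define MOTOR_ON  (1u << 0)   /* bit 0 drives a motor relay */
    #define LIMIT_SW  (1u << 7)   /* bit 7 reads a limit switch */

    void motor_start(void) { PORTA |=  MOTOR_ON; }             /* bit set   */
    void motor_stop(void)  { PORTA &= ~MOTOR_ON; }             /* bit reset */
    int  at_limit(void)    { return (PORTA & LIMIT_SW) != 0; } /* bit test  */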
Similarly, bit rotation instructions can be utilized to efficiently scan multiple output lines. One trend in instruction sets is to
remove the instruction "action" from the CPU registers to memory
locations or I/O locations, at least from the programmer's point of view. These macro-type instructions represent a first step in
utilizing more complex control logic to obtain, in some degree, the
effect of a higher level programming language. The major challenge
of system architects is to judiciously combine and integrate
hardware and software techniques to reduce by an order of magnitude
the initial effort required to program a microcomputer
application.

3.1 MICROPROCESSOR DESIGN

The CPU is a very important
element of an overall computer system. A processor is at the heart
of every computer system built today. Around this processor are the other components that make up a computer: memory for instruction and data storage, and input/output devices, such as disk controllers, graphics cards, keyboard interfaces, and network adapters, that communicate with the rest of the world. The purpose of the processor is to execute machine
instructions. Thus, the logical operation of a processor is defined
by the instruction set architecture (ISA) that it executes.
Multiple different processors can implement the same ISA. What
differentiates such processors is their processor architecture,
which is the way that each processor is organized internally in
order to achieve the goal of implementing its ISA. In the past
thirty years, computer technology advances have fundamentally
changed the practice of business and personal computing. During
these three decades, the wide acceptance of personal computers and
the explosive growth in the performance, capability, and
reliability of computers have fostered a new era of computing. The
driving forces behind this new computing revolution are due
primarily to rapid advances in computer architecture and
semiconductor technologies. This section will examine key
architectural and process technology trends that affect
microprocessor designs.

Microprocessor Evolution In 1965 Gordon
Moore observed that the total number of devices on a chip doubled
every 12 months at no additional cost. He predicted that the trend
would continue in the 1970s but would slow after 1975. Known widely
as Moore's Law, these observations made the case for continued
wafer and die size growth, defect density reduction, and increased
transistor density as technology scaled and manufacturing matured.
Transistor count in leading microprocessors has doubled in each
technology node, approximately every 18 to 24 months. Factors that
drove up transistor count are increasingly complex processing cores,
integration of multiple levels of caches, and inclusion of system
functions. Microprocessor frequency has doubled in each generation, the result of a 25% reduction in gates per clock, faster transistors, and advanced circuit design. Die size has increased at
7% per year while feature size reduced by 30% every 2 to 3 years.
Together, these fuel the transistor density growth as predicted by
Moore's Law. Die size is limited by the reticle size, power
dissipation, and yield. Leading microprocessors typically have
large die sizes that are reduced with more advanced process
technology to improve frequency and yield. As feature size gets
smaller, longer pipelines enable frequency scaling, which has been a key driver of performance.
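A back-of-the-envelope calculation shows the force of this doubling: if transistor count doubles every two years, a design grows by a factor of 2^(n/2) after n years. The short C program below, offered purely to illustrate the arithmetic, projects counts from the Intel 4004's 1971 baseline of roughly 2,300 transistors.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double base = 2300.0;                   /* Intel 4004, 1971 */
        for (int year = 1971; year <= 2011; year += 10) {
            double n = year - 1971;
            /* doubling every 2 years => growth factor 2^(n/2) */
            printf("%d: ~%.0f transistors\n", year, base * pow(2.0, n / 2.0));
        }
        return 0;
    }

After 40 years the factor is 2^20, about a million, taking 2,300 transistors to roughly 2.4 billion; that is the same order as the multi-billion-transistor processors actually shipping four decades after the 4004.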
Transistor Scaling Device physics poses several challenges to
future scaling of the bulk MOSFET structure. Leakage through the
gate oxide by direct band-to-band tunneling limits physical oxide
thickness scaling and will drive high-k gate material adoption.
Sub-threshold leakage current will continue to increase.
Researchers have demonstrated experimental devices with a gate length of only 15 nm, which will enable chips with more than one
billion transistors by the second half of this decade. While bulk
CMOS transistor scaling is expected to continue, novel transistor
structures are being explored.
Interconnect Scaling As advances in lithography decrease feature
size and transistor delay, on-chip interconnect increasingly
becomes the bottleneck in microprocessor designs. Narrower metal
lines and spacing resulting from process scaling increase
interconnect delay. Local interconnects scale proportionally to
feature size. Global interconnects, primarily dominated by RC
delay, are not only insufficient to keep up but are rapidly
worsening. Repeaters can be added to mitigate the delay but consume
power and die area. Low resistivity copper metallization and low-k
materials such as fluorine-doped SiO2 (FSG) are employed to mitigate the worsening interconnect delay. In the long term, a radically
different on-chip interconnect topology is needed to sustain the
transistor density and performance growth rates as in the last
three decades.
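The arithmetic behind this trend is worth making explicit. A wire of length l has delay roughly proportional to R * C, where R grows as l divided by the cross-sectional area A, and C grows with l, so delay scales on the order of l^2 / A. Local wires shrink along with the transistors, so their delay keeps pace; global wires retain die-scale length while A shrinks every generation, so their RC delay worsens in both absolute and relative terms. Inserting repeaters splits l into segments and makes total delay closer to linear in l, but each repeater costs the power and die area noted above.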
Packaging The microprocessor package is changing from its
traditional role of protective mechanical enclosure to a
sophisticated thermal and electrical management platform. Recent
advances in microprocessor packaging include the migration from
wirebond to flip-chip and from ceramic to organic package
substrates. Looking forward, emerging package technologies include
the bumpless build-up layer (BBUL) packages, which are built around
the silicon die. The BBUL package provides the advantages of
small electrical loop inductance and reduced thermomechanical
stresses on the die interconnect system using low dielectric
constant (low-k) materials. This packaging technology allows for
high pin count and easy integration of multiple electronic and
optical components.
Power Dissipation Power dissipation increasingly limits
microprocessor performance. The power budget for a microprocessor
is becoming a design constraint, similar to the die area and target
frequency. Supply voltage continues to scale down with every new
process generation, but at a lower rate that does not keep up with
the increase in the clock frequencyand transistor count. Power
increases with frequency for two processor architectures and the
last two process generations. Architectural techniques like on-die
power management, and circuit methods such as clock gating and
domino to static conversion, are employed to control the power
increase of future microprocessors.
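The first-order model of CMOS switching power makes the constraint explicit:

    P_dynamic ~ a * C * V^2 * f

where a is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. If frequency doubles while the supply drops only from, say, 1.4 V to 1.2 V, power still grows by roughly 2 * (1.2/1.4)^2, about 1.5x, which is why techniques such as clock gating, which reduces the effective a * C, have become standard.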
Clock Speed Microprocessor clock speed increases with faster
transistors and longer pipelines. Frequency scales with process
improvements for several generations of Intel microprocessors with
different microarchitectures. Holding process technology constant, as the number of pipeline stages increased from 5 to 10 to 20 from the original Intel Pentium through the Pentium 4, clock speeds increased significantly. Frequency increases have translated into
higher application performance. Additional transistors are used to
reduce the negative performance impact of long pipelines; an
example is increasingly sophisticated branch predictors. Process
improvements also increase clock speed in each processor family
with similar number of pipe stages. Later designs in a processor
family usually gain a smaller frequency advantage from process
improvements because many microarchitectural and circuit tunings
have been realized in earlier designs. Some of the later
microprocessors are also targeted to a power-constrained
environment that limits their frequency gain.
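A simple model shows why deeper pipelines raised frequency: if the critical path holds G gate delays of logic plus a fixed latch overhead of o delays per stage, an s-stage pipeline needs a cycle time of about G/s + o. With illustrative numbers G = 100 and o = 2, moving from s = 5 (22 delays per cycle) to s = 20 (7 delays per cycle) roughly triples the clock rate; the diminishing returns and the growing branch-misprediction cost are the price noted above.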
Cache Memory Microprocessor clock speeds and performance demands
have increased over the years. Unfortunately, external memory
bandwidth and latency have not kept pace. This widening
processor-to-memory gap has led to increased cache sizes and
increased number of cache levels between the processing core(s) and
main memory. As frequency increases, first level cache size has
begun to decrease to maintain low access latency, typically 1 to 2
clocks. As aggregate cache sizes increase in symmetric
multiprocessor systems (SMP), the ratio of conflict, capacity, and
coherency misses, or cache-to-cache transfers, will change. Set
associative caches will see reductions in conflict and capacity misses as cache sizes increase. However, these increases
will have smaller impact on coherency misses in large SMP systems.
This motivates system designers to optimize for
cache-to-cache transfers over memory-to-cache transfers. Two
approaches to achieve this are hyper-threading, also known as
multithreading, and chip multiprocessing (CMP).
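The payoff of the cache hierarchy can be summarized with the usual average memory access time (AMAT) relation:

    AMAT = hit time + miss rate * miss penalty

For example, a 2-cycle first-level cache with a 5% miss rate in front of a 200-cycle memory averages 2 + 0.05 * 200 = 12 cycles per access; interposing a 20-cycle second-level cache that satisfies 80% of those misses improves this to 2 + 0.05 * (20 + 0.2 * 200) = 5 cycles.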
Input/Output Performance increases lead to higher demand for
sustainable bandwidth between a microprocessor and external main
memory and I/O. This has led to faster and wider external buses. High-speed point-to-point interconnects are replacing shared buses
to satisfy increasing bandwidth requirements. Distributed
interconnects will provide a more scalable path to increasing external bandwidth when the practical limit of a single pin is reached.
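The arithmetic behind this pressure is straightforward: peak bus bandwidth is width times transfer rate, so a 64-bit (8-byte) shared bus clocked at 100 MHz tops out at 800 MB/s for all attached agents combined. A point-to-point link of the same raw rate dedicates its bandwidth to one pair of agents, and links can be added per device, which is why such interconnects scale better.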
CHAPTER 4 FEATURES OF TODAY'S MICROCOMPUTERS

Today's microcomputers address the challenge of high bandwidth and the need for state-of-the-art computational performance. They are faster, smaller, and more affordable. Current state-of-the-art microcomputer system designs have the following features in common:
Modular Architecture To support the creation of a wide range of implementations, the architecture is modular.
A basic implementation might comprise a single processor unit with
four functional units. By replicating those design elements, an
implementation can be built that includes a few or even hundreds of
processors, each with four functional units, each of which can
operate on many data items simultaneously with parallel-operation
(SIMD) instructions. Conversely, a tiny application specific
implementation can be derived from the basic one by trimming the
complement of functional units down to one or two and/or removing
hardware support for any instructions not needed in its target
application. Software Portability The architecture is designed to
efficiently execute code generated by installation-time or
just-in-time (JIT) compilation techniques. This allows
implementations to evolve over time without accumulating the
baggage required to support old binaries, as traditional
architectures have always done. Instead, software portability
across implementations is obtained through use of
architecture-neutral means of software distribution. Multiple
Levels of Parallelism The architecture provides the ability to
exploit parallelism at many levels: at the data-word level through
SIMD instructions, at the instruction level through multiple
functional units per processor, at the thread-of-execution level
through support for multithreaded software, and at the system level
through its intrinsic support for "MPs-on-a-chip" (multiple
processor units per implementation). An implementation with more
than one functional unit per processor unit provides MSIMD: multiple single-instruction, multiple-data parallelism. Multiple
Processor Units per Cluster Although an implementation can be a
single processor unit, many architectures today explicitly
incorporate the concept of multiple processors per implementation.
Given 21st century semiconductor density, each such array of
processor units, or "processor cluster", can be implemented on a
single chip. As semiconductor technology advances, clusters with
more processors per chip can be implemented. Multiple Functional Units per Processor Unit A processor unit can issue multiple instructions simultaneously, one to each of its functional units.
Most implementations are expected to provide two to four functional
units per processor unit. Multithreaded Software Execution of
multithreaded software comes naturally given the architecture's
ability to execute multiple threads simultaneously on multiple
processor units.
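As a minimal illustration of this kind of execution, expressed with POSIX threads rather than any particular microcomputer's native interface, the sketch below starts one worker per processor unit; the count of four processor units is an assumption.

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4   /* assumed number of processor units */

    /* Each thread can be scheduled onto its own processor unit. */
    static void *worker(void *arg)
    {
        printf("worker %ld running\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NWORKERS];
        for (long i = 0; i < NWORKERS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }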
SIMD Instructions At the lowest level of parallelism, several
architectures provide SIMD (Single Instruction/ Multiple Data) or
"vector" instructions. A SIMD instruction executing in a single
functional unit could perform the same operation on multiple data
items simultaneously. Integral Support for Media-Rich Data The
architecture is particularly well-suited for processing media-rich
content because it directly supports common media data types and
can process multiple simultaneous operations on that data.
Processing power is multiplied on three levels: Single
Instruction/Multiple Data (SIMD) DSP-like instructions in each
functional unit, multiple functional units per processor unit, and
multiple processor units per processor cluster. Balanced Performance: Processor versus Memory and I/O The microcomputer implementation is designed to use several techniques to balance processor speed with access to external memory and I/O devices: hundreds of general-purpose registers per processor unit, which reduce the frequency of memory accesses; Load-Group instructions, which increase bandwidth into the processor by simultaneously loading multiple registers from memory or an I/O device; and store buffering, which increases bandwidth out of the processor by optimizing Store operations initiated by software.
Data Type-Independent Registers The general-purpose register
file in some microcomputer implementations is data-type-agnostic:
any register can hold information of any data type and be accessed
by any instruction. In particular, there is no distinction between
integer and floating-point registers. This allows registers to be
allocated as needed by each application, without restrictions
imposed by hardware partitioning of the register set. Instruction
Grouping Grouping instructions across multiple functional units can
be performed dynamically in hardware (as in a superscalar
processor), statically by a compiler, or by some combination of the
two. Rather than devote valuable chip area to hardware grouping logic, some implementations leave grouping to the compiler. Data and Address Size An implementation may provide either
32- or 64-bit addressing and data operations, as dictated by the
needs of its target applications. Context Switch Optimization
Process (task) context switch time can be reduced by using the
architecture's "register dirty bits", which allow an operating
system to minimize the number of registers saved and restored
during a context switch.
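A hedged sketch of the bookkeeping: if hardware exposed a per-register dirty bitmap, the operating system would save only the registers whose bits are set. Everything below, from the 32-register file to the simulated dirty mask, is hypothetical; it illustrates the idea, not any real architecture's mechanism.

    #include <stdint.h>
    #include <stdio.h>

    #define NREGS 32

    /* Hypothetical register file and dirty bitmap, simulated in software
     * here so the sketch runs; real hardware would maintain the bits. */
    static uint64_t regs[NREGS];
    static uint32_t dirty_bits;      /* bit i set => regs[i] was modified */

    static void write_reg(int i, uint64_t v) { regs[i] = v; dirty_bits |= 1u << i; }

    /* Save only the registers the outgoing task actually modified. */
    static uint32_t save_context(uint64_t save_area[NREGS])
    {
        uint32_t dirty = dirty_bits;
        for (int i = 0; i < NREGS; i++)
            if (dirty & (1u << i))
                save_area[i] = regs[i];   /* clean registers are skipped */
        dirty_bits = 0;                   /* next task starts clean */
        return dirty;                     /* records which slots are valid */
    }

    int main(void)
    {
        uint64_t area[NREGS];
        write_reg(3, 42);                 /* task dirties a single register */
        printf("dirty mask saved: 0x%08x\n", (unsigned)save_context(area));
        return 0;
    }

Restoring is symmetric: the saved mask tells the operating system which slots to write back, so a task that touched three registers pays for three loads instead of thirty-two.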
4.1 AN OVERVIEW OF TODAY'S SINGLE-BOARD MICROCOMPUTERS

There are
quite a few single-board microcomputers out there today to choose
from. Some of them offer better performance and more memory than
others; some of them have a great variety of connectors while
others have only the necessary minimum. Some of these devices can
connect to custom hardware through general purpose input-output
pins, while others are more integrated and less customizable. Most
of them are based on the ARM architecture, which restricts their
use to operating systems like Linux, Android, etc., but a few
surprise us with an x86 design and can even run Windows. Although
they are generally small, there still are significant differences
between them in size. Some of them target home users while others
are built for hackers and computer experts. And last, but not
least, the prices of these microcomputers can differ a lot. These
microcomputers include: Olimex A13 OLinuXino, ODROID-X2,
BeagleBone, Cubieboard, Gooseberry Board, Hackberry Board, FOXG20,
etc. Our focus will be on the Raspberry Pi, which is undoubtedly the most famous single-board micro PC.

Raspberry Pi Designed and
marketed by the non-profit Raspberry Pi Foundation, manufactured by
Sony UK and sold by Farnell / Element14, the Raspberry Pi is,
without a doubt, the most famous small computer (single-board micro
PC) today. Its creation revolves around a noble cause: the Raspberry Pi Foundation aims to give the world, especially
children, a very cheap computer which they can use to learn
programming and to put their creativity to work in general.
Released in early 2012, the Raspberry Pi combines some very
appealing hardware characteristics, like fairly good performance
(the 700 MHz ARM CPU can be overclocked to 1 GHz; 256 MB memory for
model A and 512 MB memory for model B), extremely low power
consumption (1.5 W max for model A and 3.5 W max for model B),
which makes it suitable for battery-powered independent projects,
and custom connectivity to special hardware through programmable
GPIO pins. Combine all this with a very low price ($25 for model A and $35 for model B) and a large, helpful community, and you
definitely have a winner if you want to choose a fairly good small
computer which can run Linux for example (or Android, RISC OS,
etc.) and which can run all kinds of applications that don't
need a lot of resources (for home use, for a small server or as
part of a custom hardware system). It is probably the best choice
also in case you want to take the easy road into the world of
microcomputers because of its popularity, which translates to a
huge number of Raspberry Pi owners who can and will probably help
you with any questions or problems that you may encounter.

Specifications:
CPU: ARM1176JZF-S 700 MHz processor
GPU: VideoCore IV, capable of Blu-ray-quality playback, with OpenGL ES 2.0
RAM: 256 MB (model A) or 512 MB (model B)
Connectors: USB, HDMI, composite RCA, 3.5 mm audio jack, 10/100 Ethernet, micro USB power
Storage: SD/MMC/SDIO card
CHAPTER 5 CONCLUSION

In this paper, I have presented an overview
of the past developments and current state of the art of
microcomputers and microcomputer system design. New trends are toward high compute density, low cost, power efficiency, and high system reliability. From the Intel 4004 of 1971, with one core, no cache, and about 2,300 transistors, through the Intel 8086 of 1978, with one core, no cache, and 29K transistors, to the Intel Nehalem-EX of 2009, with 8 cores, 24 MB of cache, and 2.3B transistors, we see that there has been great success in the evolution of the microcomputer and its design, with a focus on arriving at computers that are faster, smaller, more affordable
and more efficient. The aim of a microcomputer designer is to
achieve an architecture that controls complexity, maximizes performance, and minimizes cost; presently we have small PCs like the Raspberry Pi beginning a second evolution of microcomputers, much as Commodore began the first. Besides the ever
increasing transistor count and device speed, future
implementation technologies also pose significant challenges to the
architect to get the highest benefit and best possible performance
from the technology advances. The scope and range of microcomputer capabilities broaden every day as software becomes more sophisticated, but they remain largely dependent upon the data-storage capacity and data-access speed of each computer.
REFERENCES

1. An overview and comparison of today's single-board microcomputers. (January 2013). Retrieved May 15, 2014, from http://www.iqjar.com/jar/an-overview-and-comparison-of-todays-single-board.
2. P. Ceruzzi, A History of Modern Computing, second printing, 1999.
3. G. Moore, "Cramming more components onto integrated circuits," Electronics, Vol. 38, No. 8, April 19, 1965.
4. H. Goldstine, The Computer: From Pascal to von Neumann, 2nd printing, 1973.
5. H. Esmaeilzadeh, T. Cao, X. Yang, S. M. Blackburn, and K. S. McKinley, "Looking back and looking forward: power, performance, and upheaval," Communications of the ACM 55, 7 (July 2012), 105-114.
6. J. Hennessy and D. Patterson, Computer Architecture: A Quantitative Approach, 2nd ed., 1996.
7. McGraw-Hill Science & Technology Encyclopedia: "Microcomputer."
8. "Microcomputer," Microsoft Encarta 2009 [DVD]. Redmond, WA: Microsoft Corporation, 2008.
9. S. Borkar and A. A. Chien, "The Future of Microprocessors," Communications of the ACM, Vol. 54, No. 5, pp. 67-77, doi:10.1145/1941487.1941507.
10. Standard Performance Evaluation Corporation. (April 2002). Retrieved May 16, 2014, from http://www.spec.org.
11. J. E. Bass, "Trends in Microcomputer Technology," Rockwell International, 1977.