CHAPTER 1: INTRODUCTION TO COMPUTER GRAPHICS
1.1. INTRODUCTION:
The science and art of creating, storing, and manipulating pictures with the aid of a
computer is called computer graphics. Computer graphics allows communication
through pictures, charts, and diagrams, and nowadays it plays a key role in almost
every application.
Interactive Computer Graphics (active): In interactive computer graphics, the user
controls the content, structure, and appearance of objects and their displayed
images by using input devices such as a keyboard, mouse, or touch-sensitive panel on
the screen. Interactive computer graphics involves two-way communication: the
computer receives signals from the input device and modifies the displayed image
accordingly. We can make pictures not only of real-world objects, but also of
abstract mathematical surfaces in 3D and of data that have no inherent geometry.
Non-interactive computer graphics (passive): In non-interactive computer
graphics the user has no control over the content, structure, and appearance of
objects and their displayed images. A major use of computer graphics is in design,
particularly for engineering and architectural systems. Computer-aided design (CAD)
is a branch of computer graphics that helps in the creation, modification, and
optimization of a design or product such as buildings, automobiles, aircraft,
watercraft, spacecraft, textiles, etc.
Applications: Computer graphics is used in various fields of science, engineering,
medicine, research, entertainment, and advertising. Some of the major application
areas of computer graphics are listed below:
(i) Construction: Computer graphics is a very useful tool for generating
architect's drawings and visualizing structures. From data about a building, the
computer can render images of the structure from different angles. Architects can
use interactive graphics procedures to lay out floor plans, positioning rooms,
doors, windows, and other building features.
(ii) Electronic design: Electronic devices like radio, television, calculators and
computers are made up of many small electronic components, mounted on printed
circuit boards. Using an interactive computer graphics system, engineers can draw a
circuit in less time, and the computer can help check the design and make
modifications quickly.
(iii) Mechanical Design: Computer Graphics is used to design systems of
mechanical devices. Before manufacturing any machine, an engineer first designs
and simulates it on the computer.
(iv) Entertainment and Animation: Computer graphics plays a major role in
making motion pictures and games. A computer game is an audiovisual engine capable
of maintaining an internal model of some dynamic system. Games are based on
interaction between the user and the computer.
(v) Education and training: Computer graphics is used to produce pictures that
express a message and attract attention. Computer generated pictures are often
used as educational aids. For some training applications, specialized systems are
designed; examples include simulators used in practice sessions for training
aircraft pilots and ship captains. Finally, the production of slides for commercial,
scientific, and educational presentations is another use of graphics.
(vi) Medical Technology: Today, Computer-Assisted Tomography (CAT) is used to
compose a 3D model of the brain from multiple X-ray images. CAT images are very
useful in detecting diseases such as brain tumors. Computers are also used to
generate images of delicate organs and to study the effect of drugs on tissues.
Medical applications also make extensive use of image-processing techniques for
picture enhancement.
(vii) Cartography: Computer graphics is used to produce accurate, schematic
representations of geographical and other natural phenomena from measurement
data. Examples include geographic maps, exploration maps for drilling and mining,
oceanographic charts, weather maps, and population-density maps.
(viii) Business: This is one of the simplest and most widely used applications of
computer graphics. Graphs and charts are a common requirement of any business
presentation. Using a bar chart, comparative information can be presented easily by
plotting data along the axes, and charts can also be used to show the percentage
breakdown of the individual components of a whole.
1.2 GRAPHICS SYSTEM SOFTWARE (GRAPHICS STANDARDS):
Graphics software is a fundamental component of any graphics display system.
Systems with the most expensive and sophisticated hardware devices would be useless
to many people without powerful and user-friendly software. In 1979 the Graphics
Standards Planning Committee designed the Core Graphics System (CORE). This system
provided a standardized set of commands to control the construction and display of
graphic images. These commands were independent of the devices used to create and
display the image, and independent of the language in which the graphics program
was written. The CORE system contained mechanisms for describing and displaying
both 2D and 3D structures.
A second standard, the Graphical Kernel System (GKS), was developed in Europe. GKS
has excellent two-dimensional features but did not contain constructs for 3D
objects. It introduced the concept of workstations, which allowed a single graphics
program to control several terminals.
Another graphics standard is the Programmer's Hierarchical Interactive Graphics
System (PHIGS). It includes the features of both CORE and GKS, and also supports a
detailed database of structures that the programmer may edit selectively. Although
PHIGS presents a specification for graphics functions, it does not provide a standard
methodology for a graphics interface to output devices, nor does it specify methods
for transmitting and storing pictures.
Two other graphics standards are the Computer Graphics Metafile (CGM) and the
Computer Graphics Interface (CGI). CGM is a file format for picture information that
allows device independent capture, storage and transfer. The CGI is a companion
standard which provides a procedural interface for CGM primitives.
1.3 INTRODUCTION TO GEOMETRY:
Points:
A point in a plane can be specified with an ordered pair of numbers (x, y), where x is the
horizontal distance from the origin and y is the vertical distance from the origin.
Fig 1.1
Lines:
Two points specify a line. Lines are described by an equation such that if a point (x, y)
satisfies the equation, then the point is on the line. If the two points used to specify a line
are (x1, y1) and (x2, y2), then the equation of the line is given by:
(y - y1)/(y2 - y1) = (x - x1)/(x2 - x1)
The distance between any two points P1 and P2 with coordinates (x1, y1) and (x2, y2) can be
found using the formula:
D = [(x2 - x1)² + (y2 - y1)²]^(1/2)
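As a quick illustration, the two-point line equation and the distance formula above can be checked in a few lines of Python (the function names here are ours, chosen for this sketch):

```python
import math

def distance(p1, p2):
    """Distance D = [(x2 - x1)^2 + (y2 - y1)^2]^(1/2)."""
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def on_line(p, p1, p2, eps=1e-9):
    """Test whether p satisfies the two-point line equation. The equation
    is cross-multiplied so that vertical lines do not divide by zero:
    (y - y1)(x2 - x1) = (x - x1)(y2 - y1)."""
    (x, y), (x1, y1), (x2, y2) = p, p1, p2
    return abs((y - y1) * (x2 - x1) - (x - x1) * (y2 - y1)) < eps

print(distance((0, 0), (3, 4)))          # 5.0
print(on_line((2, 2), (0, 0), (4, 4)))   # True: (2, 2) lies on y = x
```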
Fig 1.2
Line Segment:
Sometimes in computer graphics we need to display only pieces of lines. The set of points
on a line that lie between two endpoints P1 and P2 is called a line segment.
Slope:
In a coordinate system the description of a line involves an equation, which enables us to
find the coordinates of all the points that make up the line. The fact that the line is
straight is captured by a quantity called the slope (m) of the line.
Slope, m= tan θ
Fig 1.3
where θ is the angle formed by the line with the positive x-axis.
Midpoint: The midpoint of a line segment is the point halfway between its two endpoints:
its x-coordinate is halfway between x1 and x2, and its y-coordinate is halfway between
y1 and y2, i.e. the midpoint is ((x1 + x2)/2, (y1 + y2)/2).
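Both quantities are easy to compute directly; the sketch below (helper names are our own) also handles the vertical-line case, where the slope is undefined:

```python
def slope(p1, p2):
    """Slope m = tan(theta) = (y2 - y1)/(x2 - x1); None for a vertical line."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        return None  # theta = 90 degrees, tan(theta) is undefined
    return (y2 - y1) / (x2 - x1)

def midpoint(p1, p2):
    """Midpoint ((x1 + x2)/2, (y1 + y2)/2) of the segment P1P2."""
    (x1, y1), (x2, y2) = p1, p2
    return ((x1 + x2) / 2, (y1 + y2) / 2)

print(slope((0, 0), (2, 2)))     # 1.0, i.e. theta = 45 degrees
print(midpoint((0, 0), (4, 6)))  # (2.0, 3.0)
```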
Fig 1.4
Vector:
A vector has a magnitude and a direction; it can be represented by any of the line
segments that have the same length and direction. A vector may be denoted (Dx, Dy),
where Dx indicates how far to move along the x-axis direction and Dy indicates how
far to move along the y-axis direction.
Fig 1.5
Vectors have no fixed position in space. They tell us how far and what direction to move, but
they do not tell us where to start. The idea of vector is useful because it closely parallels the
manner in which a pen draws lines on paper or an electron beam draws lines on a cathode
ray tube. Two vectors may be added by adding their respective components.
V1 + V2 = [Dx1, Dy1] + [Dx2, Dy2] = [Dx1 + Dx2, Dy1 + Dy2]
We can also multiply a vector by a number by multiplying each of its components:
n·V = n[Dx, Dy] = [nDx, nDy]
This preserves the vector's direction but changes its magnitude. The vector's length
gives a measure of that magnitude.
|V| = (Dx² + Dy²)^(1/2)
If we multiply a vector by the reciprocal of its length, the result is a vector with
length equal to 1. Such a vector is called a unit vector.
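The vector operations above translate directly into code; here is a minimal Python sketch (the helper names are ours):

```python
import math

def add(v1, v2):
    """Component-wise addition: [Dx1 + Dx2, Dy1 + Dy2]."""
    return (v1[0] + v2[0], v1[1] + v2[1])

def scale(n, v):
    """Scalar multiple [n*Dx, n*Dy]: same direction, scaled magnitude."""
    return (n * v[0], n * v[1])

def length(v):
    """|V| = (Dx^2 + Dy^2)^(1/2)."""
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

def unit(v):
    """Scale v by the reciprocal of its length to get a unit vector."""
    return scale(1.0 / length(v), v)

v = add((3, 0), (0, 4))   # (3, 4)
print(length(v))          # 5.0
print(length(unit(v)))    # 1.0 (up to floating-point rounding)
```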
Pixels and Frame Buffers:
Pixel: In computer graphics, pixel is short for picture element. It is the smallest
addressable element on the screen. In the simplest case a pixel has only two values,
typically black and white, and can be encoded by a single bit of information. If more
bits are used to represent a pixel, a larger number of colors or intensity levels can
be represented.
Fig 1.6 Pixel
Frame Buffer: The intensity values of all the pixels for a computer-modeled object are
placed into an array in the computer’s memory. This array, which contains an internal
representation of the image, is called a frame buffer. It collects and stores pixel values for
use by the display device.
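Frame buffer size follows directly from the resolution and the number of bits per pixel; the resolutions below are only examples:

```python
def frame_buffer_bytes(width, height, bits_per_pixel):
    """Memory needed to hold one intensity value per pixel."""
    return width * height * bits_per_pixel // 8

# A 640 x 480 single-bit (black-and-white) frame buffer:
print(frame_buffer_bytes(640, 480, 1))    # 38400 bytes
# The same resolution with 24 bits per pixel for full color:
print(frame_buffer_bytes(640, 480, 24))   # 921600 bytes
```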
Fig 1.7 A single bit Frame buffer raster CRT graphic device
1.4 VIDEO DISPLAY DEVICES:
The graphics capabilities of a computer system basically depend on the hardware
attached to it and also on the software support available. Engineering design
applications demand high quality graphic images. Additional devices are required to
produce such high quality images. Many graphic output devices are available now
with varying capabilities to display images and produce engineering drawings.
The most commonly used display technologies with personal computers are:
Cathode Ray Tube Displays (CRT displays)
Flat-Panel Displays
1.4.1 Cathode Ray Tube Displays:
Cathode Ray Tube:
Cathode Ray Tube has always been the predominant display device for interactive
computer graphics. CRT is reliable, inexpensive and capable of displaying rapidly
changing pictures. The main problems with CRT are the very high voltage it requires,
its bulkiness and its weight. In a cathode ray tube, there is a cathode, which when
heated emits a continuous stream of electrons. The beam of electrons passes
through focusing and deflection systems that direct the beam towards specific
positions on the phosphor-coated screen. The phosphor then emits a small spot of
light at each position struck by the electron beam. The light output of phosphor falls
off rapidly after the electron beam has passed by, and a steady picture is maintained
by tracing it out rapidly and repeatedly. This process is called refresh and is
performed at least 30 times a second.
Electron Gun: The electron gun makes use of electrostatic fields to focus and
accelerate the electron beam. A field is generated when two surfaces are raised to
different potentials; electrons within the field tend to travel toward the surface
with the more positive potential, and the force attracting an electron is directly
proportional to the potential difference. The figure below illustrates the basic
structure of a CRT. A CRT is an evacuated glass tube with a heating element at one
end and a phosphor-coated screen at the other. When a current flows through this
heating element (called the filament), the conductivity of the metal filament is
reduced by the high temperature. This causes electrons to pile up on the filament,
because they cannot move as fast as they would like to. Some of these electrons
actually boil off the filament.
These free electrons are attracted toward the strong positive charge on the outer
surface of the focusing anode cylinder (sometimes called an electrostatic lens).
However, the inside of the cylinder carries a weaker negative charge. Thus as the
electrons head toward the anode, they are forced into a beam and accelerated by the
repulsion of the inner cylinder walls. By the time the electrons get out, they are
moving so fast that they fly past the anode they were heading for.
Next the electrons run into two sets of weakly charged deflection plates. These plates
carry opposite charges, one positive and the other negative. While their charge is not
strong enough to capture the fast-moving electrons, they do influence the path of the
beam. The first set displaces the beam up and down (vertically), and the second
displaces it left and right (horizontally). The electrons are then sent flying out of
the neck of the tube until they strike the phosphor coating at the other end. (In many
CRTs the deflection is done magnetically instead, by a coil assembly around the neck
called a yoke.) The impact of this collision on the outer valence bands of the
phosphor compounds causes some electrons to jump to another band. This causes a
few photons to be generated, resulting in a spot on the CRT's face.
Fig 1.8 Construction of a CRT
CRTs were embraced as output devices very early in the development of digital
computers. Most likely, oscilloscopes were some of the first computer graphics
displays. The results of computations could be used to directly drive the vertical and
horizontal deflection plates in order to draw lines on the CRT's surface. By varying
the current to the heating filament, the output of the electron beam could also be
controlled. This allowed the intensity of the lines to vary from bright to completely
dark. These early CRT displays were called vector, calligraphic or stroke displays. The
most commonly used cathode ray tube display technologies are:
Raster Scan Displays
Random Scan Displays
Raster scan displays: During the late 1950s and early 1960s, broadcast television
began to take off. Televisions are basically oscilloscopes. The main difference is that
instead of having complete control over the vertical and horizontal deflection, a
television sweeps its trace across the entire face in a regular, fixed pattern. This
scanning pattern proceeds from the top-left of the screen to the bottom-right as
shown in the diagram. The result is that the entire screen is painted every 1/30th
of a second.
Fig 1.9 Raster Display mechanism
Televisions were mass-produced and inexpensive. For a computer to paint the entire
screen it needs only to synchronize its painting with the constant scanning pattern of
the raster. The solution was to add a special memory that operated synchronously
with the raster scanning of the TV. But while televisions were cheap, memory wasn't.
So there was a long period where the patterns were scanned out of cheap
high-density read-only memories, called character generators. The trick was to use a
single 8-bit code to select an 8-by-12 character pattern from the ROM; with a few
addressing tricks one could build a nice display (80 by 25 characters) with only
2 kilobytes of memory. Thus the era of the CRT terminal was born.
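The arithmetic behind that saving is easy to verify: one byte per character cell, versus one bit per pixel for the equivalent bitmap.

```python
# Character-generator display: one 8-bit code per cell selects an
# 8 x 12 dot pattern stored in ROM.
cols, rows = 80, 25
screen_bytes = cols * rows                     # one byte per character cell
print(screen_bytes)                            # 2000 bytes, roughly 2 KB

# The equivalent full bitmap at 1 bit per pixel would need far more:
bitmap_bytes = (cols * 8) * (rows * 12) // 8   # 640 x 300 pixels
print(bitmap_bytes)                            # 24000 bytes
```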
There were a few attempts at building systems with downloadable or programmable
character generators. And a few systems added an extra byte to specify the
foreground and background colors of the character cell. Lots of tank/maze arcade
games in the 70's worked this way.
Fig1.10 Raster scan Display System
Random-Scan Displays:
Random scan displays draw pictures as line segments: a straight line can be drawn
directly from any addressable point to any other addressable point. These devices
are also called calligraphic (vector refresh) displays. A random-scan CRT directs the
electron beam only to the parts of the screen where a picture is to be drawn. The
refresh rate on a random scan system depends on the number of lines to be displayed.
Picture definition is stored as a set of line-drawing commands in an area of memory
called the display file (or refresh buffer). To display an image, the system cycles
through the set of commands in the display file; after all commands have been
processed, the system cycles back to the first drawing command in the list. Random
scan displays are designed to redraw all the lines of a picture 30 to 60 times per
second. These displays are well suited to line-drawing applications such as
engineering and scientific drawings, whereas raster scan displays are used to
provide animation.
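A display file of line-drawing commands can be sketched as follows (the command names and structure are our own illustration, not an actual display-processor format):

```python
# Picture definition: a list of drawing commands that the display
# processor cycles through 30-60 times per second.
display_file = [
    ("MOVE", 10, 10),    # position the beam without drawing
    ("LINE", 100, 10),   # draw from the current position to (100, 10)
    ("LINE", 100, 80),
    ("LINE", 10, 10),    # close the triangle
]

def refresh(display_file):
    """One refresh cycle: execute every command in order. Afterwards the
    system cycles back to the first command in the list."""
    pen = (0, 0)
    lines_drawn = 0
    for cmd, x, y in display_file:
        if cmd == "LINE":
            lines_drawn += 1   # hardware would trace pen -> (x, y) here
        pen = (x, y)
    return lines_drawn

print(refresh(display_file))   # 3 lines redrawn every cycle
```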
Fig 1.11a Random scan display system
Fig1.11b Block Diagram of Functions performed by Random Scan Systems
Fig 1.11c Random Scan Drawing
Shadow mask CRT: Color CRTs are more complicated than the simple monochrome
models described above. The phosphors on the face of a color CRT are laid out in a
precise geometric pattern. There are two primary variations: the stripe pattern of
in-line tubes, shown on the left, and the delta pattern of delta tubes, shown on the
right. Within the neck of the CRT there are three electron guns, one each for red,
green, and blue (the beams themselves are, of course, invisible). There is also a
special metal plate just behind the phosphor-covered front face, called a shadow
mask. This mask is aligned so that each electron beam can reach only the phosphors
of its assigned color, while the phosphors of the remaining two colors are blocked.
Fig 1.13a stripe and delta patterns used in a shadow mask CRT
Fig 1.13b Shadow mask CRT
A significant portion of the electron beam's energy strikes the mask rather than the
phosphors. This has two side effects. First, the shadow mask has to be extremely
rigid to stay aligned with the phosphor patterns on the CRT surface. Second, the
collision of electrons with the metal mask causes the mask to emit some of its
absorbed energy as electromagnetic radiation. Most of this energy is in the form of
heat, but some fraction is emitted as X-rays, which presents a health hazard. This
wasn't a problem
for television because the intensity of the X-ray radiation falls off quickly as you move
away from the screen. However, computer monitors are supposed to be viewed from
a short distance. This health concern along with the high voltages and power
dissipations of CRTs has motivated the development of new display technologies.
Direct View Storage Tube (DVST):
In direct view storage tube display, the picture information is stored inside the CRT
instead of refreshing the screen. A line or character will remain visible until erased. A
Direct View Storage Tube (DVST) stores the picture information as charge distribution
just behind the phosphor-coated screen. Two electron guns are used in a DVST: the
primary gun stores the picture pattern, while the flood gun maintains the picture
display.
The writing beam is not directed onto the phosphor directly; instead it writes onto a
wire mesh grid, coated with a dielectric and mounted just behind the screen. A
pattern of positive charge is deposited on the grid, and this pattern is transferred to
the phosphor by a continuous flood of electrons issuing from the separate flood gun.
Just behind the storage mesh is a second grid, the collector, whose main purpose is
to smooth out the flow of flood electrons. These electrons pass through the collector
at low velocity and are attracted toward the positively charged portions of the
storage mesh. Electrons not repelled by the storage mesh pass right through it and
strike the phosphor.
Fig.1.14 Direct View Storage Tube (DVST)
In a DVST display, the image does not need refreshing. Using a DVST, very complex
pictures can be displayed at high resolution without flicker. The main drawback of
the DVST display is that selected portions of a picture cannot be erased: to modify
any part of the picture, the entire screen must be erased and the modified picture
redrawn. DVST displays also cannot display color.
Beam Penetration CRT: A beam-penetration CRT is a method for displaying color on
random-scan monitors. Two layers of phosphor, usually red and green, are coated
onto the inside of the screen, and the displayed color depends on how far the
electron beam penetrates into the phosphor layers. The penetration depth is
controlled by the beam acceleration voltage: a slow beam excites only the outer red
layer, a fast beam penetrates through to the inner green layer, and intermediate
beam speeds produce combinations such as orange and yellow. Only these four
colors are typically available, and picture quality is poorer than that of
shadow-mask CRTs.
1.4.2 Flat Panel Displays:
Flat panel display devices require much less space, weigh less, and have lower power
requirements than CRTs. Flat panel displays are used in small TV monitors,
calculators, pocket video games, laptop computers, palmtops, etc. Flat panel displays
are classified into two categories: emissive displays and non-emissive displays.
Emissive displays:
Emissive displays are devices that convert electrical energy into light. Examples are:
Plasma panels.
Electroluminescent displays.
Light emitting diodes.
Plasma panels:
Plasma panels are also called gas-discharge displays. They are constructed by filling
the region between two glass plates with an inert gas. A series of horizontal
electrodes is placed on one glass panel, and a set of vertical electrodes is placed on
the other. The two sheets of glass are separated by a few thousandths of an inch,
and each electrode intersection defines a tiny cell. A cell is made to glow by placing
a firing voltage across the corresponding x and y electrodes, ionizing the gas in that
cell; in this way the required image can be generated. Because the gas within a cell
discharges rapidly, a high-frequency alternating voltage, called the sustaining
voltage, is applied across the electrodes to keep the cell lit. The cell's glow can be
switched off by lowering the sustaining voltage. Thus a plasma panel allows both
selective writing and selective erasure. It produces a very steady, flicker-free image
and is a less bulky device than a CRT. Its main disadvantages are relatively poor
resolution, about 60 dots per inch, and its complex addressing and writing
mechanisms.
Fig. 1.15 Plasma Panel Display
Electroluminescent displays:
Thin-film electroluminescent displays are similar in construction to plasma panels.
The difference is that the region between the glass plates is filled with a phosphor,
such as zinc sulfide doped with manganese, instead of a gas. When a sufficiently high
voltage is applied to a pair of crossing electrodes, the phosphor becomes a conductor
in the area where the two electrodes intersect and emits light there.
Electroluminescent displays require more power than plasma panels, and good color
and gray-scale displays are hard to achieve.
Light emitting diodes:
Light-emitting diode (LED) displays consist of diodes arranged in the form of a
matrix. Each diode at the intersection of a row and a column defines a pixel position.
The intensity values of the pixels are stored in a refresh buffer. Information is read
from the refresh buffer and converted to voltage levels that are applied to the diodes
to produce the light patterns of the display.
Non-emissive displays:
Non-emissive displays use optical effects to convert sunlight or light from another
source into graphic patterns. Liquid crystal displays fall into this category.
Liquid Crystal Displays (LCDs):
Liquid crystal display is a flat panel display technology, which makes it less bulky
than cathode ray tube technology. These displays are commonly used in Laptops,
calculators and other small systems. Liquid crystal displays produce a picture by
passing polarized light from the surroundings or from an internal light source through
a liquid crystal material.
The LCD is made up of six layers. The front layer is a vertical polarizer plate. Second
is a layer with thin vertical grid wires electrodeposited on the surface adjoining the
crystals. Third is a thin liquid-crystal layer, and fourth is a layer with horizontal grid
wires on the surface next to the crystals. Then comes a horizontal polarizer, and
finally a reflector.
Fig. 1.16a Layers of a Liquid crystal display
Currently, the most popular alternative to the CRT is the Liquid Crystal Display (LCD).
Liquid crystals are organic molecules that, in the absence of external forces, tend to
align themselves in crystalline structures; when an external force is applied, they
rearrange themselves as if they were a liquid. Some liquid crystals respond to heat,
others to electromagnetic forces.
When liquid crystals are used as optical (light) modulators, they actually change
polarization rather than transparency. In their unexcited or crystalline state, the
crystals rotate the polarization of light by 90 degrees. In the presence of an electric
field, they behave like a liquid and align the small electrostatic charges of their
molecules with the impinging E field.
Fig 1.16b &c
The LCD's transition between crystalline and liquid states is a slow process, which
has both good and bad side effects. Liquid crystals, like phosphors, remain "on" for
some time after the E field is applied. Thus the image is persistent, like a CRT's, but
only until the crystals realign themselves; as a result, the display must be constantly
refreshed, just like a CRT.
Fig 1.17 LCD Construction
Rather than generating light like CRTs, LCDs act as light valves. Therefore, they are
dependent on some external light source. In the case of a transmissive display,
usually some sort of backlight is used. Reflective displays take advantage of the
ambient light. Thus, transmissive displays are difficult to see when they are
overwhelmed by external light sources, whereas reflective displays can't be seen in
the dark.
The LCDs themselves have extremely low power requirements: only a very small
electric field is required to excite the crystals into their liquid state. Most of the
energy used by an LCD display system goes into the backlighting. In scanned
displays with a large number of pixels, the fraction of the time that any given crystal
is excited is very small. Thus the crystals spend most of their time in intermediate
states, being neither "on" nor "off". This behavior is characteristic of passive
displays, and we might notice that such displays are not very sharp and are prone to
ghosting.
1.5 LOGICAL AND PHYSICAL INTERACTIVE DEVICES:
Input devices allow us to communicate with the computer; using these devices one
can feed information in. The most commonly used input devices are:
Keyboard.
Mouse.
Digitizer.
Track ball.
Light pen.
Joystick.
Image scanners.
Touch panels.
Keyboard: The keyboard is primarily used for entering text. Keyboards can also be
provided with features to facilitate entry of screen coordinates, menu selections or
graphic functions.
An alphanumeric keyboard on a graphics system is used as a device for entering text
strings. Cursor-control keys and function keys allow the user to invoke frequently
used operations with a single keystroke, and can be used to select displayed objects
or coordinate positions by positioning the screen cursor. Other cursor-positioning
devices, such as a trackball or joystick, are included on some keyboards.
Additionally, a numeric keypad is often included on the keyboard for fast entry of
numeric data.
For special applications, input to a graphics application may come from a set of
buttons, dials, or switches that select data values or customized graphics operations.
Mouse: The mouse is an integral part of the graphical user interface of any software
application. A cursor in the shape of an arrow or crosshair is always associated with
a mouse. Wheels or rollers on the bottom of the mouse can be used to record the
amount and direction of movement. Another method for detecting mouse motion is
with an optical sensor: the mouse is moved over a special mouse pad that has a grid
of horizontal and vertical lines, and the optical sensor detects movement across the
lines in the grid.
Since a mouse can be picked up and put down at another position without any
change in the cursor position, it is used for making relative changes in the position
of the screen cursor. One, two, or three buttons are usually included on top of the
mouse for signaling the execution of some operation, such as recording cursor
position or invoking a function.
Fig 1.18 Keyboard and Mouse
Digitizer: A digitizer can be used to input coordinate values in either two-
dimensional or three-dimensional space. Typically, a digitizer is used to scan over a
drawing or object and to input a set of discrete coordinate positions, which can be
joined with straight-line segment to approximate the curve or surface shapes. One
type of digitizer is the graphics tablet, which is used to input two-dimensional
coordinates by activating a hand cursor or stylus at selected positions on a flat
surface. A hand cursor contains crosshairs for sighting positions, while a stylus is a
pencil-shaped device that is pointed at positions on the tablet.
Many graphics tablets are constructed with a rectangular grid of wires embedded in
the tablet surface. Electromagnetic pulses are generated in sequence along the
wires, and an electric signal is induced in a wire coil in an activated stylus or hand
cursor to record a tablet position. Acoustic tablets use sound waves to detect a stylus
position. Either strip microphones or point microphones can be used to detect the
sound emitted by an electrical spark from a stylus tip. Three-dimensional digitizers
use sonic or electromagnetic transmissions to record positions.
Track ball: A trackball is a ball that can be rotated with the fingers or the palm of the
hand to produce screen-cursor movement. Potentiometers attached to the ball
measure the amount and direction of rotation. A trackball is a sort of inverted
mouse, in which the ball is held inside a rectangular box; as with a mouse, the
position of the cursor on the screen is tied to the rotation of the ball.
A track ball is a two-dimensional positioning device; a Space ball provides six degrees
of freedom. Unlike the track ball, a space ball does not actually move. Strain gauges
are used to pick up the pressure applied to the space ball to provide input for spatial
positioning and orientation as the ball is pushed or pulled in various directions. Space
balls are used for three-dimensional positioning and selection operations in virtual-
reality systems, modeling, animation, CAD and other applications.
Fig 1.19 The track ball
Light Pen: The light pen is a pointing device. It has a light-sensitive tip, which is
excited by the light emitted when an illuminated point on the screen comes into its
field of view. Light pens are sensitive to the short burst of light emitted from the phosphor coating
at the instant the electron beam strikes a particular point. Other light sources, such
as the background light in the room, are usually not detected by a light pen. An
activated light pen, pointed at a spot on the screen as the electron beam lights up
that spot, generates an electrical pulse that causes the coordinate position of the
electron beam to be recorded. As with cursor-positioning devices, recorded light-pen
coordinates can be used to position an object or to select a processing option.
The light pen has several disadvantages compared to other input devices. When a
light pen is pointed at the screen, part of the screen image is obscured by the hand and pen.
And prolonged use of the light pen can cause arm fatigue. Also, light pens require
special implementations for some applications because they cannot detect positions
within black areas. To be able to select positions in any screen area with a light pen,
we must have some nonzero intensity assigned to each screen pixel. In addition, light
pens sometimes give false readings due to background lighting in a room.
Fig 1.20 Light pen
Joystick: A joystick consists of a small, vertical lever mounted on a base that is used
to steer the screen cursor around. Most joysticks select screen positions with actual
stick movement; others respond to pressure on the stick.
The distance that the stick is moved in any direction from its center position
corresponds to screen-cursor movement in that direction. Potentiometers mounted
at the base of the joystick measure the amount of movement, and springs return the
stick to the center position when it is released. One or more buttons can be
programmed to act as input switches to signal certain actions once a screen position
has been selected.
In another type of movable joystick, the stick is used to activate switches that cause
the screen cursor to move at a constant rate in the direction selected. Eight switches
arranged in a circle, are sometimes provided, so that the stick can select any one of
eight directions for cursor movement. Pressure-sensitive joysticks also called
isometric joysticks have a nonmovable stick. Pressure on the stick is measured with
strain gauges and converted to movement of the cursor in the direction specified.
Touch panels: Touch panels allow the screen positions to be selected by touching
the screen with the finger. A touch panel is a transparent plate that fits over the CRT
screen. A typical application of touch panels is for the selection of processing options
that are represented with graphical icons. Touch input can be recorded using optical,
electrical, or acoustical methods. Optical touch panels employ a line of infrared light-
emitting diodes along one vertical edge and one horizontal edge of the frame. The
opposite vertical and horizontal edges contain light detectors. These detectors are
used to record which beams are interrupted when the panel is touched. The two
crossing beams that are interrupted identify the horizontal and vertical coordinates of
the screen positions selected.
An electrical touch panel is constructed with two transparent plates separated by a
small distance. One of the plates is coated with a conducting material, and the other
plate is coated with a resistive material. When the outer plate is touched, it is forced
into contact with the inner plate. This contact creates a voltage drop across the
resistive plate that is converted to the coordinate values of the selected screen
position.
In acoustic touch panels, high-frequency sound waves are generated in the horizontal
and vertical directions across a glass plate. Touching the screen causes part of each
wave to be reflected from the finger to the emitters. The screen position at the point
of contact is calculated from a measurement of the time interval between the
transmission of each wave and its reflection to the emitter.
Plotters: Plotters are used to produce large drawings. The most commonly
encountered plotters use ink pens to generate drawings, but many plotting devices
now employ laser beams, ink-jet sprays, and electrostatic methods.
Drafting layouts and other drawings are typically generated with ink-jet or pen
plotters. A pen plotter has one or more pens mounted on a carriage, or crossbar, that
spans a sheet of paper. Pens with varying colors and widths are used to produce a
variety of shadings and line styles. Wet-ink, ballpoint, and felt-tip pens are all
possible choices for use with a pen plotter. Plotter paper can lie flat or be rolled onto
a drum or belt. Crossbars can be either movable or stationary, while the pen moves
back and forth along the bar. A clamp, vacuum, or an electrostatic charge holds the
paper in position.
Fig 1.21 A desk top pen plotter Fig 1.22 Roll feed pen plotter
Image scanners: An image scanner is used to store drawings, graphics, pictures, or text
for computer processing. Once we have an internal representation of a picture, we can
apply transformations to rotate, scale, or translate it. We can also apply various
image-processing methods to modify the array representation of the picture. For
scanned text input, various editing operations can be performed on the stored
documents. Some scanners are able to scan either graphical representations or text,
and they come in a variety of sizes and capabilities.
CHAPTER: 2 LINE AND CURVE DRAWING ALGORITHMS
2.1 CHARACTERISTICS OF A LINE DRAWING ALGORITHM
Lines should appear straight.
Lines should terminate accurately.
Lines should have equal brightness throughout their length.
Lines density should be independent of line length and angle.
Lines must be drawn rapidly.
The basic idea behind all the line drawing algorithms is to reduce the computations
and provide the result rapidly. There are two well-known classes of line drawing
algorithms:
1. Digital Differential Analyzer (DDA).
2. Bresenham’s Algorithm.
2.2 DIGITAL DIFFERENTIAL ANALYZER (DDA):
This algorithm works on the principle of obtaining the successive pixel values based
on the differential equation governing that line. We know that for a straight line,
dy/dx = Constant
(i.e.) Δy /Δx =(y2-y1)/(x2-x1)
Hence we can obtain value of x or y by using its previous values. The commonly used DDA
techniques for straight-line scan conversion are:
Simple DDA.
Symmetric DDA.
Simple DDA:
In this method, we sample the line at unit intervals in one coordinate and determine
corresponding integer values nearest the line path for the other coordinate. Consider
a line with positive slope as shown in the figure2.1,
Fig 2.1 Positive Slope
If the slope is less than or equal to 1, we sample at unit x intervals (Δx=1) and
compute each successive y value as,
yi+1=yi + m .............................(1)
Subscript i takes integer values starting from 1, for the first point, and increases by 1
until the end point is reached. Since m can be any real number between 0 and 1, the calculated y
values must be rounded to the nearest integer. For lines with a positive slope greater than 1, we
reverse the roles of x and y. That is we sample at unit y intervals (Δy=1) and calculate each
succeeding x value as,
xi+1=xi + 1/m ..............................(2)
The above calculations (1) & (2) are based on the assumption that lines are to be
processed from left end point to the right end point. If processing is reversed , so that
starting end point is at the right, then either we have Δx = -1 and,
yi+1= yi - m .........................………………..(3)
or (when slope is greater than 1), we have, Δy =-1 with
xi+1 = xi - 1/m .......................…………...(4)
These calculations (3) & (4) can also be used to calculate pixel positions along a line
with negative slope.
If the absolute value of the slope is less than 1 and the start end point is at the left, we
set Δx=1 and calculate y values with equation (1). When the start end point is at the
right (for the same slope), we set Δx=-1 and obtain the coordinate positions from
equation (3). Similarly, when the absolute value of a negative slope is greater than 1,
we use Δy = -1 and equation (4), or we use Δy = 1 and equation (2).
Algorithm:
1. Read the two end points (x1, y1) (x2, y2).
2. Approximate the length of the line:
if abs(x2-x1) > abs(y2-y1) then,
length = abs(x2-x1)
else
length = abs(y2-y1)
3. Calculate the incremental values used to generate the next pixel.
xincr = (x2-x1) / length
yincr = (y2-y1) / length
4. Assign initial values to x, y
x = x1
y = y1
5. Plot the pixel at (x, y) position on screen using
Plot (round (x), round (y))
6. Calculate the values of x and y for the next pixel position.
x =x + xincr
y =y + yincr
7. Plot the pixel at the (x, y) position.
8. Repeat steps 6 and 7 until the index reaches the length of the line.
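As a concrete illustration, the steps above can be sketched in Python; plotting is replaced by collecting the rounded pixel coordinates, and the function name is illustrative:

```python
import math

def dda_line(x1, y1, x2, y2):
    """Simple DDA: sample the longer axis at unit intervals (steps 1-8 above)."""
    dx, dy = x2 - x1, y2 - y1
    length = max(abs(dx), abs(dy))            # step 2: approximate line length
    if length == 0:
        return [(x1, y1)]
    xincr, yincr = dx / length, dy / length   # step 3: per-step increments
    pixels = []
    x, y = float(x1), float(y1)
    for _ in range(length + 1):               # steps 5-8
        # round to the nearest integer; floor(v + 0.5) avoids banker's rounding
        pixels.append((math.floor(x + 0.5), math.floor(y + 0.5)))
        x += xincr
        y += yincr
    return pixels
```

For example, dda_line(0, 0, 4, 2) samples along x (the longer axis) and rounds each computed y value to the nearest scan line.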
The simple DDA is as accurate as its symmetrical counterpart but generates a
different sequence of dots because of its different method of estimating line length.
Logically, it is simpler, except for the need to perform an initial division to determine
the incrementing values. The simple DDA is an ideal basis for a software line generator,
but the number of divisions makes it less suited to hardware implementation.
Symmetrical DDA:
The symmetric DDA works on the principle that we simultaneously increment x and y by small
steps proportional to the first derivatives of x and y. In the case of a straight line, the
first derivatives are constant and proportional to Δx and Δy. Thus, in the ideal
case of an infinite-precision display, a line is generated by incrementing x and y by
εΔx and εΔy, where ε is some small quantity. In the real world of limited-precision
displays, we must generate only addressable points. This can be done by rounding to
the nearest integer after each incremental step. After rounding we display a dot at
the resultant x and y.
An alternative to rounding is the use of arithmetic overflow. x and y are kept in registers
that have two parts, integer and fractional. The incrementing values, which are both
less than 1, are repeatedly added to the fractional part, and whenever the result
overflows, the corresponding integer part is incremented. This would normally have
the effect of truncating rather than rounding, so we initialize the DDA with the value
0.5 in each fractional part to achieve true rounding. One advantage of this
arrangement is that it allows us to detect changes in x and y and avoids plotting the
same point twice.
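A sketch of this register arrangement, assuming 16 fractional bits and a caller-supplied step count (in the classical hardware form the step count is a power of two at least as large as max(|Δx|, |Δy|)):

```python
FRAC_BITS = 16                  # fractional precision of each register
ONE = 1 << FRAC_BITS

def symmetric_dda(x1, y1, x2, y2, steps):
    """Integer/fraction registers; overflow of the fraction bumps the integer part."""
    # initialize each fractional part with 0.5 to get true rounding
    x = (x1 << FRAC_BITS) + ONE // 2
    y = (y1 << FRAC_BITS) + ONE // 2
    xincr = (x2 - x1) * ONE // steps        # both increments are less than 1
    yincr = (y2 - y1) * ONE // steps
    pixels = []
    for _ in range(steps + 1):
        px, py = x >> FRAC_BITS, y >> FRAC_BITS  # integer parts = pixel address
        if not pixels or pixels[-1] != (px, py): # avoid plotting a point twice
            pixels.append((px, py))
        x += xincr
        y += yincr
    return pixels
```

Because the integer part changes only when the accumulating fraction overflows, detecting a change in the integer part is exactly the duplicate-point test mentioned above.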
Fig 2.2 Symmetrical DDA
2.3 BRESENHAM’S ALGORITHM:
An accurate and efficient raster line-generating algorithm was developed by
Bresenham. To illustrate Bresenham’s approach, we first consider the scan
conversion process for lines with positive slope less than 1. Pixel positions along a
line path are then determined by sampling at unit x intervals. Starting from the left
end point (x0,y0) of a given line , we step to each successive column (x position) and
plot the pixel whose scan line y value is closest to the line path. Assuming we have
determined that the pixel at (xk, yk) is to be displayed, we next need to decide which
pixel to plot in column xk+1. Our choices are the pixels at positions (xk+1,yk),
(xk+1,yk+1). At sampling position xk+1, we label vertical pixel separations from the
mathematical line path as d1 and d2. The y coordinate on the mathematical line at
pixel column xk+1 is calculated as,
y = m(xk+1)+b
then d1=y-yk = m(xk+1)+b-yk
and d2=(yk+1)-y = yk+1-m(xk+1)-b
The difference between these two separations is, d1-d2 = 2m(xk+1) - 2yk + 2b - 1. Here
we consider a decision parameter Pk for the kth step in the line algorithm as,
Pk = Δx(d1-d2)
   = 2Δy.xk - 2Δx.yk + c
The (k+1)th step gives, Pk+1 = Pk + 2Δy - 2Δx(yk+1 - yk)
where the term yk+1 - yk is either 0 or 1, depending on the sign of parameter Pk.
The first parameter P0 is evaluated at the starting pixel position as, P0 = 2Δy - Δx.
Fig. 2.3 Distances d1 and d2 between pixel positions and the line at sampling position xk+1
Bresenham’s Line-Drawing Algorithm for m < 1:
1. Input the two line end points and store the left end point in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate constants Δx, Δy, 2Δy and 2Δy-2Δx, and obtain the starting value for the
decision parameter as P0 = 2Δy - Δx.
4. At each xk along the line, starting at k=0, perform the following test:
If Pk < 0, the next point to plot is (xk+1, yk) and Pk+1 = Pk + 2Δy
Otherwise, the next point to plot is (xk+1, yk+1) and Pk+1 = Pk + 2Δy - 2Δx
5. Repeat step 4 Δx times.
Fig 2.4 Pixel positions along a line path
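A minimal runnable sketch of the algorithm above for 0 ≤ m < 1, processed left to right as in the text (the function name is illustrative):

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham's line algorithm for slopes between 0 and 1, left to right."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx                 # starting decision parameter P0 = 2Δy - Δx
    x, y = x0, y0
    pixels = [(x, y)]
    for _ in range(dx):             # repeat the test Δx times
        x += 1
        if p < 0:                   # next point is (x+1, y)
            p += 2 * dy
        else:                       # next point is (x+1, y+1)
            y += 1
            p += 2 * dy - 2 * dx
        pixels.append((x, y))
    return pixels
```

Note that only integer additions and subtractions are needed per step, which is exactly the efficiency argument made above.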
2.4 CIRCLE GENERATION ALGORITHM:
A circle is defined as the set of points that are at a given distance r from a center
position (xc, yc). The circle can be drawn using the symmetry property:
if a calculated point is (x, y), then we can obtain 8 different points as,
Fig 2.5 8-point Symmetry of A Circle
Bresenham’s Circle Drawing Algorithm:
This is the most efficient method of drawing the circle since,
It avoids the use of trigonometric and power functions
Performs the computations with integers and simple additions, subtractions
It works on the basic principle of obtaining the next pixel value by means of a decision
variable, and it generates the circle with the 8-point symmetry property. At each step, when
we obtain a new pixel point there are 3 possibilities:
The point may be on the circle
The point may be outside the circle
The point may be within the circle
Let p1 be a point that has already been plotted correctly. The next computed point
may be p2 or p3. Moving from p1 we always increment the value of x by one; the
question is whether y should stay the same (giving p2) or be decremented (giving p3).
The decision variable tells us on which side of the circle the candidate point lies.
If the newly computed point is going to be outside the circle, then we increment the x
value and decrement the y value of the previously plotted point, pulling the next
pixel back toward the circle; if it is going to be inside, we increment x alone.
Fig. 2.6 A Circle
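The text derives the circle decision informally; a closely related, standard formulation is the midpoint circle algorithm, sketched below. The decision-variable constants follow the common midpoint derivation and are an assumption on my part, not taken from the text:

```python
def midpoint_circle(xc, yc, r):
    """Midpoint circle: compute one octant and mirror it with 8-point symmetry."""
    points = set()
    x, y = 0, r
    d = 1 - r                           # initial decision variable
    while x <= y:
        # 8-point symmetry about the centre (xc, yc)
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((xc + px, yc + py))
        if d < 0:                       # midpoint inside: keep y, step x
            d += 2 * x + 3
        else:                           # midpoint outside: step x, decrement y
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return points
```

As in the text's argument, whenever the decision variable says the candidate pixel would fall outside the circle, x is incremented and y is decremented together.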
2.5 ELLIPSE GENERATION ALGORITHM:
The ellipse, like the circle, shows symmetry. However, the symmetry is 4-way rather
than 8-way. There are two methods of mathematically defining an ellipse:
Polynomial defining an ellipse
Trigonometric method of defining an ellipse
Fig. 2.7 4-point Symmetry of an Ellipse
Bresenham’s Ellipse Generation Algorithm:
This is an incremental method for scan converting an ellipse that is centered at the
origin in standard position. It works in a way that is very similar to the midpoint circle
algorithm. However, because of the 4-way symmetry property, we need only consider
the elliptical curve in the first quadrant. We define a function f, which can be
used to decide if the midpoint between two candidate pixels is inside or outside the
ellipse.
f(x,y) = b²x² + a²y² - a²b² = k (say)
if k < 0 then (x,y) lies inside the ellipse
if k = 0 then (x,y) lies on the ellipse
if k > 0 then (x,y) lies outside the ellipse
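This sign test translates directly into code; here a and b are the semi-axes, and the function names are illustrative:

```python
def ellipse_f(x, y, a, b):
    """f(x,y) = b²x² + a²y² - a²b²: negative inside, zero on, positive outside."""
    return b * b * x * x + a * a * y * y - a * a * b * b

def classify(x, y, a, b):
    """Classify a point against the ellipse centred at the origin."""
    k = ellipse_f(x, y, a, b)
    return "inside" if k < 0 else "on" if k == 0 else "outside"
```

For an ellipse with a = 4 and b = 2, the point (4, 0) lies on the curve, the centre lies inside, and (0, 3) lies outside.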
2.6 SCAN CONVERSION AND GENERATION OF DISPLAY:
In order to display the rasterized image using video techniques, it is necessary to
organize the picture into the precise pattern required by the graphic display. This
process is called scan conversion. In contrast to the display list for a random-scan or
line-drawing display, which contains only information about lines or characters, in a
raster scan the display list must contain the whole frame, organized and presented at
the appropriate frame rate. Three ways of accomplishing this are,
Real Time Scan Conversion.
Run length Encoding
Frame buffer memory
2.7 ALIASING AND ANTIALIASING:
Aliasing: Many raster displays allow only two pixel states (on/off). For these
displays, lines may have a jagged or staircase appearance when they step from one
row or column to the next. This phenomenon is called aliasing. Fundamentally, the
aliasing effect appears because lines, polygon edges, color
boundaries, etc., are continuous, whereas a raster device is discrete. Aliasing is a fact
of any computer image, which depends on a grid of a finite number of pixels in the
horizontal and vertical directions.
Fig.2.8 Aliasing Effect
Antialiasing:
Minimizing aliasing effect is called antialiasing. One approach to reduce aliasing
effect is increasing the resolution. Other generally used antialiasing techniques are:
Area sampling
Super sampling
Pixel phasing
In the area sampling technique, we superimpose a pixel grid pattern onto the continuous
object definition. In the super sampling method, we subdivide every pixel into sub pixels,
and each sub pixel is tested against the object to be scan converted. In the pixel phasing
method, the graphic system is capable of shifting individual pixels.
2.8 IMAGE COMPRESSION:
Two Typical techniques for image compression are run-length encoding and area
image compression. Both take advantage of image coherence.
Run length Encoding:
Run-length encoding seeks to take advantage of the fact that large areas of the
picture have the same intensity or color. In its simplest form, run-length encoding
specifies only the number of successive pixels on a given scan line with a given
intensity. Fig 2.9 shows a simple monochrome (black and white) line drawing on a
30×30 raster and the associated encoding for scan lines 1 and 30. The encoded data
is to be considered in groups of two. The first is the number of successive pixels on
that scan line with a given intensity, i.e., the count, and the second is the intensity.
Thus, in fig 2.9(a), scan line 1 has 30 pixels of zero intensity, i.e., white or the
background intensity. The complete picture is encoded with 208 numbers.
Fig 2.9 (a) A line drawing on a 30×30 raster
Fig 2.9 (b) Run-length encoded image
Pixel-by-pixel storage, i.e., one piece of information for each pixel, requires 900
intensity values for the 30×30 raster of fig 2.9(a). The data compression using run-
length encoding in this case is 1:4.33, or 0.231.
Solid figures are easily handled with run-length encoding. This is shown in fig 2.9(b),
along with the encoding for scan lines 1, 15, and 30. Of particular interest is scan line 15.
For fig 2.9(b) the entire picture is encoded using 136 numbers, for a data compression
of 1:6.62, or 0.151.
This simple run-length encoding scheme is easily extended to include color. For color,
the intensity of each of the red, blue and green color guns is given, preceded by the
number of successive pixels for that color on that scan line.
Run length Red intensity Green intensity Blue intensity
E.g., for a simple color display in which each individual color gun is either completely
off (0) or fully on (1), the encoding for scan line 15 of fig 2.9(b), with a yellow on a
blue background, is
15 0 0 1 7 1 1 0 8 0 0 1
Good run-length encoding schemes identify situations that result in negative data
compression and adaptively switch to pixel-by-pixel storage. A simple adaptive run-
length encoding scheme uses the sign of the number of encoded pixels to switch
between run-length encoding and pixel-by-pixel storage. Specifically, if the sign of
the count is negative, then the pixel-by-pixel data follows; otherwise run-length
encoded data follows. In both cases the magnitude of the count indicates the number
of pixel intensities, or triplets of RGB intensities, that follow. For example,
-3 1 0 0 0 1 0 1 0 0 6 1 2 6
defines three pixels with RGB values of (1,0,0), (0,1,0), and (1,0,0), followed by six pixels
with RGB values of (1,2,6).
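The basic (count, intensity) scheme described above is easy to sketch in code; the encoder below handles a single scan line and the decoder inverts it (function names are illustrative):

```python
def rle_scanline(pixels):
    """Encode one scan line as alternating (count, intensity) pairs."""
    encoded = []
    run, value = 1, pixels[0]
    for p in pixels[1:]:
        if p == value:
            run += 1                    # extend the current run
        else:
            encoded.extend([run, value])
            run, value = 1, p           # start a new run
    encoded.extend([run, value])
    return encoded

def rle_decode(encoded):
    """Expand (count, intensity) pairs back into a scan line."""
    pixels = []
    for i in range(0, len(encoded), 2):
        pixels.extend([encoded[i + 1]] * encoded[i])
    return pixels
```

A 30-pixel background scan line encodes as just [30, 0], matching the scan line 1 example in the text.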
Area Image Compression:
Area image compression techniques attempt to take advantage of area
coherence, i.e., the tendency of pixels in adjacent areas to have the same
characteristic. Area coherence fundamentally treats the image as 2D. Consider the
image in fixed-size pixel blocks taken in scan-line order. Although pixel blocks can be
of any reasonable size, e.g., 2×2, 4×4, 8×8, 16×16, 32×32, etc., we concentrate here
on 8×8 pixel blocks.
First we determine the average intensity value in each 8×8 block by adding all the
intensity values, dividing by 64, and truncating to obtain an integer. The
average intensity is then subtracted from each pixel intensity to yield the differential
image. The run-length encoding scheme discussed in the previous section is then
applied to each row of the differential image.
2.9 HALF TONING:
Many displays and hard-copy devices are bi-level: they can produce only two
intensity levels. On such devices we can create an apparent
increase in the number of available intensities by incorporating
multiple pixel positions into the display of each intensity value. The eye, viewing a
very small area, records only the overall intensity of that area. This phenomenon of
an apparent increase in the number of available intensities obtained by combining the
intensities of multiple pixels is known as halftoning. Halftoning is commonly used
in printing black-and-white photographs in newspapers, magazines, and books. The
pictures produced by the halftoning process are called halftones.
In computer graphics, half tone reproductions are approximated using rectangular
pixel regions; say 2 × 2 pixels or 3 × 3 pixels. These regions are called halftone
patterns or pixel patterns. The following figures show the halftone patterns to create
number of intensity levels.
Fig. 2.10a 2×2 Pixel Patterns for creating five Intensity Levels
Half toning technique can also be used to increase the number of intensity options on
systems that are capable of displaying more than two intensities per pixel. The
following figure shows 3×3 pixel patterns for creating 10 intensity levels.
Fig. 2.10b 3×3 Pixel Patterns for creating ten Intensity Levels
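A sketch of how 2×2 patterns yield five apparent levels: pattern k simply turns on k of the four sub-pixels. The specific on/off layouts below are one common choice and are not necessarily the exact patterns shown in the figure:

```python
# 2x2 halftone patterns: pattern k turns on k of the 4 sub-pixels,
# giving 5 apparent intensity levels (0..4) on a bi-level device
PATTERNS = [
    [[0, 0], [0, 0]],   # level 0
    [[0, 1], [0, 0]],   # level 1
    [[0, 1], [1, 0]],   # level 2
    [[1, 1], [1, 0]],   # level 3
    [[1, 1], [1, 1]],   # level 4
]

def halftone(image):
    """Replace each 5-level pixel (0..4) with its 2x2 bi-level pattern."""
    out = []
    for row in image:
        top, bottom = [], []
        for v in row:
            top.extend(PATTERNS[v][0])
            bottom.extend(PATTERNS[v][1])
        out.append(top)
        out.append(bottom)
    return out
```

The cost of the extra levels is resolution: the output raster is twice as wide and twice as tall as the input, which is why halftone patterns are kept small.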
CHAPTER 3: POLYGONS
3.1 INTRODUCTION:
A polygon is a figure with many sides. A polygon can be represented as a group of
connected edges, forming a closed figure. The line segments that form the
boundary of a polygon are called edges or sides. The point of intersection of two
edges is called a vertex of the polygon. Polygons can be divided into two types,
namely convex polygons and concave polygons.
Fig 3.1 Polygons
Convex Polygon: A Convex polygon is a polygon in which the line segment joining
any two points within the polygon lies completely inside the polygon.
Concave Polygon: A Concave polygon is a polygon in which the line segment
joining any two points within the polygon may not lie completely inside the polygon.
Regular polygons are those in which all edges have equal length and the
angles between any two connected edges are the same. Irregular polygons are those
in which the edge lengths differ or the angles between
connected edges differ.
Fig 3.2a Convex Polygons Fig 3.2b Concave Polygon
3.2 POLYGON REPRESENTATION:
The information required to specify a polygon is the number of sides and the
coordinates of the vertex points. To add polygons to our graphics system, we must
decide how to represent them. There are three approaches to representing polygons,
depending on the graphics system:
Polygon drawing primitive
Trapezoidal Primitive
Lines and Points
Some graphic devices support the polygon drawing primitive approach. They can directly
draw polygon shapes, and on such devices polygons are saved as a unit. Some
graphic devices support the trapezoid primitive. In such devices, trapezoids are formed
from two scan lines and two line segments as shown in the figure. Here trapezoids are
drawn by stepping down the line segments with two vector generators and, for each
scan line, filling all the pixels between them. Therefore every polygon is broken up into
trapezoids and represented as a series of trapezoids.
Fig 3.3 Representation of Polygon
Many other graphic devices do not provide any polygon support at all. In such cases,
polygons are represented with lines and points. A polygon is represented as a unit and stored in
the display file. In the display file, a polygon cannot be stored as a series of line
commands alone, because line commands do not specify how many of the following
commands are part of the polygon. Therefore a new command is used in the display file to
represent polygons. The opcode of the new command itself specifies the number of line
segments in the polygon. The following figure shows a polygon and its representation.
Fig 3.4 Polygon and its Representation using Display File
Entering Polygons:
The polygon can be drawn by moving the pen to the first vertex (x1, y1), then
moving to the second vertex and so on, up to the nth vertex. Finally, we draw a
line from the last vertex back to the first vertex.
Fig 3.5 Polygon Drawing
3.3 INSIDE OUTSIDE TEST:
Once the polygon is entered in the display file, we can draw the outline of the polygon.
To show the polygon as a solid object, we have to brighten all of its inside pixels. The
question is how to determine whether a pixel is inside or outside the polygon.
There are two methods of finding out whether a pixel is inside the polygon or outside
the polygon: 1) the Even-Odd method and 2) the Winding Number method.
Even-Odd Method:
To test whether a pixel is inside or outside the polygon, construct a line segment
between the point in question and a point known to be outside the polygon.
To find a point outside the polygon, select a point with x coordinate smaller than the
smallest x coordinate of the polygon’s vertices. Now count how many intersections of
the line segment with the polygon boundary occur. If there is an odd number of
intersections, then the point in question is inside; otherwise it is outside.
Fig 3.6 Even – Odd Test
If an intersection point is a vertex of the polygon, then we have to look at the other
end points of the two segments that meet at this vertex. If these points lie on the same
side of the constructed line, then the point in question counts as an even number of
intersections. If they lie on opposite sides of the constructed line, then the point is
counted as a single intersection (odd).
Fig 3.7 Count of vertex Intersections
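In practice the even-odd test, including the vertex rule above, is commonly implemented with a half-open edge rule that makes the vertex cases fall out automatically. A sketch, casting the test ray toward smaller x as the text suggests (the function name is illustrative):

```python
def inside_even_odd(px, py, vertices):
    """Even-odd test: count edge crossings of a ray cast toward smaller x."""
    inside = False
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        # half-open rule: edge counts only if it straddles the line y = py,
        # which handles ray-through-vertex cases consistently
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross < px:        # crossing lies on the leftward ray
                inside = not inside
    return inside
```

Each qualifying crossing toggles the inside flag, so an odd number of crossings leaves the point classified as inside, exactly as described above.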
Winding Number Method:
In this method, we stretch a piece of elastic between the point in question and a
point on the polygon boundary. The end attached to the polygon is slided along the
boundary until it has made one complete round. Then examine the point in question
to see how many times the elastic has wound around it. If it has wound at least once,
then the point is inside. If there is no net winding then the point is outside. The
winding number for a point is calculated by picturing a line segment running from
outside the polygon to the point in question and considers the polygon sides, which it
crosses.
Fig 3.9 Calculating winding Numbers
3.4 Polygon filling:
Highlighting all inside pixels of a polygon is called polygon filling. There are two basic
approaches used to fill the polygon: Seed filling and Scan line filling.
3.4.1 Seed Filling (Area or Region filling):
In this method, we fill a polygon by starting from a given known point (called the seed
point) and highlighting outward from this point (i.e., brightening neighboring pixels) until
we encounter the boundary pixels. The seed fill algorithms are classified as:
1) Flood fill algorithm and 2) Boundary fill or Edge fill
Flood fill: Sometimes it is required to fill in an area that is not defined within a
single color boundary. In such cases we can fill the area by replacing a specified interior
color instead of searching for a boundary color. This approach is called the flood fill
algorithm.
Algorithm:
void floodfill (int x, int y, int fillcolor, int oldcolor)
{
    if (getpixel(x, y) == oldcolor)
    {
        putpixel(x, y, fillcolor);   /* recolor the current pixel */
        floodfill(x+1, y, fillcolor, oldcolor);
        floodfill(x-1, y, fillcolor, oldcolor);
        floodfill(x, y+1, fillcolor, oldcolor);
        floodfill(x, y-1, fillcolor, oldcolor);
    }
}
Boundary fill: In the boundary fill algorithm, we start at a point inside the interior
of the polygon and move toward the boundary. Boundary-defined regions may be
either 4-connected or 8-connected.
(a) Start position (b)
Fig 3.10 area defined within the color boundary (a) is partially filled in (b) using a 4-connected
Boundary-fill algorithm.
Algorithm:
void boundaryfill (int x, int y, int fill, int boundary)
{
    int current;
    current = getpixel(x, y);
    if ((current != boundary) && (current != fill))
    {
        putpixel(x, y, fill);        /* recolor the current pixel */
        boundaryfill(x+1, y, fill, boundary);
        boundaryfill(x-1, y, fill, boundary);
        boundaryfill(x, y+1, fill, boundary);
        boundaryfill(x, y-1, fill, boundary);
    }
}
3.4.2 Scan Line Filling:
Seed fill algorithms are well suited to small polygons; they cannot be used for large
polygons, because the recursive procedure must stack the neighboring points and the
stack space may not be sufficient. Another difficulty with seed fill algorithms is that
if some inside pixels are already displayed in the fill color, the recursive branch terminates,
leaving further internal pixels unfilled. To overcome these difficulties, we consider a
more efficient method, which fills horizontal pixel spans across scan lines instead of
proceeding to 4-connected neighboring points. For each scan line crossing a polygon,
this algorithm locates the intersection points of the scan line with the polygon edges.
These intersection points are then sorted from left to right, and the corresponding
positions between each intersection pair are set to the specified fill color. The scan
line algorithm first finds the largest and smallest y value of the polygon. It then starts
with largest y value and works its way down, scanning from left to right.
Fig 3.11 Filling of interior pixels along the scan line
The main task in the scan line algorithm is to find the intersection points of the scan line
with the polygon boundary. When the intersections are even in number, they are sorted from left
to right, paired, and the pixels between paired points are set to the fill color. When a scan line
intersects a polygon vertex, we must look at the other end points of the two line
segments of the polygon that meet at this vertex. If these points lie on the same
side of the scan line, then the point in question counts as an even number of
intersections. If they lie on opposite sides of the scan line, then the point is counted
as a single intersection. This is shown in the following figure.
Fig 3.12 Intersection points along the scan lines
Scan line 1 passes through a vertex of the polygon, and the other end points of the two
edges meeting at that vertex lie on the same side of the scan line, so the vertex counts
as an even number of intersections. The intersections 1, 2, 3, 4 can then
be paired to correctly identify the interior pixels. Scan line 2 also generates an even
number of intersections, paired as 1-2 and 3-4. Scan line 3 also passes through a vertex
of the polygon, but here the other end points lie on opposite sides of the scan line, so
that vertex counts as a single intersection. The pairs are again 1-2 and 3-4.
Now we have to calculate the x-intersection points of the scan line with every polygon
side. We can simplify these calculations by using coherence properties. A coherence
property allows us to relate one part of a scene to another part of the scene. Using this
property, we can determine the x-intersection value on the next lower scan line from the
x-intersection value for the current scan line. This is given as,
m = (yi+1 - yi) / (xi+1 - xi)
xi+1 = xi - (1/m)
As we scan from top to bottom, the y coordinate between the two scan
lines changes by one unit:
yi+1 = yi - 1
Fig 3.13 Polygon sides intersecting a scan line
Many times it is not necessary to compute x-intersections of the scan line with every
polygon side. We need to consider only the polygon sides that intersect the current
scan line, as shown in the figure above. It is easier to identify which polygon sides should
be tested for x-intersection if we first sort the sides in order of their maximum y value.
Once the sides are sorted, we can process the scan lines from the top of the polygon to its
bottom, producing an active edge list for each scan line crossing the polygon
boundary. The active edge list for a scan line contains all edges crossed by that
scan line. The following figure shows the sorted edges of the polygon with active edges.
Sorted edges (top to bottom): DC DE BC BA AI GF GH EF IH
Fig 3.14 Sorted edges of the polygon with active edges.
Algorithm:
1. Read the number of vertices.
2. Store vertices of a polygon in the array x[n] and y[n].
3. Find largest y value as ymax and lowest y value as ymin.
4. Store the initial values x1, y1 and final values x2, y2 for the two endpoints, and the x-increment
Δx, for each edge in the array edges[n][4]. While doing this, check that y1 > y2; if not, interchange
y1 and y2 and the corresponding x1 and x2, so that for each edge y1 represents its maximum y-
coordinate and y2 represents its minimum y-coordinate.
5. Sort the rows of the array edges[n][4] in descending order of y1, descending order of y2, and
ascending order of x2.
6. Assign ymax to y (y = ymax).
7. If (y > y2) and (y <= y1) then the edge is active.
Otherwise, the edge is not active.
8. Compute the x-intersects for all active edges for the current y-value. The x-intersect
for the successive y-value can be given as, xi+1 = xi + Δx
where xi is the initial x-intersect, Δx = -1/m, and m = (y2 - y1) / (x2 - x1)
CHAPTER 4: 2D TRANSFORMATIONS
A computer graphics system allows the programmer to define objects and alter or
manipulate the objects. This simulated manipulation of objects is referred to as
transformation. Transformation allows us to uniformly alter the entire picture.
4.1 Transformations:
The basic transformations are: Translation, Scaling, and Rotation.
Other transformations often applied to objects are: Reflection and shear.
Translation:
Translation is a process of changing the position of an object. Let P be a point with
the coordinates (x, y). We wish to move this point P to a new position, say P1, having
the coordinates (x1, y1).
Fig 4.1
In the above figure, P has been moved by some quantity in x-direction as well as y-
direction. Let tx, ty be the quantities by which we have moved in x and y directions
respectively. Thus we can write,
x1 = x + tx
y1 = y + ty
The object position is changed according to translation values tx and ty. We can
arrange transformation matrix elements to get displacement according to tx and ty
values as,
1 0 0
T= 0 1 0
tx ty 1
For the translation of the above point P, we can write it as,
1 0 0
[x1, y1, 1] = [x,y,1] 0 1 0
tx ty 1
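As a quick check of this row-vector form, the sketch below (helper name is my own, not from the text) multiplies [x y 1] by the 3x3 translation matrix T:

```python
def translate(x, y, tx, ty):
    """Post-multiply the row vector [x y 1] by the translation matrix."""
    T = [[1, 0, 0],
         [0, 1, 0],
         [tx, ty, 1]]
    p = [x, y, 1]
    # Row vector times matrix: one sum per output column.
    x1, y1, w = [sum(p[k] * T[k][j] for k in range(3)) for j in range(3)]
    return (x1 / w, y1 / w)
```

Translating (2, 3) by (5, -1) gives (7, 2), matching x1 = x + tx and y1 = y + ty.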
Scaling:
Scaling is the process of expanding or compressing the dimensions of an object.
Constants Sx and Sy are used to describe changes in length with respect to the x-
direction and y-direction respectively. A scaling constant greater than one indicates
expansion of size and less than one indicates compression of size.
Fig 4.2 Scaling
Let P1 be a point with coordinates (x1, y1) and that Sx, Sy be the scaling parameters in
x and y directions respectively. We can obtain the coordinate of the scaled object P2
as, x2 = x1* Sx
y2 = y1* Sy
The new point P2 is calculated by multiplying P1 with transformation matrix T,
P2 = P1 * T
We can represent this in a matrix form as,
P1 = [x1, y1]
P2 = [x2, y2]
Assume transformation matrix as unit matrix of size 2×2, then
[x2,y2] = [x1,y1] 1 0
0 1
[x2, y2]= [x1, y1] * T
There is no change for the point.
Again, [x2, y2] = [x1, y1] 2 0
0 1
[x2,y2] = [2x1, y1]
Now, x-direction length is doubled and no change in y-direction.
Consider [x2,y2] = [x1,y1] 1 0
0 0.5
[x2, y2] = [x1, 0.5y1]
No change in x-direction, but y-direction length is reduced to half.
Now, [x2,y2] = [x1, y1] 2 0
0 2
[x2,y2] = [2x1,2y1]
Both x and y direction lengths are doubled.
By considering the above observations we can construct a scaling transformation matrix as,
S = Sx 0
    0  Sy
Any positive numeric value is valid for the scaling factors Sx and Sy. Values less than 1
reduce the size of the image and values greater than 1 produce an enlarged image.
For both Sx and Sy equal to 1, the size of the image does not change.
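The worked 2x2 cases above can be reproduced with a small helper (an illustrative sketch, not from the text):

```python
def scale(x, y, sx, sy):
    """[x2 y2] = [x1 y1] * [[sx, 0], [0, sy]] — scaling about the origin."""
    return (x * sx, y * sy)
```

Each case follows directly: the unit matrix leaves the point unchanged, Sx = 2 doubles the x length, Sy = 0.5 halves the y length, and Sx = Sy = 2 doubles both.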
Rotation Transformation:
We may need to rotate an object to view it from a different angle on the screen. Rotation
can be applied with respect to the origin or with respect to any arbitrary point.
Fig 4.3 Rotation with respect to Origin.
Rotation transformation matrix with respect to origin in anti-clockwise direction is,
R = cosθ sinθ
-sinθ cosθ
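Using the same row-vector convention, a point can be rotated anticlockwise about the origin as below (a minimal sketch; the function name is my own):

```python
import math

def rotate(x, y, theta):
    """[x2 y2] = [x1 y1] * [[cos t, sin t], [-sin t, cos t]],
    an anticlockwise rotation about the origin by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)
```

Rotating (1, 0) by 90 degrees carries the point onto the y-axis at (0, 1), up to floating-point rounding.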
Fig 4.4 Rotation of line segment with angle θ
4.2 HOMOGENEOUS COORDINATES AND TRANSLATION:
Moving an image on the display surface is called translation. It can be done by
adding to each point the amount by which we need to shift the image. In
general, to translate an image to the right and up by (tx, ty), every point (x1, y1) is
replaced by a new point (x2, y2), where
x2 = x1 + tx, y2 = y1 + ty
In order to represent scaling, rotation and translation in a single matrix form, we need
homogeneous coordinates, which use 3x3 matrices instead of 2x2 matrices. Points
are specified by three coordinates instead of two, by introducing a third,
dummy coordinate w. We represent a homogeneous point as (xw, yw, w). The (x, y)
coordinates can then be obtained by dividing the first and second parameters by w.
In homogeneous coordinates, the translation matrix is,
               1  0  0
(xw, yw, w)    0  1  0   = [ (xw + tx·w)  (yw + ty·w)  w ]
              tx  ty 1
Here, dividing by the third coordinate w, the point reduces to,
[x+tx  y+ty  1]
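The division by w can be checked directly; the helper below (illustrative, not from the text) converts a homogeneous triple back to Cartesian form:

```python
def to_cartesian(xw, yw, w):
    """Recover (x, y) from a homogeneous point (xw, yw, w) by dividing
    the first two components by w."""
    return (xw / w, yw / w)
```

For example, (6, 9, 3) and (2, 3, 1) represent the same Cartesian point (2, 3), which is why the dummy coordinate w can be chosen freely.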
Homogeneous rotation with respect to the origin in the counter-clockwise direction is,
     cosθ  sinθ  0
R = -sinθ  cosθ  0
      0     0    1
5 A triangle is a _________ type of polygon. [ ] A)hybrid B)concave C)convex D)none
6 Even-Odd and Winding number methods always produce same results.
[ ]
A) true B)false C)cannot say D)none
7 Write the circle equation with center (Xc,Yc) and radius r [ ] A) (X-Xc)² + (Y-Yc)² = r² B) (X-Xc)² + r² = (Y-Yc)²
C) (Y-Yc)² + r² = (X-Xc)² D) none
8 The _________ method creates characters out of a series of line segments. [ ]
A)stroke B)bitmap C)aliasing D)none
9 By considering the symmetry of the circle, computation can be _______.
[ ]
A)reduced B)increased C)remains constant D)none
10 Lines have a _______appearance when they step from one row or column to the next.
[ ]
A)jagged B)straight C)curved D)none
A __________ is a polygon in which the line segment joining any two points within the polygon lies completely inside the polygon.
[ ]
A)convex B)concave
C)both D)none
12 A ____________ is a polygon in which the line segment joining any two points within the polygon may not lie completely inside the polygon.
[ ]
A)convex B)concave C)both D)none
13 Beam penetration CRT generates color displays by using __________. [ ] A)single layer phosphor B)multi layer phosphor
C)RGB color mixing D)none
14 Which of the following is a seed fill algorithm? [ ]
A)boundary fill B)flood fill C)both D)none
15 The aspect ratio of a monitor is defined as [ ] A)X:Y B)Y:X
C)X*Y D)none
16 A lossy image compression algorithm will ___________the no. of pixels in original image.
[ ]
A)increase B)decrease C)not alter D)none
17 Using halftoning, the no. of intensity levels possible for a square of size ‘n’ are
[ ]
A)2n B)2n C)2n+1 D)2n+1
18 _________displays are devices that convert electrical energy into light.
[ ]
A)Emissive B)non emissive
C)both D)none
19 The point half way between x-axis and the point half way between y-axis is the _________.
[ ]
A)line B)mid point C)point D)none
20 ___________is a smallest addressable point on the screen [ ]
A)pixel B)frame buffer
C)both D)none
UNIT – 2
1 The method for selecting and enlarging portions of a drawing is called ______________ [ ]
A) Clipping B) Windowing C) Viewport D) None
2 The Technique of not showing that part of the drawing which one is not interested in is called __________________
[ ]
A) Clipping B) Windowing C) Viewport D) None
3 An area on a display device to which a window is mapped is called ________
[ ]
A) Clipping B) Transformation C) Viewport D) None
4 Any procedure that identifies portions of a picture that are either inside or outside of a specified region of space is called _______
[ ]
A) Clipping B) Transformation C) Viewport D) None
5 Which of the following is not a primitive type of clipping? [ ] A) Line Clipping B) Polygon Clipping C) Curve Clipping D) None
6 Which of the following is a line clipping algorithm? [ ] A) Cohen-Sutherland B) Sutherland-Hodgeman C) Point Source D) None
7 Which of the following is a polygon clipping algorithm? [ ] A) Cohen-Sutherland B) Sutherland-Hodgeman C) Point Source D) None
8 In the Cohen-Sutherland algorithm, every line end point in a picture is assigned a four digit binary code called as ___________________
[ ]
A) Line Code B) Region CodeC) Window Code D) None
9 An area on a display device to which a window is mapped is called __________
[ ]
A) Viewport B) SegmentC) Window D) None
10 The region in which an object is clipped is called ________________ [ ] A) Clip Window B) View port C) Visible Segment D) None
11 Undoing the effect of transformation is done by___________transformation
[ ]
A)shear B)scale C)translation D)inverse
12 Reflection of a point (x,y) about the origin transforms the point as [ ] A)(y,x) B)(-x,-y) C)(-y,-x) D)(x,y)
13 _______ is a process of changing the position of an object. [ ] A)translation B)rotation C)scaling D)all
14 ________ is the process of expanding or compressing the dimensions of an object.
[ ]
A)translation B)rotation C)scaling D)all
15 For rotation about an arbitrary point, the following steps are done.
[ ]
A)T1*R*T2 B)T1*S*T2
C)T1*R*R D)T1*T2*R
16 A ______is a transformation that produces a mirror image of an object. [ ]
A)shear B)reflection
C)both D)none
17 A Transformation that slants the shape of an object is
called the_______.
[ ]
A)shear B)reflection
C)both D)none
18 Increasing or decreasing the magnification of the
displayed objects is known as______.
[ ]
A)shear B)reflection
C)zooming D)none
19 Zooming can be performed dynamically using the _____. [ ]
A)keyboard B)mouse
C)both D)none
20 The_____ shear preserves the x coordinates but changes
the y values.
[ ]
A)y B)x
C)xy D)none
UNIT – 3
1 The technique of transforming a three dimensional object onto a two dimensional screen is called as ____________________ [ ]
A) Projection B) Transformation C) Rotation D) None
2 Which of the following is not a 3D primitive? [ ] A) MOVE_ABS_3(x,y,z) B) MOVE_REL_3(dx,dy,dz) C) POLYGON_REL_3(ax,ay,az,n) D) None
3 Which of the following is not a basic 3D transformation? [ ] A) Translation B) Scaling C) Rotation D) None
4 If the direction of projection is not parallel to the view plane normal then it is called as ________________________
[ ]
A) Cavalier Projection B) Oblique Projection C) Cabinet Projection D) None
5 A projection in which all the three principal axes are equally shortened is called __________________________________
[ ]
A) Isometric B) Trimetric C) Dimetric D) None
6 A projection in which none of the three edge directions is equally shortened is called ______________________
[ ]
A) Isometric B) Trimetric C) Dimetric D) None
7 Which of the following algorithm uses both image space and object space operations?
[ ]
A) Back Face removal algorithm B) Z-Buffer algorithm C) Warnock’s Algorithm D) None
8 Which of the following algorithm uses both image space and object space operations
[ ]
A) Scan Line algorithm B) Painter’s algorithm C) Z-Buffer algorithm D) None
9 The technique of linearly interpolating intensity values across the polygon surface is called _____________________
[ ]
A) Interpolated shading B) Gouraud shading C) Constant shading D) None
10 A parallel projection preserves the _______________ of objects [ ] A) Elevation B) relative proportion C) orthogonal proportion D) None
11 Orthogonal projections are often used to produce the front and side views and the ________ of an object
[ ]
A) plane view B) top view C) bottom view D) None
12 The projection often used in drafting is called as _________________ [ ] A) Perspective Projection B) Parallel Projection C) Axonometric Projection D) None
13 _______________________Perspective projection occurs when one of the faces of a rectangular object is parallel to the view plane.
[ ]
A) One point B) Two point C) Three point D) None
14 Z-buffer algorithm is also known as ______________________ [ ] A) Warnock’s B) Area Subdivision C) Depth Buffer D) None
15 Painter’s algorithm is also known as _______________________ [ ] A) Area sub division B) Warnock’s C) Depth Sorting D) None
16 Surface rendering procedures are termed as __________ methods. [ ] A) Shading B) Illumination C) Half toning D) None
17 A fast and simple method for rendering an object with polygon surface is _______________________
[ ]
A) Constant intensity shading B) Gouraud Shading C) Interpolated shading D) None
18 The back face detection and removal algorithm is sufficient for ______objects.
[ ]
A)single concave B)single convex C)both D)none
19 In the ________ algorithm, the polygons are sorted for display in back-to-front order.
[ ]
A)BSP B)painter’s C)Warnock’s D)none
20 For determining the relevancy and relative position of two polygons, the __________ technique can be used.
[ ]
A)minimax B)boxing test C)both D)none
UNIT-4
1 The convex polygon boundary that encloses a set of control points is called _____________________
[ ]
A) Spline Curve B) Convex hull C) Clip boundary D) None
2 The ________________ function tells, how hard the ith sample point is pulling the curve in the interpolation process.
[ ]
A) Control function B) Blending function C) Parametric function D) None
3 An interpolating piecewise cubic polynomial with a specified tangent at each control point is called ___________________
[ ]
A) Hermite Spline B) Bezier Spline C) Continuous Spline D) None
4 A flexible strip used to produce a smooth curve through a designated set of points is called ____________________
[ ]
A) Control Point B) Spline C) Convex Hull D) None
5 A Spline curve is specified by giving a set of coordinate positions called _________________
[ ]
A) Isolating points B) Control points C) Blending function D) None
6 The general shape of the curve is indicated by __________________ [ ] A) Isolating points B) Control points C) Blending function D) None
7 A spline curve is defined, modified and manipulated with operations on _________________
[ ]
A) Isolating points B) Control points C) Blending function D) None
8 ____________ splines are only dependent on their end point constraints [ ] A) Hermite B) Bezier C) Integrated D) None
9 A Bezier curve can be specified with boundary conditions with characterizing matrix or with __________________________
[ ]
A) Isolating points B) Control points C) Blending function D) None
10 A very useful property of a Bezier curve is that it always passes through __________________ control points
[ ]
A) Middle B) First and Last C) All D) None
11 _____________________allow local control over the spline curve or surface.
[ ]
A) Hermite Spline B) B-Spline C) Bezier Spline D) None
12 The ____________________ property of a Bezier curve eliminates erratic oscillations.
[ ]
A) Convex Hull B) Dependent of Control Points C) Blending function D) None
13 A Bezier curve is a polynomial of degree one less than the no. of ________ used.
[ ]
A) Blending functions B) Planes C) Control Points D) None
14 Cubic Bezier curves are generated with ____________________ points. [ ] A) One control B) Two control C) Three control D) Four control
15 For a B-spline curve with n+1 control points, the curve is described with _______________ blending functions.
[ ]
A) n B) n - 1 C) n + 1 D) None
16 __________________instruction fills a rectangular array of pixels with solid color
[ ]
A) FILL B) FLINE C) DRAV D) PIXBLT
17 Which of the following is a color model? [ ] A) YIQ B) CMYK C) Both (A) & (B) D) None
18 The method for explaining the properties of color within some particular context is called as _________________
[ ]
A) Color model B) Half toning C) Illumination D) None
19 In the RGB color space, red is measured along the ______________ [ ]
A) X-axis B) Y-axis C) Z-axis D) None
20 In the RGB color space, blue is measured along the _____________ [ ] A) X-axis B) Y-axis C) Z-axis D) None
UNIT – 5
1 Starting and ending frame numbers and an action that is to take place describe each event in the ____________ animation programming language.
[ ]
A)Linear list notations B)General-purpose languages
C)Graphical languages D)none
2 In ____________ languages, the values of variables in the language can be used as parameters to whatever routine actually generates the animation.
[ ]
A)Linear list notations B)General-purpose languages
C)Graphical languages D)none
3 _____________ languages are used for expressing, editing and comprehending the simultaneous changes taking place in an animation.
[ ]
A)Linear list notations B)General-purpose languages
C)Graphical languages D)none
4 The _____________ level of animation is used to introduce the physical properties of an object by distortions of shape.
[ ]
A)Squash and Stretch B)Slow-in and Slow-out
C)both D)none
5 The __________ level of animation is used for smooth interpolation. [ ]
A)Squash and Stretch B)Slow-in and Slow-out
C)both D)none
6 The problems peculiar to animation are ______. [ ]
A)temporal aliasing B)rendering
C)both D)none
7 The process of producing realistic images or pictures is called ________. [ ]
A)rendering B)aliasing C)both D)none
8 The central dense, black, sharply defined shadow area is__________.
[ ]
A)umbra B)penumbra C)both D)none
9 When the positions of the observer and the light source are separate, a ________ appears.
[ ]
A)transparent B)shadow C)both D)none
10 ________ is an example of a transparent material. [ ] A)glass B)water C)both D)none
11 _________ planes are self-hidden planes when the scene is viewed from the position of the light source.
[ ]
A)Self-Shadow B)projected shadows C)both D)none
12 In computer graphics, the surface detail in an object is called ________.
[ ]
A) texture B)shadow C)transparency D)none
13 _______transparency treats a polygon as a transparent filter that selectively passes different wavelengths.
[ ]
A)filtered B)interpolated C)both D)none
14 __________ are specialized animation languages designed simply to generate the in-betweens from user specified key frames.
[ ]
A)Key frame B)Parameterized C)both D)none
15 ___________systems allow object motion characteristics to be specified as
a part of the object definition.
[ ]
A)Key frame B)Parameterized C)both D)none
16 In animation the sequence of pictures is called ______. [ ]
A)story board B)pixel
C)both D)none
17 The __________is ideally suited to control animation. [ ]
A)procedural control B)full explicit control
C)both D)none
18 An ______ in animation is a small program invoked once per frame
to determine the characteristics of some objects in the animation
[ ]
A)actor B)tracking live action
C)both D)none
19 In 2D and 3D, aliasing problems are partially solved by increasing
the _________.
[ ]
A)resolution B)aliasing
C)both D)none
20 ________is a process of creating moving objects. [ ]