CHAPTER 1: INTRODUCTION TO COMPUTER GRAPHICS

1.1. INTRODUCTION:

The science and art of creating, storing, and manipulating pictures with the aid of a

computer is called computer graphics. Computer graphics allows communication

through pictures, charts, and diagrams. Nowadays computer graphics is playing a

key role in every application.

Interactive Computer Graphics (active): In interactive computer graphics, the user

controls the contents, structure and appearance of objects and their displayed

images by using input devices such as keyboard, mouse or touch sensitive panel on

the screen. Interactive computer graphics involves two-way communication. The

computer receives signals from the input device and modifies the displayed image accordingly. We can make pictures not only of real-world objects, but also of

abstract, mathematical surfaces in 3D and of data that have no inherent geometry.

Non-interactive computer graphics (passive): In non-interactive computer

graphics the user has no control on the contents, structure and appearance of

objects and their displayed images. A major use of computer graphics is in design,

particularly for engineering and architectural systems. Computer aided design is a

segment in computer graphics which helps in creation, modification or optimization

of a design or product such as buildings, automobiles, aircraft, water craft, space

craft, textiles etc.

Applications: Computer graphics is used in various fields of science, engineering,

medicine, research, entertainment, advertisement. Some of the major application

areas of computer graphics are listed below:

(i) Construction: Computer graphics is a very useful tool for generating

architect’s drawings and visualizing structures. Taking data about buildings, the

computer can model various images of a building from different angles. Architects

can use interactive graphics procedures to layout floor plans such as positioning of

rooms, doors, windows and other building features.

(ii) Electronic design: Electronic devices like radio, television, calculators and

computers are made up of many small electronic components, mounted on printed

circuit boards. Using interactive computer graphics system, engineers can draw the

circuit in less time. The computer can help in checking the design and making

modifications to the design in less time.

(iii) Mechanical Design: Computer Graphics is used to design systems of

mechanical devices. Before manufacturing any machine, an engineer first designs

and simulates it on the computer.


(iv) Entertainment and Animation: Computer graphics plays a major role in making motion pictures and games. A computer game is an audiovisual engine capable

of keeping an internal model of some dynamic system. Games are based on

interaction between the user and the computer.

(v) Education and training: Computer graphics is used to produce pictures that

express a message and attract attention. Computer generated pictures are often

used as educational aids. For some training applications, special systems are

generated. Examples of specialized systems are simulators for practice sessions,

training of aircraft pilots, and training of ship captains. Finally, the slide production

for commercial, scientific and educational presentations is another use of graphics.

(vi) Medical Technology: Today, Computer Assisted Tomography (CAT) is used to

compose a 3D model of the brain by taking multiple X-rays. A CAT image is very useful

in detecting various diseases like brain tumors. Computers are also useful to

generate images of delicate organs. It is useful to study the effect of drugs on

tissues. Medical applications also make extensive use of image processing

techniques for picture enhancements.

(vii) Cartography: Computer graphics is used to produce accurate and schematic

representation of geographical and other natural phenomena from measurement

data. Examples include geographic maps, exploration maps, for drilling and mining,

oceanographic charts, weather maps, and population density maps.

(viii) Business: It is one of the simplest and most widely used applications of

computer graphics. Graphs and charts are a common requirement of any business profile presentation. Using a bar chart we can easily present comparative information by plotting data along the axes. Charts can also be used to show the percentage breakup of individual components of a whole.

1.2 GRAPHICS SYSTEM SOFTWARE (GRAPHICS STANDARDS):

Graphics software is a fundamental component of any graphics display system.

Systems with the most expensive and sophisticated hardware devices would be useless for many people if they did not have powerful and user-friendly software. In 1979 the Graphics Standards Planning Committee (GSPC) designed the Core Graphics System (CORE). This system provided a standardized set of commands to control the

construction and display of graphic images. These commands were independent of the devices used to create or display the image, and independent of the language

in which the graphics program was written. The CORE system contained mechanisms

for describing and displaying both 2D and 3D structures.

A second standard called the Graphical Kernel System (GKS) was developed in Europe. GKS

has excellent two dimensional features. GKS did not contain the constructs for 3D


objects. It introduced the concept of workstations, which allowed a single graphic

program to control several terminals.

Another graphics standard is the Programmer's Hierarchical Interactive Graphics System (PHIGS). It includes features of both CORE and GKS. PHIGS also supports a detailed database of structures that the programmer may edit selectively. Although PHIGS presents a specification for graphics functions, it does not provide a standard methodology for a graphics interface to output devices, nor does it specify a method for transmitting and storing pictures.

Two other graphics standards are the Computer Graphics Metafile (CGM) and the

Computer Graphics Interface (CGI). CGM is a file format for picture information that

allows device independent capture, storage and transfer. The CGI is a companion

standard which provides a procedural interface for CGM primitives.

1.3 INTRODUCTION TO GEOMETRY:

Points:

A point in a plane can be specified with an ordered pair of numbers (x, y), where x is the

horizontal distance from the origin and y is the vertical distance from the origin.

Fig 1.1

Lines:

Two points will specify a line. Lines are described by equations such that if a point (x, y)

satisfies the equations, then the point is on the line. If the two points used to specify a line

are (x1, y1) and (x2, y2), then the equation of the line is given by:

(y - y1)/(y2 - y1) = (x - x1)/(x2 - x1)

The distance between any two points p1 and p2 with co-ordinates (x1, y1) and (x2, y2) can be

found using the formula, D = [(x2 - x1)² + (y2 - y1)²]^(1/2)
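For illustration, the two formulas above translate directly into code. The following C sketch (the function names are illustrative, not part of the notes) evaluates the two-point line equation and the distance formula:

/* Minimal sketch of the line equation and the two-point distance formula. */
#include <math.h>
#include <stdio.h>

/* Distance between (x1, y1) and (x2, y2): D = [(x2-x1)^2 + (y2-y1)^2]^(1/2). */
double distance(double x1, double y1, double x2, double y2)
{
    return sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
}

/* Solve the two-point form (y-y1)/(y2-y1) = (x-x1)/(x2-x1) for y, given x.
 * Assumes the line is not vertical (x1 != x2). */
double line_y(double x, double x1, double y1, double x2, double y2)
{
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1);
}

int main(void)
{
    printf("distance = %f\n", distance(0, 0, 3, 4));   /* 5.0 */
    printf("y at x=2 = %f\n", line_y(2, 0, 0, 4, 4));  /* 2.0 */
    return 0;
}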


Fig 1.2

Line Segment:

Sometimes in computer graphics we need to display only pieces of lines. The set of points on a line that lie between two end points P1 and P2 is called a line segment.

Slope:

In a coordinate system the description of a line involves an equation, which enables us to

find the coordinates of all those points, which make up the line. The fact that line is straight

is incorporated in the quantity called the slope (m) of the line.

Slope, m= tan θ

Fig 1.3

Where θ is angle formed by the line with the positive x-axis.

Midpoint: The midpoint of a line segment is the point whose x coordinate is halfway between the x coordinates of the two end points and whose y coordinate is halfway between their y coordinates.


Fig 1.4

Vector:

A vector has a magnitude and a direction; any line segment with that length and direction represents the same vector. A

vector may be denoted (Dx, Dy), where Dx indicates how far to move along the x-axis

direction and Dy indicates how far to move along the y-axis direction.

Fig 1.5

Vectors have no fixed position in space. They tell us how far and what direction to move, but

they do not tell us where to start. The idea of vector is useful because it closely parallels the

manner in which a pen draws lines on paper or an electron beam draws lines on a cathode

ray tube. Two vectors may be added by adding their respective components.

V1 + V2 = [Dx1, Dy1] + [Dx2, Dy2]

= [Dx1 + Dx2, Dy1 + Dy2]

We can also multiply vectors by a number by multiplying each of its components.

n·V = n[Dx, Dy]

= [nDx, nDy]

This preserves the vector's direction but changes its magnitude. The vector's length gives a measure of that magnitude.

|V| = (Dx² + Dy²)^(1/2)

If we multiply a vector by the reciprocal of its length, the result is a vector with length equal

to 1. Such a vector is called unit vector.
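The vector operations above can be summarised in a short C sketch; the struct and function names are illustrative assumptions, not from the notes:

/* Vector addition, scaling, length and unit vector, as described above. */
#include <math.h>
#include <stdio.h>

typedef struct { double dx, dy; } Vec2;

Vec2 vec_add(Vec2 a, Vec2 b)     { return (Vec2){ a.dx + b.dx, a.dy + b.dy }; }
Vec2 vec_scale(double n, Vec2 v) { return (Vec2){ n * v.dx, n * v.dy }; }
double vec_length(Vec2 v)        { return sqrt(v.dx * v.dx + v.dy * v.dy); }

/* Multiply by the reciprocal of the length to get a unit vector. */
Vec2 vec_unit(Vec2 v)            { return vec_scale(1.0 / vec_length(v), v); }

int main(void)
{
    Vec2 v = vec_add((Vec2){ 3, 0 }, (Vec2){ 0, 4 });
    printf("length = %f\n", vec_length(v));                        /* 5.0 */
    printf("unit   = (%f, %f)\n", vec_unit(v).dx, vec_unit(v).dy);
    return 0;
}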

Pixels and Frame Buffers:

Pixel: In computer graphics, pixel is a short form of picture element. It is the smallest

addressable element on the screen. In the simplest case a pixel has only two values, typically black and white, and it can be encoded by 1 bit of information. If more bits are used to represent a pixel, a larger number of colors can be represented.


Fig 1.6 Pixel

Frame Buffer: The intensity values of all the pixels for a computer-modeled object are

placed into an array in the computer’s memory. This array, which contains an internal

representation of the image, is called a frame buffer. It collects and stores pixel values for

use by the display device.

Fig 1.7 A single bit Frame buffer raster CRT graphic device
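As a minimal sketch of the idea, assuming one intensity byte per pixel and an arbitrary small resolution, a frame buffer is simply an array in memory plus a routine for storing pixel values:

/* Toy frame buffer: a 2D array of intensity values held in memory, with a
 * set_pixel routine that scan-conversion code can call. Sizes and the
 * one-byte-per-pixel choice are illustrative only. */
#include <stdio.h>
#include <string.h>

#define WIDTH  64
#define HEIGHT 48

static unsigned char frame_buffer[HEIGHT][WIDTH];   /* one intensity per pixel */

void set_pixel(int x, int y, unsigned char intensity)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        frame_buffer[y][x] = intensity;
}

int main(void)
{
    memset(frame_buffer, 0, sizeof frame_buffer);    /* clear to background */
    set_pixel(10, 20, 1);                            /* turn one pixel "on" */
    printf("pixel (10,20) = %d\n", frame_buffer[20][10]);
    return 0;
}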

1.4 VIDEO DISPLAY DEVICES:

The graphics capabilities of a computer system basically depend on the hardware

attached to it and also on the software support available. Engineering design

applications demand high quality graphic images. Additional devices are required to

produce such high quality images. Many graphic output devices are available now

with varying capabilities to display images and produce engineering drawings.

The most commonly used display technologies with personal computers are:

Cathode Ray Tube Displays (CRT displays)

Flat-Panel Displays

1.4.1 Cathode Ray Tube Displays:

Cathode Ray Tube:

Cathode Ray Tube has always been the predominant display device for interactive

computer graphics. CRT is reliable, inexpensive and capable of displaying rapidly

changing pictures. The main problems with CRT are the very high voltage it requires,

its bulkiness and its weight. In a cathode ray tube, there is a cathode, which when

heated emits a continuous stream of electrons. The beam of electrons passes


through focusing and deflection systems that direct the beam towards specific

positions on the phosphor-coated screen. The phosphor then emits a small spot of

light at each position struck by the electron beam. The light output of phosphor falls

off rapidly after the electron beam has passed by, and a steady picture is maintained

by tracing it out rapidly and repeatedly. This process is called refresh and is

performed at least 30 times a second.

Electron Gun: The electron gun makes use of the electrostatic fields to focus and

accelerate the electron beam. A field is generated when two surfaces are raised to

different potential; electrons within the field tend to travel toward the surface with

more positive potential. The force attracting the electron is directly proportional to

the potential. Figure below illustrates the basic structure of a CRT. A CRT is an

evacuated glass tube, with a heating element at one end and a phosphor-coated

screen on the other end. When a current flows through this heating element (called

filament), the conductivity of the metal filament is reduced due to the high

temperature. This causes electrons to pile up on the filament, because they cannot

move as fast as they would like to. Some of these electrons actually boil off the

filament.

These free electrons are attracted towards a strong positive charge from the outer

surface of the focusing anode cylinder (sometimes called an electrostatic lens).

However, the inside of the cylinder has a weaker negative charge. Thus when the

electrons head toward the anode, they are forced into a beam and accelerated by the

repulsion of the inner cylinder walls. By the time the electrons get out they are going

so fast that they fly past the anode they were heading for.

Next the electrons run into two sets of weakly charged deflection plates. These plates

have opposite charges, one positive the other negative. While their charge is not

strong enough to capture the fast moving electrons, they do influence the path of the

beam. The first set displaces the beam up and down (vertically), and the second

displaces the beam left and right (horizontally). The electrons are sent flying out of

the neck of the tube, called a yoke, until they pierce into the phosphor coating on the

other end of the tube. The impact of this collision on the outer valence bands of the phosphor compounds causes some of the electrons to jump into another band. This

causes a few photons to be generated, and results in a spot on the CRT's face.


Fig 1.8 Construction of a CRT

CRTs were embraced as output devices very early in the development of digital

computers. Most likely, oscilloscopes were some of the first computer graphics

displays. The results of computations could be used to directly drive the vertical and

horizontal deflection plates in order to draw lines on the CRT's surface. By varying

the current to the heating filament, the output of the electron beam could also be

controlled. This allowed the intensity of the lines to vary from bright to completely

dark. These early CRT displays were called vector, calligraphic or stroke displays. The

most commonly used cathode ray tube display technologies are:

Raster Scan Displays

Random Scan Displays

Raster scan displays: During the late 50s and early 60s, broadcast television

began to take off. Televisions are basically oscilloscopes. The main difference is that

instead of having complete control over the vertical and horizontal deflection, a

television sweeps its trace across the entire face in a regular fixed pattern. This

scanning pattern proceeds from the top-left of the screen to the bottom-right as

shown in the diagram. The result is that the entire screen is painted every 1/30th

of a second.


Fig 1.9 Raster Display mechanism

Televisions were mass produced and inexpensive. For a computer to paint the entire

screen it needs only to synchronize its painting with the constant scanning pattern of

the raster. The solution to this problem was to add a special memory, which operated

synchronous to the raster scanning of the TV. But, while televisions were cheap,

memory wasn't. So there was a long period where the patterns were scanned out of

cheap high-density read-only memories, called character generators. The trick was to

use a single 8 bit code to specify an 8 by 12 character pattern from the ROM, and

with a few addressing tricks one could build a nice display (80 by 25 character) with

only 2 kilobytes of memory. Thus the era of the CRT-terminal was born.

There were a few attempts at building systems with downloadable or programmable

character generators. And a few systems added an extra byte to specify the

foreground and background colors of the character cell. Lots of tank/maze arcade

games in the 70's worked this way.


Fig1.10 Raster scan Display System

Random-Scan Displays:

Random scan displays are used to draw straight lines. A straight line can be directly

drawn from any addressable point to any another addressable point. These devices

are also called Calligraphic (vector refresh) displays. Random-scan display CRT

directs electron beam only to the parts of the screen where a picture is to be drawn.

Refresh rate on random scan system depends on the number of lines to be displayed.

Picture definition is stored as a set of line drawing commands in an area of the

memory called as display file or refresh buffer. To display an image, the system

cycles through the set of commands in the display file. After all commands have been

processed, the system cycles back to the first drawing command in the list. Random

scan displays are designed to draw all commands of a picture 30 to 60 times per

second. These displays are suitable for line drawing applications like engineering &

scientific drawings. Raster scan displays are used to provide animation.

Fig 1.11a Random scan display system

Fig1.11b Block Diagram of Functions performed by Random Scan Systems


Fig 1.11c Random Scan Drawing

Shadow mask CRT: Color CRTs are more complicated than the simple monochrome

models summarized above. The phosphors on the face of a color CRT are laid out in a

precise geometric pattern. There are two primary variations, the stripe pattern of in-

line tubes shown on the left, and the delta pattern of delta tubes as shown on the

right. Within the neck of the CRT there are three electron guns, one each for red,

green, and blue (the actual beams are all invisible). There is also a special metal

plate just behind the phosphor-covered front face, called a shadow mask. This mask is

aligned so that it simultaneously allows each electron beam to see only the

phosphors of its assigned color and blocks the phosphor of the remaining two colors.

Fig 1.13a stripe and delta patterns used in a shadow mask CRT


Fig 1.13b Shadow mask CRT

A significant portion of the electron beam's energy strikes the mask rather than the

phosphors. This has two side effects. The shadow mask has to be extremely rigid to

stay aligned with the phosphor patterns on the CRT surface. The collision of electrons

with metal mask causes the mask to emit some of its absorbed energy as

electromagnetic radiation. Most of this energy is in the form of heat, but some

fraction is emitted as X-rays, which presents a health hazard. This wasn't a problem

for television because the intensity of the X-ray radiation falls off quickly as you move

away from the screen. However, computer monitors are supposed to be viewed from

a short distance. This health concern along with the high voltages and power

dissipations of CRTs has motivated the development of new display technologies.

Direct View Storage Tube (DVST):

In direct view storage tube display, the picture information is stored inside the CRT

instead of refreshing the screen. A line or character will remain visible until erased. A

Direct View Storage Tube (DVST) stores the picture information as charge distribution

just behind the phosphor coated screen. Two electron guns are used in a DVST. One

is the primary gun, used to store the picture pattern; the other is the flood gun, which

maintains the picture display.

The beam is not directed onto the phosphor. Instead, the beam writes on a wire mesh grid

coated with dielectric and mounted just behind the screen. A pattern of positive

charge is deposited on the grid and this is transferred to the phosphor by continuous

flood of electrons issuing from a separate flood gun. Just behind the storage mesh is

a second grid, the collector, whose main purpose is to smooth out the flow of flood

electrons. These electrons pass through the collector at low velocity and are

attracted towards the positively charged portions of storage mesh. Electrons not

repelled by the storage mesh pass right through it and strike the phosphor.


Fig.1.14 Direct View Storage Tube (DVST)

In DVST display, the images do not need refreshing. Using DVST, very complex

pictures can be displayed at high resolutions without flicker. The main drawback of

DVST display is that selected portions of a picture cannot be erased. To erase a selected portion of the picture, the entire screen must be erased and the modified picture redrawn.

DVST displays do not display color.

Beam Penetration CRT: “NOTES TO BE WRITTEN”

1.4.2 Flat Panel Displays:

Flat panel display devices require less space, weigh less, and have lower power requirements than CRTs. Flat panel displays are used in small TV monitors, calculators, pocket

video games, laptop computers, palmtops, etc. Flat panel displays are classified into two categories: Emissive Displays and Non-Emissive Displays.

Emissive displays:

Emissive displays are devices that convert electrical energy into light. Examples are:

Plasma panels.

Electroluminescent displays.

Light emitting diodes.

Plasma panels:

Plasma panels are also called gas-discharge displays. They are constructed by filling

the region between two glass plates by an inert gas. A series of horizontal electrodes

are placed on one glass panel and a set of vertical electrodes is placed in another

panel. The two sheets of glass are separated by a few thousandths of an inch. By

applying voltage to the electrode, the gas is made to ionize and form into tiny cells. A

cell is made to glow by placing a firing voltage across the corresponding x and y

electrodes and the required image can be generated. As the gas within the cell


discharges rapidly, a high-frequency alternating voltage is applied across the

electrodes, called the sustaining voltage. The cell glow can be switched off by lowering

the sustaining voltage. Thus plasma panel allows both selective writing and selective

erasure. It produces a very steady, flicker-free image and is a less bulky device when compared to a CRT. Its main disadvantage is relatively poor resolution, about 60

dots/inch, and its complex addressing and writing mechanisms.

Fig. 1.15 Plasma Panel Display

Electro luminescent displays:

Thin film electro luminescent displays are similar in construction to a plasma panel.

The difference is that the region between the glass plates is filled with a phosphor,

such as zinc sulfide doped with manganese instead of a gas. When sufficiently high

voltage is applied to a pair of crossing electrodes, the phosphor becomes a conductor

in the area of the intersection of two electrodes. Electro luminescent displays require

more power than plasma panels. Good color and gray scale displays are hard to

achieve.

Light emitting diodes:

Light emitting diode displays consist of diodes arranged in the form of a matrix.

Each diode at the intersection of row and column defines a pixel position. The

intensity values of pixels are stored in refresh buffer. Information is read from refresh

buffer and converted to voltage levels that are applied to the diode to produce the

light patterns in the display.

Non-emissive displays:

Non-emissive displays use optical effects to convert sunlight or light from other

source into graphic patterns. Liquid crystal displays are considered in this category.

Liquid Crystal Displays (LCDs):

Liquid crystal display is a flat panel display technology, which makes it less bulky

than cathode ray tube technology. These displays are commonly used in Laptops,

calculators and other small systems. Liquid crystal displays produce a picture by


passing polarized light from the surroundings or from an internal light source through

a liquid crystal material.

The LCD is made up of six layers. The front layer is a vertical polarizer plate. Second is a layer with thin grid wires electro-deposited on the surface adjoining the crystals. Third is the thin liquid crystal layer, fourth is a layer with horizontal grid wires on the surface next to the crystals, then a horizontal polarizer, and finally a reflector.

Fig. 1.16a Layers of a Liquid crystal display

Currently, the most popular alternative to the CRT is the Liquid Crystal Display (LCD).

LCDs are organic molecules that, in the absence of external forces, tend to align

themselves in crystalline structures. But, when an external force is applied they will

rearrange themselves as if they were a liquid. Some liquid crystals respond to heat,

others respond to electromagnetic forces.

When LCDs are used as optical (light) modulators, they are actually changing

polarization rather than transparency. In their unexcited or crystalline state, the LCDs

rotate the polarization of light by 90 degrees. In the presence of an electric field,

LCDs behave like a liquid and align the small electrostatic charges of the molecules

with the impinging E field.


Fig 1.16b &c

The LCD's transition between crystalline and liquid states is a slow process. This has

both good and bad side effects. LCDs, like phosphors, remain "ON" for some time

after the E field is applied. Thus the image is persistent like CRT's, but this lasts just

until the crystals can realign themselves; thus they must be constantly refreshed, like

a CRT.

Fig 1.17 LCD Construction

Rather than generating light like CRTs, LCDs act as light valves. Therefore, they are

dependent on some external light source. In the case of a transmissive display,

usually some sort of backlight is used. Reflective displays take advantage of the

ambient light. Thus, transmissive displays are difficult to see when they are

overwhelmed by external light sources, whereas reflective displays can't be seen in

the dark.

The LCDs themselves have extremely low power requirements. A very small electric

field is required to excite the crystals into their liquid state. Most of the energy used

by an LCD display system is due to the back lighting. In scanned displays, with a

large number of pixels, the percentage of the time that LCDs are excited is very


small. Thus the crystals spend most of their time in intermediate states, being

neither “On” nor "Off". This behavior is indicative of passive displays. We might

notice that these displays are not very sharp and are prone to ghosting.

1.5 LOGICAL AND PHYSICAL INTERACTIVE DEVICES:

Input devices allow us to communicate with the computer. Using these devices one can feed in information. The most commonly used interactive devices are:

Keyboard.

Mouse.

Digitizer.

Track ball.

Light pen.

Joystick.

Image scanners.

Touch panels.

Plotters.

Keyboard: The keyboard is primarily used for entering text. Keyboards can also be

provided with features to facilitate entry of screen coordinates, menu selections or

graphic functions.

An alphanumeric keyboard on a graphics system is used as a device for entering text

strings. Cursor-control keys and function keys allow the user to enter frequently used

operations in a single key stroke, and can be used to select displayed objects or

coordinate positions by positioning the screen cursor. Other types of cursor

positioning devices, such as trackball or joystick, are included on some keyboards.

Additionally a numeric keypad is often included on the keyboard for fast entry of

numeric data.

For special applications, input to a graphics application may come from a set of

buttons, dials, or switches that select data values or customized graphics operations.

Mouse: Mouse is an integral part of the fancy graphical user interface of any

software application. There is a cursor in the shape of an arrow or cross hair always

associated with a mouse; wheels or rollers on the bottom of the mouse can be used

to record the amount and direction of movement. Another method for detecting the

mouse motion is with an optical sensor. For these systems, the mouse is moved over

a special mouse pad that has a grid of horizontal and vertical lines. The optical

sensor detects movement across the lines in the grid.


Since a mouse can be picked up and put down at another position without change in

cursor movement, it is used for making relative changes in the position of the screen

cursor. One, two or three buttons are usually included on the top of the mouse for

signaling the execution of some operation, such as recording cursor position or

invoking a function.

Fig 1.18 Keyboard and Mouse

Digitizer: A digitizer can be used to input coordinate values in either two-

dimensional or three-dimensional space. Typically, a digitizer is used to scan over a

drawing or object and to input a set of discrete coordinate positions, which can be

joined with straight-line segment to approximate the curve or surface shapes. One

type of digitizer is the graphics tablet, which is used to input two-dimensional

coordinates by activating a hand cursor or stylus at selected positions on a flat

surface. A hand cursor contains cross hairs for sighting positions, while a stylus is a

pencil-shaped device that is pointed at positions on the tablet.

Many graphics tablets are constructed with a rectangular grid of wires embedded in

the tablet surface. Electromagnetic pulses are generated in sequence along the

wires, and an electric signal is induced in a wire coil in an activated stylus or hand

cursor to record a tablet position. Acoustic tablets use sound waves to detect a stylus

position. Either strip microphones or point microphones can be used to detect the

sound emitted by an electrical spark from a stylus tip. Three-dimensional digitizers

use sonic or electromagnetic transmissions to record positions.

Track ball: A Track ball is a ball that can be rotated with the fingers or palm of the

hand to produce screen cursor movement. Potentiometers attached to the ball

measure the amount and direction of rotation. A track ball is a sort of inverted mouse, in which the ball is held inside a box; the device itself stays fixed, and the positioning of the cursor on the screen is associated with the rotation of the ball.


A track ball is a two-dimensional positioning device; a Space ball provides six degrees

of freedom. Unlike the track ball, a space ball does not actually move. Strain gauges

are used to pick up the pressure applied to the space ball to provide input for spatial

positioning and orientation as the ball is pushed or pulled in various directions. Space

balls are used for three-dimensional positioning and selection operations in virtual-

reality systems, modeling, animation, CAD and other applications.

Fig 1.19 The track ball

Light Pen: Light pen is a pointing device. It has a light sensitive tip, which is excited

by the light emitted, when an illuminated point on the screen comes in its field of

view. They are sensitive to the short burst of light emitted from the phosphor coating

at the instant the electron beam strikes a particular point. Other light sources, such

as the background light in the room, are usually not detected by a light pen. An

activated light pen, pointed at a spot on the screen as the electron beam lights up

that spot, generates an electrical pulse that causes the coordinate position of the

electron beam to be recorded. As with cursor-positioning devices, recorded light-pen

coordinates can be used to position an object or to select a processing option.

The light pen has several disadvantages compared to other input devices. When a

light is pointed at screen, part of the screen image is obscured by the hand and pen.

And prolonged use of the light pen can cause arm fatigue. Also, light pens require

special implementations for some applications because they cannot detect positions

within black areas. To be able to select positions in any screen area with a light pen,

we must have some nonzero intensity assigned to each screen pixel. In addition, light

pens sometimes give false readings due to background lighting in a room.


Fig 1.20 Light pen

Joystick: A joystick consists of a small, vertical lever mounted on a base that is used to steer the screen cursor around. Most joysticks select screen positions with actual stick movement; others respond to pressure on the stick.

The distance that the stick is moved in any direction from its center position

corresponds to screen-cursor movement in that direction. Potentiometers mounted

at the base of the joystick measure the amount of movement, and springs return the

stick to the center position when it is released. One or more buttons can be

programmed to act as input switches to signal certain actions once a screen position

has been selected.

In another type of movable joystick, the stick is used to activate switches that cause

the screen cursor to move at a constant rate in the direction selected. Eight switches

arranged in a circle, are sometimes provided, so that the stick can select any one of

eight directions for cursor movement. Pressure-sensitive joysticks also called

isometric joysticks have a nonmovable stick. Pressure on the stick is measured with

strain gauges and converted to movement of the cursor in the direction specified.

Touch panels: Touch panels allow the screen positions to be selected by touching

the screen with the finger. A touch panel is a transparent plate that fits over the CRT

screen. A typical application of touch panels is for the selection of processing options

that are represented with graphical icons. Touch input can be recorded using optical,

electrical, or acoustical methods. Optical touch panels employ a line of infrared light–

emitting diodes along one vertical edge and one horizontal edge of the frame. The

opposite vertical and horizontal edges contain light detectors. These detectors are

used to record which beams are interrupted when the panel is touched. The two

crossing beams that are interrupted identify the horizontal and vertical coordinates of

the screen positions selected.

An electrical touch panel is constructed with two transparent plates separated by a

small distance. One of the plates is coated with a conducting material, and the other


plate is coated with a resistive material. When the outer plate is touched, it is forced

into contact with the inner plate. This contact creates a voltage drop across the

resistive plate that is converted to the coordinate values of the selected screen

position.

In acoustic touch panels, high-frequency sound waves are generated in the horizontal

and vertical directions across a glass plate. Touching the screen causes part of each

wave to be reflected from the finger to the emitters. The screen position at the point

of contact is calculated from a measurement of the time interval between the

transmission of each wave and its reflection to the emitter.

Plotters: Plotters are used to produce large drawings. The most commonly

encountered plotters use ink pens to generate drawings, but many plotting devices

now employ laser beams, inkjet sprays, and electrostatic methods.

Drafting layouts and other drawings are typically generated with ink-jet or pen

plotters. A pen plotter has one or more pens mounted on a carriage, or crossbar, that

spans a sheet of paper. Pens with varying colors and widths are used to produce a

variety of shadings and line styles. Wet-ink, ballpoint, and felt-tip pens are all

possible choices for use with a pen plotter. Plotter paper can lie flat or be rolled onto

a drum or belt. Crossbars can be either movable or stationary, while the pen moves

back and forth along the bar. A clamp, vacuum, or an electrostatic charge holds the

paper in position.

Fig 1.21 A desk top pen plotter Fig 1.22 Roll feed pen plotter

Image scanners: Image scanner is used to store drawings, graphics, pictures or text

for computer processing. Once we have internal representation of a picture, we can

apply transformations to rotate, scale, or translate it. We can also apply various


image-processing methods to modify the array representation of the picture. For

scanned text input, various editing operations can be performed on the stored

documents. Some scanners are able to scan either graphical representations or text,

and they come in a variety of sizes and capabilities.

“ Detailed Notes on type of Scanners to be written” and diagrams

CHAPTER: 2 LINE AND CURVE DRAWING ALGORITHMS

2.1 CHARACTERISTICS OF A LINE DRAWING ALGORITHM

Lines should appear straight.

Lines should terminate accurately.

Lines should have equal brightness throughout their length.

Line density should be independent of line length and angle.

Lines must be drawn rapidly.

The basic idea behind all the line drawing algorithms is to reduce the computations

and provide the result rapidly. There are two well-known classes of line drawing

algorithms:

1. Digital Differential Analyzer (DDA).

2. Bresenham’s Algorithm.

2.2 DIGITAL DIFFERENTIAL ANALYZER (DDA):


This algorithm works on the principle of obtaining the successive pixel values based

on the differential equation governing that line. We know that for a straight line,

dy/dx = Constant

(i.e.) Δy /Δx =(y2-y1)/(x2-x1)

Hence we can obtain value of x or y by using its previous values. The commonly used DDA

techniques for straight-line scan conversion are:

Simple DDA.

Symmetric DDA.

Simple DDA:

In this method, we sample the line at unit intervals in one coordinate and determine

corresponding integer values nearest the line path for the other coordinate. Consider

a line with positive slope as shown in the figure2.1,

Fig 2.1 Positive Slope

If the slope is less than or equal to 1, we sample at unit x intervals (Δx=1) and

compute each successive y value as,

yi+1=yi + m .............................(1)

Subscript i takes integer values starting from 1, for the first point, and increases by 1 unit

until the end point is reached. Since m can be any real number 0 to 1, the calculated y

values must be rounded to nearest integer. For lines with a positive slope greater than 1, we

reverse the roles of x and y. That is we sample at unit y intervals (Δy=1) and calculate each

succeeding x value as,

xi+1=xi + 1/m ..............................(2)

The above calculations (1) & (2) are based on the assumption that lines are to be

processed from left end point to the right end point. If processing is reversed , so that

starting end point is at the right, then either we have Δx = -1 and,

yi+1= yi - m .........................………………..(3)

or (when slope is greater than 1), we have, Δy =-1 with

xi+1 = xi - 1/m .......................…………...(4)

These calculations (3) & (4) can also be used to calculate pixel positions along a line

with negative slope.


If the absolute value of the slope is less than 1 and start end point is at the left, we

set Δx=1, and calculate y values with equation (1). When the starting end point is at the right (for the same slope), we set Δx=-1 and obtain the coordinate positions from

equation (3). Similarly, when the absolute value of negative slope is greater than 1,

we use Δy = -1 and equation (4) or we use Δy = 1 and equation (2).

Algorithm:

1. Read the two end points (x1, y1) (x2, y2).

2. Approximate the length of the line:

if abs(x2-x1) > abs(y2-y1) then,

length = abs(x2-x1)

else

length = abs(y2-y1)

3. Calculate incremental values from starting position to generate next pixel.

xincr = Δx / length

yincr = Δy / length

4. Assign initial values to x, y

x = x1

y = y1

5. Plot the pixel at (x, y) position on screen using

Plot (round (x), round (y))

6. Calculate the values of x and y for the next pixel position.

x =x + xincr

y =y + yincr

7. Plot the pixel at x, y position.

8. Repeat steps 6 and 7 until the number of steps taken reaches the length of the line.
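A direct C transcription of the eight steps above might look as follows; plot() is a stand-in that only prints the pixel it would set:

/* Simple DDA line generator following the steps listed above. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static void plot(int x, int y) { printf("plot(%d, %d)\n", x, y); }

void dda_line(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1;

    /* Step 2: approximate the length as the larger coordinate extent. */
    int length = abs(dx) > abs(dy) ? abs(dx) : abs(dy);

    /* Step 3: per-step increments (the initial division noted in the text). */
    double xincr = (double)dx / length;
    double yincr = (double)dy / length;

    /* Steps 4-8: start at (x1, y1) and step 'length' times, rounding each point. */
    double x = x1, y = y1;
    plot((int)lround(x), (int)lround(y));
    for (int i = 0; i < length; i++) {
        x += xincr;
        y += yincr;
        plot((int)lround(x), (int)lround(y));
    }
}

int main(void) { dda_line(2, 3, 12, 8); return 0; }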

The simple DDA is as accurate as its symmetrical counterpart but generates a different sequence of dots because of its different method of estimating line length. Logically, it is simpler, except for the need to perform an initial division to determine the incrementing value. The simple DDA is an ideal basis for a software line generator,

but the number of divisions makes it less suited to hardware implementation.

Symmetrical DDA:

The DDA works on the principle that we simultaneously increment x and y by small steps proportional to the first derivatives of x and y. In the case of a straight line, the first derivatives are constant and are proportional to Δx and Δy. Thus in the ideal case of an infinite precision display, a line is generated by incrementing x and y by εΔx and εΔy, where ε is some small quantity. In the real world of limited precision


displays, we must generate only addressable points. This can be done by rounding to

the nearest integer after each incremental step. After rounding we display a dot at

the resultant x and y.

An alternative to rounding is the use of arithmetic overflow. x and y are kept in registers that have two parts, integer and fractional. The incrementing values, which are both less than 1, are repeatedly added to the fractional part, and whenever the result

overflows, the corresponding integer part is incremented. This would normally have

the effect of truncating rather than rounding so we initialize the DDA with the value

0.5 in each fractional part to achieve the true rounding. One advantage of this

arrangement is that it allows us to detect changes in x and y and avoids plotting the

same point twice.

Fig 2.2 Symmetrical DDA
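The following C sketch illustrates the arithmetic-overflow idea described above, keeping x and y in fixed-point registers (a 16-bit fractional part is an assumption made here for illustration), seeding the fractional parts with 0.5 and plotting only when the integer part changes:

/* Fixed-point DDA: integer + fractional parts held together in one register. */
#include <stdio.h>

#define FRAC_BITS 16
#define ONE       (1L << FRAC_BITS)

static void plot(int x, int y) { printf("plot(%d, %d)\n", x, y); }

void dda_fixed_point(int x1, int y1, int x2, int y2, int steps)
{
    /* Seed each fractional part with 0.5 so truncation behaves like rounding. */
    long x = ((long)x1 << FRAC_BITS) + ONE / 2;
    long y = ((long)y1 << FRAC_BITS) + ONE / 2;
    long xinc = ((long)(x2 - x1) << FRAC_BITS) / steps;
    long yinc = ((long)(y2 - y1) << FRAC_BITS) / steps;

    int px = (int)(x >> FRAC_BITS), py = (int)(y >> FRAC_BITS);
    plot(px, py);
    for (int i = 0; i < steps; i++) {
        x += xinc;
        y += yinc;
        /* Plot only when the integer part changes, avoiding duplicate points. */
        if ((int)(x >> FRAC_BITS) != px || (int)(y >> FRAC_BITS) != py) {
            px = (int)(x >> FRAC_BITS);
            py = (int)(y >> FRAC_BITS);
            plot(px, py);
        }
    }
}

int main(void) { dda_fixed_point(0, 0, 10, 4, 10); return 0; }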

2.3 BRESENHAM’S ALGORITHM:

An accurate and efficient raster line-generating algorithm was developed by

Bresenham. To illustrate Bresenham’s approach, we first consider the scan

conversion process for lines with positive slope less than 1. Pixel positions along a

line path are then determined by sampling at unit x intervals. Starting from the left

end point (x0,y0) of a given line , we step to each successive column (x position) and

plot the pixel whose scan line y value is closest to the line path. Assuming we have

determined that the pixel at (xk, yk) is to be displayed, we next need to decide which

pixel to plot in column xk+1. Our choices are the pixels at positions (xk+1,yk),

(xk+1,yk+1). At sampling position xk+1, we label vertical pixel separations from the

mathematical line path as d1 and d2. The y coordinate on the mathematical line at

pixel column position is calculated as,

y = m(xk+1)+b

then d1=y-yk = m(xk+1)+b-yk

and d2=(yk+1)-y = yk+1-m(xk+1)-b

The difference between these two separations is, d1-d2=2m(xk+1)-2yk+2b-1. Here

we consider a decision parameter Pk for the kth step in the line algorithm as,


Pk = Δx(d1 - d2)

= 2Δy·xk - 2Δx·yk + c

The decision parameter at the (k+1)th step is Pk+1 = Pk + 2Δy - 2Δx(yk+1 - yk);

the term yk+1 - yk is either 0 or 1, depending on the sign of parameter Pk.

The first parameter P0 is evaluated at the starting pixel position as P0 = 2Δy - Δx.


Fig. 2.3 Distance between pixel positions and the line at sampling position xk+1

Bresenham’s Line-Drawing Algorithm for m <1:

1. Input the two line end points and store the left end point in (x0, y0).

2. Load (x0, y0) into the frame buffer; that is, plot the first point.

3. Calculate constants Δx, Δy, 2Δy and 2Δy-2Δx, and obtain the starting value for the decision parameter as P0 = 2Δy - Δx.

4. At each xk along the line, starting at k=0, perform the following test:

If Pk < 0, the next point to plot is (xk+1, yk) and Pk+1 = Pk + 2Δy

Otherwise, the next point to plot is (xk+1, yk+1) and Pk+1 = Pk + 2Δy - 2Δx

5. Repeat step 4 Δx times.
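A C sketch of this algorithm for lines with slope 0 ≤ m < 1 (plot() is again a printing stand-in) is:

/* Bresenham line drawing for 0 <= slope < 1, with P0 = 2*dy - dx. */
#include <stdio.h>

static void plot(int x, int y) { printf("plot(%d, %d)\n", x, y); }

void bresenham_line(int x0, int y0, int x1, int y1)  /* assumes x0 < x1, 0 <= slope < 1 */
{
    int dx = x1 - x0, dy = y1 - y0;
    int p = 2 * dy - dx;                 /* starting decision parameter */
    int x = x0, y = y0;

    plot(x, y);                          /* step 2: plot the first point */
    while (x < x1) {                     /* step 5: repeat dx times */
        x++;
        if (p < 0) {
            p += 2 * dy;                 /* keep the same y */
        } else {
            y++;
            p += 2 * dy - 2 * dx;        /* move up one scan line */
        }
        plot(x, y);
    }
}

int main(void) { bresenham_line(20, 10, 30, 18); return 0; }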

Fig2.4 Pixel positions along a line path

2.4 CIRCLE GENERATION ALGORITHM:

A circle is defined as the set of points that are at given distance r from a center

position (xc, yc). Basically the circle can be drawn using the symmetry property.

Suppose a calculated point is (x,y) then we can obtain 8 different points as,


Fig 2.5 8-point Symmetry of A Circle

Bresenham’s Circle Drawing Algorithm:

This is the most efficient method of drawing the circle since,

It avoids the use of trigonometric and power functions

Performs the computations with integers and simple additions, subtractions

It works on the basic principle of obtaining the next pixel value by means of decision

variable and generates the circle with 8-point symmetry property. At each step, when

we obtain a new pixel point there are 3 possibilities:

The point may be on the circle

The point may be outside the circle

The point may be within the circle

Let p1 be a point that has already been plotted correctly. The next computation may yield either point p2 or point p3. Starting from p1, p2 is reached by incrementing x only, while p3 is reached by incrementing x and decrementing y, which pulls the pixel back towards the inside of the circle. The decision variable tells us which move to make: if the newly computed point would fall outside the circle, then we increment the x value and decrement the y value of the previously plotted point (choosing p3); otherwise we only increment the x value (choosing p2). A sketch of the resulting algorithm is given below.
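The notes do not give the decision-variable formulas, so the sketch below uses the standard Bresenham/midpoint circle formulation, which follows exactly the rule stated above: step x each time, decrement y whenever the decision variable indicates the next point would fall outside the circle, and plot all 8 symmetric points at each step:

/* Bresenham/midpoint circle sketch with 8-point symmetry. */
#include <stdio.h>

static void plot(int x, int y) { printf("plot(%d, %d)\n", x, y); }

static void plot8(int xc, int yc, int x, int y)   /* 8-point symmetry */
{
    plot(xc + x, yc + y); plot(xc - x, yc + y);
    plot(xc + x, yc - y); plot(xc - x, yc - y);
    plot(xc + y, yc + x); plot(xc - y, yc + x);
    plot(xc + y, yc - x); plot(xc - y, yc - x);
}

void circle(int xc, int yc, int r)
{
    int x = 0, y = r;
    int d = 3 - 2 * r;                    /* decision variable */

    while (x <= y) {
        plot8(xc, yc, x, y);
        if (d < 0) {                      /* next point stays inside: keep y */
            d += 4 * x + 6;
        } else {                          /* next point would be outside: decrement y */
            d += 4 * (x - y) + 10;
            y--;
        }
        x++;
    }
}

int main(void) { circle(50, 50, 20); return 0; }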


Fig. 2.6 A Circle

2.5 ELLIPSE GENERATION ALGORITHM:

The ellipse like the circle shows symmetry. However, the symmetry is 4-way rather

than 8-way. There are 2 methods of mathematically defining an ellipse.

Polynomial defining an ellipse

Trigonometric method of defining an ellipse

Fig. 2.7 4-point Symmetry of an Ellipse

Bresenham’s Ellipse Generation Algorithm:

This is an incremental method for scan converting an ellipse that is centered at the

origin in standard position. It works in a way that is very similar to the midpoint circle

algorithm. Because of the 4-way symmetry property we need only consider the elliptical curve in the first quadrant. We define a function f, which can be

used to decide if the midpoint between two candidate pixels is inside or outside the

ellipse.

f(x, y) = b²x² + a²y² - a²b² = k (say)

if k<0 then (x,y) lies inside the ellipse


if k=0 then (x,y) on the ellipse

if k>0 then (x,y) outside the ellipse
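A small C sketch of this test function, used to classify a candidate point (or midpoint) against the ellipse:

/* f(x, y) = b^2 x^2 + a^2 y^2 - a^2 b^2: negative inside, zero on, positive outside. */
#include <stdio.h>

long ellipse_f(long a, long b, long x, long y)
{
    return b * b * x * x + a * a * y * y - a * a * b * b;
}

int main(void)
{
    long a = 8, b = 6;                        /* semi-axes */
    printf("%ld\n", ellipse_f(a, b, 0, 0));   /* negative: centre is inside */
    printf("%ld\n", ellipse_f(a, b, 8, 0));   /* zero: on the ellipse */
    printf("%ld\n", ellipse_f(a, b, 9, 9));   /* positive: outside */
    return 0;
}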

2.6 SCAN CONVERSION AND GENERATION OF DISPLAY:

In order to display the rasterized image using video techniques, it is necessary to

organize the picture into the precise pattern required by the graphic display. This process is called scan conversion. In contrast to the display list for a random scan or line drawing display, which contains only information about lines or characters, in a raster scan the information for every pixel must be organized and presented at the appropriate frame rate. Three ways of accomplishing this are:

Real Time Scan Conversion.

Run length Encoding

Frame buffer memory

2.7 ALIASING AND ANTIALIASING:

Aliasing: Many raster displays allow only two pixel states (on/off). On these displays, lines may have a jagged or staircase appearance when they step from one

row or column to the next. This phenomenon is called aliasing. Fundamentally the

appearance of the aliasing effect is due to the fact that lines, polygon edges, color boundaries, etc., are continuous whereas a raster device is discrete. Aliasing is a fact

of any computer image, which depends on a grid of finite number of pixels in the

horizontal and vertical direction.

Fig.2.8 Aliasing Effect

Antialiasing:

Minimizing aliasing effect is called antialiasing. One approach to reduce aliasing

effect is increasing the resolution. Other generally used antialiasing techniques are:

Area Sampling

Super sampling

Pixel phasing


In area sampling technique, we superimpose a pixel grid pattern onto the continuous

object definition. In the supersampling method, we subdivide every pixel into subpixels and each subpixel is tested against the object to be scan converted. In the pixel phasing method,

the graphic system is capable of shifting individual pixels.

2.8 IMAGE COMPRESSION:

Two typical techniques for image compression are run-length encoding and area

image compression. Both take advantage of image coherence.

Run length Encoding:

Run length encoding seeks to take advantage of the fact that large areas of the

picture have the same intensity or color. In its simplest form, run-length encoding specifies only the number of successive pixels on a given scan line with a given intensity. Fig 2.9 shows a simple monochrome (black and white) line drawing on a 30×30 raster and the associated encoding for scan lines 1 and 30. The encoded data

is to be considered in groups of two. The first is the number of successive pixels on

that scan line with a given intensity, i.e., the count, and the second is the intensity.

Thus, in fig 2.9(a), scan line 1 has 30 pixels of zero intensity, i.e., white or the

background intensity. The complete picture is encoded with 208 numbers.

Fig. 2.9 (a) A simple monochrome line drawing on a 30×30 raster, with the run-length encoding of scan lines 1 and 30

Fig 2.9 (b) Run-length encoded image (solid figure), with the encoding of scan lines 1, 15, and 30

Pixel by pixel storage, i.e., one piece of information for each pixel, requires 900

intensity values for the 30×30 raster of fig 2.9(a). The data compression using run-

length encoding in this case is 1:4.33, or 0.231.

Solid figures are easily handled with run-length encoding. This is shown in fig 2.9(b), along with the encoding for scan lines 1, 15, and 30. Of particular interest is scan line 15.

For fig 2.9(b) the entire picture is encoded using 136 numbers for a data compression

of 1:6.62, or 0.151.

This simple run-length encoding scheme is easily extended to include color. For color,

the intensity of each of the red, blue and green color guns is given, preceded by the

number of successive pixels for that color on that scan line.

Run length Red intensity Green intensity Blue intensity

E.g. For simple color display in which each individual color gun is either completely

off (0) or fully on (1), the encoding for scan line 15 of fig 2.9(b), with a yellow figure on a

blue background, is

15 0 0 1 7 1 1 0 8 0 0 1

Good run-length encoding schemes identify situations that result in negative data

compression and adaptively switch to pixel-by-pixel storage. A simple adaptive run-

length encoding scheme uses the sign of the number of encoded pixels to switch

between run-length encoding and pixel-by-pixel storage. Specifically, if the sign of

the count is negative, then the pixel-by-pixel data follows; otherwise run-length

encoded data follows. In both cases the magnitude of the count indicates the number

of pixel intensities, or triplets of RGB intensities, that follow. For example,

-3 1 0 0 0 1 0 1 0 0 6 1 2 6

defines three pixels with RGB values of (1,0,0), (0,1,0), (1,0,0), followed by six pixels

with RGB values of (1,2,6).
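As an illustration of the basic (count, intensity) scheme, the following C sketch encodes one monochrome scan line; the helper name and array sizes are assumptions for the example:

/* Run-length encode one scan line into (count, intensity) pairs. */
#include <stdio.h>

/* Returns the number of values written to 'out'. */
int rle_encode_scanline(const unsigned char *pixels, int n, int *out)
{
    int m = 0;
    for (int i = 0; i < n; ) {
        int run = 1;
        while (i + run < n && pixels[i + run] == pixels[i])
            run++;
        out[m++] = run;          /* count     */
        out[m++] = pixels[i];    /* intensity */
        i += run;
    }
    return m;
}

int main(void)
{
    /* Scan line 15 of the solid-figure example: 15 background, 7 figure, 8 background. */
    unsigned char line[30];
    int encoded[60], m;
    for (int i = 0; i < 30; i++)
        line[i] = (i >= 15 && i < 22) ? 1 : 0;

    m = rle_encode_scanline(line, 30, encoded);
    for (int i = 0; i < m; i++)
        printf("%d ", encoded[i]);
    printf("\n");
    return 0;
}

For this example the program prints 15 0 7 1 8 0, i.e., 15 background pixels, 7 figure pixels and 8 background pixels.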

Area Image Compression:

Area image compression techniques attempt to take the advantage of area

coherence, i.e. the tendency of pixels in adjacent areas to have the same

characteristic. Area coherence fundamentally treats the image as 2D. Consider the

image in fixed-size pixel blocks taken in scan line order. Although pixel blocks can be


of any reasonable size, e.g., 2×2, 4×4, 8×8, 16×16, 32×32, etc., we concentrate here on 8×8 pixel blocks.

First we determine the average intensity value in each 8×8 block by adding all the

intensity values, dividing by 64 and truncating to obtain an integer number. The

average intensity is then subtracted from each pixel intensity to yield the differential

image. The run-length encoding scheme discussed in the previous section is now

applied to each row of the differential images.
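A C sketch of this first stage, assuming the image dimensions are multiples of 8, computes the truncated block averages and the differential image:

/* Block averaging and differential image for area image compression. */
#include <stdio.h>

#define W   16
#define H   16
#define BLK 8

void differential_image(const int in[H][W], int avg[H / BLK][W / BLK], int diff[H][W])
{
    for (int by = 0; by < H; by += BLK) {
        for (int bx = 0; bx < W; bx += BLK) {
            long sum = 0;
            for (int y = 0; y < BLK; y++)
                for (int x = 0; x < BLK; x++)
                    sum += in[by + y][bx + x];
            int a = (int)(sum / (BLK * BLK));          /* truncated block average */
            avg[by / BLK][bx / BLK] = a;
            for (int y = 0; y < BLK; y++)
                for (int x = 0; x < BLK; x++)
                    diff[by + y][bx + x] = in[by + y][bx + x] - a;   /* differential image */
        }
    }
}

int main(void)
{
    int in[H][W], avg[H / BLK][W / BLK], diff[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            in[y][x] = (x < W / 2) ? 10 : 200;         /* two flat regions */
    differential_image(in, avg, diff);
    printf("block averages: %d %d %d %d\n", avg[0][0], avg[0][1], avg[1][0], avg[1][1]);
    return 0;
}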


2.9 HALF TONING:

Many displays and hard copy devices are bi-level. They can only produce two

intensity levels. In such displays or hard copy devices we can create an apparent

increase in the number of available intensities. This is achieved by incorporating

multiple pixel positions into the display of each intensity value. The eye views a very small area and records only the overall intensity of the area. This phenomenon of

apparent increase in the number of available intensities by considering combined

intensity of multiple pixels is known as half toning. The half toning is commonly used

in printing black and white photographs in newspapers, magazines and books. The

pictures produced by half toning process are called half tones.

In computer graphics, half tone reproductions are approximated using rectangular

pixel regions; say 2 × 2 pixels or 3 × 3 pixels. These regions are called halftone

patterns or pixel patterns. The following figures show the halftone patterns to create

number of intensity levels.

Fig. 2.10a 2×2 Pixel Patterns for creating five Intensity Levels

Half toning technique can also be used to increase the number of intensity options on

systems that are capable of displaying more than two intensities per pixel. The

following figure shows 3×3 pixel patterns for creating 10 intensity levels.


Fig.2.10b 3×3 Pixel Patterns for creating ten Intensity Levels
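Since the figures themselves are not reproduced here, the following C sketch uses its own illustrative 2×2 patterns simply to show how an intensity level 0-4 is expanded into a block of on/off pixels (five levels from four pixels):

/* Illustrative 2x2 halftone patterns: one table entry per intensity level. */
#include <stdio.h>

static const int pattern[5][2][2] = {
    { {0, 0}, {0, 0} },   /* level 0: all pixels off      */
    { {1, 0}, {0, 0} },   /* level 1: one pixel on        */
    { {1, 0}, {0, 1} },   /* level 2: two pixels on       */
    { {1, 1}, {0, 1} },   /* level 3: three pixels on     */
    { {1, 1}, {1, 1} },   /* level 4: all four pixels on  */
};

int main(void)
{
    for (int level = 0; level < 5; level++) {
        printf("level %d:\n", level);
        for (int r = 0; r < 2; r++)
            printf("  %d %d\n", pattern[level][r][0], pattern[level][r][1]);
    }
    return 0;
}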

CHAPTER 3: POLYGONS

3.1 INTRODUCTION:

A polygon is a figure with many sides. A polygon can be represented as a group of

connected edges, forming a closed figure. The line segments, which form the

boundary of a polygon, are called as edges or sides. The point of intersection of two

edges is called a vertex of the polygon. Polygons can be divided into two types

namely, convex polygons and concave polygons.

Fig 3.1 Polygons

Convex Polygon: A Convex polygon is a polygon in which the line segment joining

any two points within the polygon lies completely inside the polygon.

Concave Polygon: A Concave polygon is a polygon in which the line segment

joining any two points within the polygon may not lie completely inside the polygon.

Regular polygons are those in which all edges have equal length and the internal/external angles between any two connected edges are the same. Irregular polygons are those in which the edge lengths differ or the angles between connected edges differ.


Fig 3.2a Convex Polygons Fig 3.2b Concave Polygon

3.2 POLYGON REPRESENTATION:

The information required to specify the polygon is the number of sides and the

coordinates of the vertex points. To add polygon to our graphic system, we must

decide how to represent it. There are three approaches to represent polygons

according to the graphics system.

Polygon drawing primitive

Trapezoidal Primitive

Lines and Points

Some graphic devices support polygon drawing primitive approach. They can directly

draw the polygon shapes. On such devices, polygons are saved as a unit. Some

graphic devices support trapezoid primitive. In such devices, trapezoids are formed

from two scan lines and two line segments as shown in figure. Here trapezoids are

drawn by stepping down the line segments with two vector generators and, for each scan line, filling all the pixels between them. Therefore every polygon is broken up into

trapezoids and it is represented as a series of trapezoids.

Fig 3.3 Representation of Polygon

Many other graphic devices do not provide any polygon support at all. In such cases

lines and points represent polygons. A polygon is represented as a unit and stored in

the display file. In a display file, a polygon cannot be stored with a series of line commands alone, because these do not specify how many of the following line commands are part of the polygon. Therefore a new command is used in the display file to represent polygons; the opcode of this new command itself specifies the number of line segments in the polygon. The following figure shows a polygon and its representation.


Fig 3.4 Polygon and its Representation using Display File
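A minimal sketch of such a display-file entry is given below. The entry layout (opcode, x, y), the use of opcode 2 for a line command, and the helper name enter_polygon are illustrative assumptions rather than any particular system's format; the point is only that the polygon command's opcode carries the number of sides, so a reader of the display file knows how many of the following line commands belong to the polygon.

/* Illustrative display-file entry: an opcode plus a coordinate pair.
   In this sketch, opcode 2 means "draw line", and an opcode greater
   than 2 marks a polygon command whose value is the number of sides. */
typedef struct { int opcode; float x, y; } DFEntry;

/* Append a polygon of n vertices to the display file; returns new length. */
int enter_polygon(DFEntry df[], int len, const float vx[], const float vy[], int n)
{
    df[len].opcode = n;            /* polygon command: opcode = side count */
    df[len].x = vx[n - 1];         /* start point is the last vertex, so   */
    df[len].y = vy[n - 1];         /* the n line commands close the figure */
    len++;
    for (int i = 0; i < n; i++) {  /* one line command per edge */
        df[len].opcode = 2;        /* 2 = draw-line command in this sketch */
        df[len].x = vx[i];
        df[len].y = vy[i];
        len++;
    }
    return len;
}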

Entering Polygons:

The polygon can be drawn by moving the pen to the first vertex (x1, y1) and then

moving to second vertex and so on, up to the nth vertex. Finally, we have to draw a

line from last vertex to the first vertex.

Fig 3.5 Polygon Drawing


3.3 INSIDE OUTSIDE TEST:

Once the polygon is entered in the display file, we can draw the outline of the polygon. To show the polygon as a solid object, we have to brighten all its inside pixels. Now the

question is how to determine whether a pixel is inside or outside the polygon.

There are two methods of finding out whether a pixel is inside the polygon or outside

the polygon: 1) Even-Odd method and 2) Winding Number method

Even-Odd Method:

To test whether a pixel is inside or outside the polygon, construct a line segment between the point in question and a point known to be outside the polygon. To find a point outside the polygon, select a point with x coordinate smaller than the smallest x coordinate of the polygon's vertices. Now count how many intersections of the line segment with the polygon boundary occur. If there is an odd number of intersections then the point in question is inside, otherwise it is outside.

Fig 3.6 Even – Odd Test

If the intersection point is a vertex of the polygon, then we have to look at the other end points of the two segments which meet at this vertex. If these points lie on the same

side of the constructed line then the point in question counts as an even number of

intersections. If they lie on opposite sides of the constructed line then the point is

counted as single intersection (odd).

Fig 3.7 Count of vertex Intersections
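The even-odd test can be written compactly as below; this is a sketch assuming the polygon is stored as parallel arrays of vertex coordinates, and it casts the test ray horizontally to the left of the point, as suggested above. The half-open comparison on y makes a ray that passes exactly through a vertex count consistently.

#include <stdbool.h>

bool inside_even_odd(const float vx[], const float vy[], int n, float px, float py)
{
    bool inside = false;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* Does edge (j -> i) cross the horizontal line y = py? */
        if ((vy[i] > py) != (vy[j] > py)) {
            /* Compute the x of the crossing and test if it lies left of the point. */
            float xcross = vx[j] + (py - vy[j]) * (vx[i] - vx[j]) / (vy[i] - vy[j]);
            if (xcross < px)
                inside = !inside;   /* an odd number of crossings => inside */
        }
    }
    return inside;
}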

Winding Number Method:

In this method, we stretch a piece of elastic between the point in question and a point on the polygon boundary. The end attached to the polygon is slid along the boundary until it has made one complete round. Then we examine the point in question to see how many times the elastic has wound around it. If it has wound around it at least once, the point is inside; if there is no net winding, the point is outside. The winding number for a point is calculated by picturing a line segment running from outside the polygon to the point in question and considering the polygon sides which it crosses: each side crossed contributes +1 or -1 to the winding number depending on the direction in which that side crosses the constructed segment.


Fig 3.9 Calculating winding Numbers

3.4 Polygon filling:

Highlighting all inside pixels of a polygon is called polygon filling. There are two basic

approaches used to fill the polygon: Seed filling and Scan line filling.

3.4.1 Seed Filling (Area or Region filling):

In this method, to fill a polygon we start from a given known point (called the seed point) and highlight outward from this point (i.e. brighten neighboring pixels) until

we encounter the boundary pixels. The seed fill algorithms are classified as:

1) Flood fill algorithm and 2) Boundary fill or Edge fill

Flood fill: Sometimes it is required to fill in an area that is not defined within a single color boundary. In such cases we can fill areas by replacing a specified interior

color instead of searching for a boundary color. This approach is called flood fill

algorithm.

Algorithm:

void floodfill(int x, int y, int fillcolor, int oldcolor)
{
    if (getpixel(x, y) == oldcolor)              /* pixel still has the old interior color */
    {
        putpixel(x, y, fillcolor);               /* repaint it with the fill color */
        floodfill(x + 1, y, fillcolor, oldcolor);    /* recurse on the four   */
        floodfill(x - 1, y, fillcolor, oldcolor);    /* connected neighbours  */
        floodfill(x, y + 1, fillcolor, oldcolor);
        floodfill(x, y - 1, fillcolor, oldcolor);
    }
}

Boundary fill: In the boundary fill algorithm, we start at a point inside the interior

of the polygon and move toward the boundary. Boundary defined regions may be

either 4 connected or 8 connected.


Fig 3.10 Area defined within the color boundary (a) is partially filled in (b) using a 4-connected boundary-fill algorithm.

Algorithm:

void boundaryfill(int x, int y, int fill, int boundary)
{
    int current = getpixel(x, y);
    if ((current != boundary) && (current != fill))   /* neither boundary nor already filled */
    {
        putpixel(x, y, fill);                         /* fill this pixel */
        boundaryfill(x + 1, y, fill, boundary);       /* recurse on the four  */
        boundaryfill(x - 1, y, fill, boundary);       /* connected neighbours */
        boundaryfill(x, y + 1, fill, boundary);
        boundaryfill(x, y - 1, fill, boundary);
    }
}

3.4.2 Scan Line Filling:

Seed fill algorithms are well suited for small polygons; they cannot be used for large polygons because the recursive procedures require stacking of neighboring points, and the stack space may not be sufficient. Another difficulty with seed fill algorithms is that if some inside pixels are already displayed in the fill color, the recursion terminates at them,

leaving further internal pixels unfilled. To overcome these difficulties, we consider

more efficient method, which fills horizontal pixel spans across scan lines, instead of

proceeding to 4-connected neighboring points. For each scan line crossing a polygon,

this algorithm locates the intersection points of the scan line with the polygon edges.

These intersection points are then sorted from left to right, and the corresponding

positions between each intersection pair are set to the specified fill color. The scan

line algorithm first finds the largest and smallest y value of the polygon. It then starts

with largest y value and works its way down, scanning from left to right.

Fig 3.11 Filling of interior pixels along the scan line

The main task in scan line algorithm is to find the intersection points of the scan line

with polygon boundary. When intersection points are even, they are sorted from left

to right, paired and pixels between paired points are set to fill color. When a scan line

intersects a polygon vertex, we must look at the other end points of the two line

segments of the polygon, which meet at this vertex. If these points lie on the same

side of scan line, then the point in question counts as an even number of

intersections. If they lie on opposite sides of the scan line, then the point is counted

as a single intersection. This is shown in the following figure.

Fig 3.12 Intersection points along the scan line

Scan line 1 passes through a vertex of the polygon, but the other end points of the two edges meeting at that vertex lie on the same side of the scan line, so the vertex counts as an even number of intersections and the scan line generates the even intersection count 1, 2, 3, 4. These intersections can be paired to correctly identify the interior pixels. Scan line 2 also generates an even number of intersections, paired as 1->2 and 3->4. Scan line 3 also passes through a vertex of the polygon, but here the other end points lie on opposite sides of the scan line, so that vertex counts as only a single intersection. The pairs are again 1->2 and 3->4.


Now we have to calculate the x-intersection points of the scan line with every polygon side. We can simplify these calculations by using coherence properties. A coherence property relates one part of a scene to another part of the scene. Using this property we can determine the x intersection value on the lower scan line if the x intersection value for the current scan line is known. This is given as,

m = (yi+1 - yi) / (xi+1 - xi)

xi+1 = xi - (1/m)

As we scan from top to bottom, the y value changes by one unit between successive scan lines,

yi+1 = yi - 1

Fig 3.13 Polygon edges intersected by a scan line

Many times it is not necessary to compute x-intersections of the scan line with every polygon side. We need to consider only the polygon sides which intersect the current scan line, as shown in the above figure. It is easier to identify which polygon sides should be tested for x-intersection if we first sort the sides in order of their maximum y value.

Once the sides are sorted we can process the scanline from top of the polygon to its

bottom producing an active edge list for each scan line crossing the polygon

boundaries. The active edge list for a scanline contains all edges crossed by that

scanline. The following figure shows sorted edges of the polygon with active edges.

Fig 3.14 Sorted edges of the polygon with active edges (edges sorted from top to bottom: DC, DE, BC, BA, AI, GF, GH, EF, IH).

Algorithm:

1. Read the number of vertices.

2. Store vertices of a polygon in the array x[n] and y[n].

3. Find largest y value as ymax and lowest y value as ymin.

4. Store the initial values x1, y1 and final values x2, y2 of the two endpoints, and the x increment Δx, for each edge in the array edges[n][4]. While doing this, check that y1 > y2; if not, interchange y1 and y2 and the corresponding x1 and x2 so that for each edge, y1 represents its maximum y-coordinate and y2 represents its minimum y-coordinate.

5. Sort the rows of array, edges [n][4] in descending order of y1, descending order of y2 and

ascending order of x2.

6. Assign ymax to y (y=ymax).

7. If (y > y2) and (y <= y1) then the edge is active; otherwise the edge is not active.

8. Compute the x-intersects for all active edges for the current y-value. The x-intersect for the successive y-value is given as xi+1 = xi + Δx, where xi is the initial x-intersect, Δx = -1/m and m = (y2 - y1)/(x2 - x1).
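The coherence property in step 8 amounts to one addition per active edge per scan line. A small sketch, with an illustrative edge record rather than the edges[n][4] array of the algorithm:

/* Illustrative active-edge record for the incremental x update. */
typedef struct {
    float ytop, ybottom;   /* maximum and minimum y of the edge        */
    float x;               /* x intersection on the current scan line  */
    float dx;              /* change in x per unit step down in y: -1/m */
} ActiveEdge;

/* Advance all active edges from the current scan line to the next lower one. */
void step_scanline(ActiveEdge e[], int count)
{
    for (int i = 0; i < count; i++)
        e[i].x += e[i].dx;     /* x(i+1) = x(i) + dx, with y(i+1) = y(i) - 1 */
}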


CHAPTER 4: 2D TRANSFORMATIONS

A computer graphics system allows the programmer to define objects and alter or

manipulate the objects. This simulated manipulation of objects is referred to as

transformation. Transformation allows us to uniformly alter the entire picture.

4.1 Transformations:

The basic transformations are: Translation, Scaling, and Rotation.

Other transformations often applied to objects, are: Reflection and shear

Translation:

Translation is a process of changing the position of an object. Let P be a point with the coordinates (x, y). We wish to move this point P to a new position, say P1, having the coordinates (x1, y1).

Fig 4.1

In the above figure, P has been moved by some quantity in x-direction as well as y-

direction. Let tx, ty be the quantities by which we have moved in x and y directions

respectively. Thus we can write,

x1 = x + tx

y1 = y + ty


The object position is changed according to translation values tx and ty. We can

arrange transformation matrix elements to get displacement according to tx and ty

values as,

1 0 0

T= 0 1 0

tx ty 1

For the translation of above point P, we can write it as ,

1 0 0

[x1, y1, 1] = [x,y,1] 0 1 0

tx ty 1

Scaling:

Scaling is the process of expanding or compressing the dimensions of an object.

Constants Sx and Sy are used to describe changes in length with respect to the x-

direction and y-direction respectively. A scaling constant greater than one indicates

expansion of size and less than one indicates compression of size.

Fig 4.2 Scaling

Let P1 be a point with coordinates (x1, y1) and that Sx, Sy be the scaling parameters in

x and y directions respectively. We can obtain the coordinate of the scaled object P2

as, x2 = x1* Sx

y2 = y1* Sy

The new point P2 is calculated by multiplying P1 with transformation matrix T,

P2 = P1 * T

We can represent this in a matrix form as,

P1 = [x1, y1]

P2 = [x2, y2]

Assume transformation matrix as unit matrix of size 2×2, then

[x2,y2] = [x1,y1] 1 0

0 1


[x2, y2]= [x1, y1] * T

There is no change for the point.

Again, [x2, y2] = [x1, y1] 2 0

0 1

[x2,y2] = [2x1, y1]

Now, x-direction length is doubled and no change in y-direction.

Consider [x2,y2] = [x1,y1] 1 0

0 0.5

[x2, y2] = [x1, 0.5y1]

No change in x-direction, but y-direction length is reduced to half.

Now, [x2,y2] = [x1, y1] 2 0

0 2

[x2,y2] = [2x1,2y1]

Both x and y direction lengths are doubled.

By considering the above observations we can construct a scaling transformation matrix as,

S = Sx  0
    0   Sy

Any positive numeric value is valid for the scaling factors Sx and Sy. Values less than 1

reduce the size of the image and values greater than 1 produce an enlarged image.

For both Sx and Sy equal to 1, the size of image does not change.

Rotation Transformation:

We need to rotate the object to view from a different angle on the screen. Rotation

can be applied with respect to origin and with respect to any arbitrary point.

Fig 4.3 Rotation with respect to Origin.

Rotation transformation matrix with respect to origin in anti-clockwise direction is,

R = cosθ sinθ

-sinθ cosθ


Fig 4.4 Rotation of line segment with angle θ

4.2 HOMOGENEOUS COORDINATES AND TRANSLATION:

Moving an image on the display surface is called translation. It can be done by

adding to each point a value, by which amount we need to shift the image. In

general, to translate an image to the right and up by (tx,ty), every point (x1,y1) is

replaced by a new point (x2,y2) where,

x2=x1+tx, y2=y1+ty

In order to represent scaling, rotation and translation in a single matrix, we need

homogeneous coordinates, which use 3x3 matrices instead of 2x2 matrices. Points are specified by three coordinates instead of two, by introducing a third, dummy coordinate w. We represent a point in homogeneous form as (xw, yw, w). The (x, y) coordinates can then be obtained by dividing the first and second components by w. In homogeneous coordinates, translation is performed as,

1 0 0

(xw,yw,w) 0 1 0 = [ ((xw+txw) (yw+tyw) w) ]

tx ty 1

Here dividing by third coordinate w, the point reduces to,

[x+tx y+ty 1]

Homogeneous rotation with respect to origin in counter clockwise direction is,

cosθ sinθ 0

(xw, yw, w) -sinθ cosθ 0 = [(xwcosθ + ywsinθ) (xwsinθ- ywcosθ) w]

0 0 1

Dividing by third coordinate w, we get

[ x cosθ + y sinθ x sinθ-y cosθ 1 ]

Homogeneous scaling transformation matrix is,

Sx 0 0

(xw, yw, w) 0 Sy 0 = [ Sxxw Syyw w ]

0 0 1

Dividing by the third coordinate w, we get,

[Sx x Syy 1]

Multiplying a homogeneous vector by any of the matrices shown will always yield a homogeneous vector; after dividing by w it has a 1 at the end, which we can discard from the result.
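A minimal sketch of applying such a 3x3 homogeneous matrix to a point, using the row-vector convention [x y 1] * M of this chapter (the function names are illustrative):

#include <stdio.h>

void transform_point(const float m[3][3], float x, float y, float *xo, float *yo)
{
    float hx = x * m[0][0] + y * m[1][0] + m[2][0];   /* [x y 1] * M, column by column */
    float hy = x * m[0][1] + y * m[1][1] + m[2][1];
    float w  = x * m[0][2] + y * m[1][2] + m[2][2];
    *xo = hx / w;                                     /* divide by w to recover (x, y) */
    *yo = hy / w;
}

int main(void)
{
    /* Translation by (tx, ty) = (3, 2) placed in the bottom row, as above. */
    float T[3][3] = {{1, 0, 0}, {0, 1, 0}, {3, 2, 1}};
    float x, y;
    transform_point(T, 5, 5, &x, &y);
    printf("(5,5) translated by (3,2) -> (%.1f, %.1f)\n", x, y);  /* prints (8.0, 7.0) */
    return 0;
}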


4.3 ROTATION ABOUT AN ARBITRARY POINT:

To rotate an object about an arbitrary point (xa,ya), the following steps are performed:

1. Translate the arbitrary point (xa,ya) to the origin.

2. Rotate the object about the origin and

3. Finally, translate back the object to the original position

Applying these transformations sequentially, we get rotation transformation matrix

about an arbitrary point. Matrix multiplication is not commutative (A·B ≠ B·A). Therefore

in obtaining composite transformation matrix, we must be careful to order the

matrices so that they correspond to the order of the transformation on the object.

First, translation matrix which moves point (xa,ya) to origin is,

1 0 0

T1 = 0 1 0

-xa -ya 1

Second, the transformation matrix for rotation about origin in counter clockwise

direction is,

cosθ sinθ 0

R = -sinθ cosθ 0

0 0 1

Third, transformation matrix to translate back to the original position is,

1 0 0

T2 = 0 1 0

xa ya 1

The composite transformation matrix for rotation about the arbitrary point is:

RT = T1 * R * T2

Fig 4.5 Rotation about an arbitrary point
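A sketch of building this composite matrix in code, multiplying T1, R and T2 in that order under the row-vector convention of this chapter (the helper names are illustrative):

#include <math.h>
#include <string.h>

void mat_mul(const float a[3][3], const float b[3][3], float out[3][3])
{
    float r[3][3] = {{0}};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                r[i][j] += a[i][k] * b[k][j];
    memcpy(out, r, sizeof(r));
}

void rotate_about_point(float theta, float xa, float ya, float out[3][3])
{
    float c = cosf(theta), s = sinf(theta);
    float T1[3][3] = {{1,0,0},{0,1,0},{-xa,-ya,1}};   /* move (xa,ya) to the origin */
    float R [3][3] = {{c,s,0},{-s,c,0},{0,0,1}};      /* rotate about the origin    */
    float T2[3][3] = {{1,0,0},{0,1,0},{ xa, ya,1}};   /* move back                  */
    float tmp[3][3];
    mat_mul(T1, R, tmp);
    mat_mul(tmp, T2, out);                            /* composite = T1 * R * T2    */
}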

4.4 OTHER TRANSFORMATIONS:

Other transformations useful in certain applications are:


Reflections.

Shear.

Reflections:

Reflection is a transformation that produces a mirror image of an object relative to

an axis of reflection. We can choose any axis of reflection in the x-y plane or

perpendicular to the x-y plane. Different types of reflections are:

Reflection about x-axis: In this, mirror image of given object is taken

with respect to x-axis as shown in the following figure.

Fig 4.6 Reflection of an object about X-axis

The transformation matrix for reflection about x-axis (x=x, y=-y) is:

1 0 0

Rx = 0 -1 0

0 0 1

Reflection about y-axis: In this mirror image of the given object is obtained as

shown in the figure:

Fig 4.7 Reflection of an Object about Y-axis

The transformation matrix for reflection about y-axis (x=-x, y=y) is:

-1 0 0

Ry= 0 1 0

0 0 1

Reflection about origin: In this, reflection takes place diagonally (x=-x, y=-y).


Fig 4.8 Reflection of an Object about Origin

The transformation matrix is:

-1 0 0

Ro = 0 -1 0

0 0 1

Reflection about the line y=x : figure below shows reflection in the line y = x.

Fig 4.9 Reflection in the line y=x

Transformation matrix for reflection in the line y=x is derived as follows:

1. Rotate the line y = x about the origin in the clockwise direction by 45°

2. Take reflection about the x axis

3. Again rotate it in the anticlockwise direction by the same angle

0 1 0

Ry=x = 1 0 0

0 0 1

Reflection about line y=-x: figure below shows reflection in the line y = -x.

Fig 4.10 Reflection in the line y=-x

Transformation matrix for reflection in the line y=-x is derived as follows:


1. Rotate the line y = -x about the origin in the clockwise direction by 45°

2. Take reflection about the y axis

3. Again rotate about the origin in the anticlockwise direction by 45°

0 -1 0

Ry=-x = -1 0 0

0 0 1

Reflection about a line y=mx+c: It is done by the following steps.

Slope = m, y-intercept = c

We can relate slope m to angle θ by the equation,

m = tan θ, θ = tan-1(m),

where θ is the inclination of the line with the x-axis.

1. Translate the line by tx = 0, ty = -c so that the line passes through the origin.

2. Rotate by an angle θ (where tan θ = m) about the origin.

3. Take reflection about the x or y axis depending upon the direction of rotation.

4. Now rotate again in the reverse direction.

5. Retranslate with tx = 0 and ty = c.

Ry=mx+c = T * R * Rx * R-1 * T-1

Shear Transformations:

A transformation that slants the shape of an object is called the shearing

transformation. A shearing transformation distorts the shape of an object along the x-axis or y-axis. Commonly used shearing transformations are,

X-shear

Y-shear

X-Shear:

The X-shear preserves the y coordinates but changes the x values which causes

vertical lines to tilt right or left as shown in the following figure…

Fig 4.11 X-shear

X-shear transformation matrix is,

1 0 0

TxSh = a 1 0

0 0 1


Where ‘a’ is the shear parameter in x-direction

Y-Shear:

The Y-shear preserves all the x-coordinates but changes the y-coordinate values

which causes horizontal lines to transform into lines which slope up or down as

shown in the following figure…

Fig 4.12 Y shear

Y-shear transformation matrix is constructed as,

1 b 0

Tysh = 0 1 0

0 0 1

Where ‘b’ is the shear parameter in y-direction

To perform a shear in both x and y directions we can use xy-shear transformation

matrix as,

1 b 0

Txysh = a 1 0

0 0 1

Zooming:

Increasing or decreasing the magnification of the displayed objects is called

zooming. Zooming can be performed dynamically using the mouse. The dynamic

zoom facility enlarges or reduces the image according to the movements of the

mouse.

Move the mouse:

Up to enlarge the image.

Down to reduce the image.

Zooming an image:

Zooming an image means changing the number of display pixels per image pixel.

At Zoom=1, there is one display pixel per image pixel. At Zoom=2, there are 2

display pixels per image pixel in both x and y. Other Zooms (powers of 2 only) may


be selected by moving the mouse to the corner of the green box cursor in the pan

window, and clicking and dragging the left mouse button.

4.5 INVERSE TRANSFORMATIONS:

Inverse transformation means, for a given transformed point P2 = (x2, y2), finding its original point P1 = (x1, y1). This is done by applying the inverse of the transformation; using the inverse transformation matrix we can find the original point. The multiplication of matrix T and its inverse matrix T-1 results in the identity matrix I,

T * T-1 = T-1 * T = I

We know that to transform a point P1 to point P2, we need the transformation matrix T,

P2 = P1 · T

Multiplying both sides by the inverse matrix T-1,

P2 · T-1 = P1 · T · T-1 = P1 · I = P1


CHAPTER 5: WINDOWING AND CLIPPING

5.1 WINDOW AND VIEW PORT:

Window:

The method for selecting and enlarging portions of a world coordinate object is called

windowing. The rectangular box used for windowing is called a window.

Fig 5.1 Window

View port:

An area on the display surface to which a window is mapped is called view port.

Fig 5.2 View Port

5.2 VIEWING TRANSFORMATION:

Viewing Transformation involves mapping the world coordinate system to the

normalized device coordinates and mapping this normalized device coordinate

system to a physical device coordinate system.

Window to View port Transformation:

The transformation, which maps window in normalized device coordinates to physical

device coordinates, is called window to view port transformation. The following steps

achieve this transformation matrix.

1. Window is translated until the lower left corner is at the origin.

2. Window is scaled until its size is equal to the view port size

3. Finally, the scaled window is translated to the correct location of the view port on the screen:

T = T1 * S * T2


Fig 5.3 Window to View Port Transformation

The general viewing transformation is,

    Sx              0               0
T = 0               Sy              0
    VXL - WXL·Sx    VYL - WYL·Sy    1

where Sx = (VXH - VXL)/(WXH - WXL) and Sy = (VYH - VYL)/(WYH - WYL).
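The same mapping written directly as code; a small sketch in which the window boundaries are (wxl, wyl)-(wxh, wyh) and the viewport boundaries are (vxl, vyl)-(vxh, vyh), matching the symbols above:

void window_to_viewport(float wxl, float wyl, float wxh, float wyh,
                        float vxl, float vyl, float vxh, float vyh,
                        float x,  float y,  float *xv, float *yv)
{
    float sx = (vxh - vxl) / (wxh - wxl);   /* scale factors, as in the matrix above */
    float sy = (vyh - vyl) / (wyh - wyl);
    *xv = vxl + (x - wxl) * sx;             /* translate, scale, translate           */
    *yv = vyl + (y - wyl) * sy;
}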

5.3 MULTIPLE WINDOWING:

In multiple windowing, the first image is created by one or more window

transformations on the object. Then windows are applied to this first image to create

second image. Further windowing transformations may be applied until the desired

picture is created.


Fig 5.4 Multiple Windowing

5.4 CLIPPING

Removing unwanted portions of an image is called clipping. The region against which

an image is to be clipped is called a clipping window or clipping boundary. This

window is usually rectangular in shape. The main applications of clipping are extracting part of a defined scene for viewing, identifying visible surfaces in three-dimensional views, creating objects using solid-modeling procedures, displaying a multi-window environment, and drawing and painting operations that allow parts of a picture to be selected for copying, moving, or duplicating.

The clipping algorithm determines which points or lines lie within the clipping

window. These points, lines or portions of lines are displayed. The remaining points

and lines are discarded.

Types of Clipping algorithms:

Point clipping.

Line clipping.

Polygon clipping.

Curve clipping.

Text clipping.

Point clipping: Points are said to be inside the clipping window if they satisfy the

following conditions.

1. wx1 ≤ x ≤ wx2

2. wy1 ≤ y ≤ wy2

Where, wx1, wx2, wy1 and wy2 are coordinates of window boundaries.


Line clipping: The clipping algorithms used to clip only lines are called line-clipping

algorithms. A line is said to be inside if both of its end points are inside the clipping window. If it is not, we try to determine whether it lies completely outside the window: if both end points of the line are completely to the right, completely to the left, completely above or completely below the window, then the line is completely outside the window and is rejected.

If we cannot identify a line as completely inside or completely outside the window, it requires calculation of intersection points to decide the visible portion of the line. To reduce the number of intersection calculations, completely visible and completely invisible lines are identified first, and intersection points are calculated only for the remaining lines. There are many line clipping algorithms.

Fig 5.5 (a) Before clipping (b) After clipping

In the above figures, the line P1P2 is completely inside. P3P4 and P9P10 are completely

outside the window, so these line segments are rejected. Remaining line segments

are passing through the window, and intersection points are calculated and outside

portions are rejected.

Cohen-Sutherland Line clipping algorithm:

The Cohen-Sutherland algorithm is the most popular and efficient method for clipping

lines. In this algorithm the extended window boundaries divide the plane into nine regions.

Every part is assigned a four bit binary code. These 4-bit codes are called out-codes.

These codes identify the location of the point relative to the boundaries of the

clipping rectangle as shown in below figure.

1001 1000 1010

0001 0000 0010 Window

0101 0100 0110

Fig 5.6 Sutherland out-codes


Each bit position in the out-code is used to indicate one of the four relative

coordinate positions of the point with respect to the clipping window, to the left,

right, top and bottom.

If the left most bit is set to 1, then the end point is above the window.

If second bit is set to 1, then the end point is below the window.

If third bit is set to 1, then the end point is right side of the window.

If fourth bit is set to 1, then end point is left side of the window.

Now, given the end points of a line, we can calculate the four-bit codes from the window coordinates and the line end points. Once the out-codes for a given line segment are established, we can determine which lines are completely inside the clipping window and which are completely outside. That is, if both out-codes are zero, the line segment is completely inside the window. If both out-codes are not zero, perform a logical AND operation of the two out-codes; if the result is nonzero, the line is completely outside and is rejected.

Lines that cannot be identified as completely inside or completely outside by these

tests, will be passing through the window. For these lines, intersection points are

calculated with window sides and above procedure is repeated for all remaining line

segments. The procedure is repeated until outside portion of line is clipped.

The intersection points of the line segments with window boundary can be calculated

using the slope intercept form of the line equation. The equation for the line passing

through points P1(x1,y1) and P2(x2,y2) is,

y = m (x-x1)+y1

Or y = m (x-x2)+y2

Where m = (y2-y1)/(x2-x1)

The intersections with the four window edges left, right, top and bottom are:

Left -> intersection point (x,y) is, x=wx1, y= m(wx1-x1)+y1

Right -> intersection point (x,y) is, x=wx2, y= m(wx2-x1)+y1

Top -> intersection point (x,y) is, x = x1 + (1/m)(wy1 - y1), y = wy1

Bottom -> intersection point (x,y) is, x = x1 + (1/m)(wy2 - y1), y = wy2

If the slope of the line is zero, the line is parallel to top and bottom window sides, now

it is necessary only to check left and right edges for intersections. Similarly if the

slope of the line is infinite, the line is parallel to left and right and it is only needed to

check for top and bottom edges for intersection.

Cohen-Sutherland line clipping algorithm:

1.Read the two end points of the given line segment as P1 (x1,y1), P2(x2,y2).


2. Read the window coordinates as wx1, wx2, wy1, wy2.

3. Convert given points into Sutherland 4-bit binary code.

4. If the out-codes of both endpoints P1 and P2 are zero then the line is completely inside. So, draw the line and go to step 10.

5. If both out-codes are not zero, then take the logical AND of the two out-codes and check the result. If the result is nonzero then the line is completely outside; go to step 10.

6. If neither of the above conditions is satisfied, then the line is partially inside and partially outside. Now determine the intersection points with the window sides. If both out-codes are nonzero, find the intersection points with the boundary edges of the clipping window with respect to both P1 and P2. If one out-code is zero and the other is nonzero, then find the intersection point P1' or P2' with the boundary edges of the clipping window with respect to the endpoint that lies outside.

7. Divide the line segment at intersection points.

8. Reject the line segment if any one endpoint of it appears outside the clipping window.

9. Draw the remaining line segments.

10.End.
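A sketch of the out-code computation and the trivial accept/reject tests used in steps 3-5. It assumes wx1/wx2 are the left/right window boundaries and wy1/wy2 the bottom/top boundaries; the bit values follow the figure above.

#define TOP    8   /* 1000 */
#define BOTTOM 4   /* 0100 */
#define RIGHT  2   /* 0010 */
#define LEFT   1   /* 0001 */

int outcode(float x, float y, float wx1, float wy1, float wx2, float wy2)
{
    int code = 0;
    if (y > wy2) code |= TOP;      /* above the window        */
    if (y < wy1) code |= BOTTOM;   /* below the window        */
    if (x > wx2) code |= RIGHT;    /* to the right of the window */
    if (x < wx1) code |= LEFT;     /* to the left of the window  */
    return code;
}

/* Returns 1 = trivially accept, -1 = trivially reject, 0 = needs clipping. */
int trivial_test(int code1, int code2)
{
    if ((code1 | code2) == 0) return 1;    /* both endpoints inside             */
    if ((code1 & code2) != 0) return -1;   /* both on the same outside of an edge */
    return 0;
}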

Polygon clipping:

A polygon is a closed region with connected line segments. If we use a line-clipping algorithm directly on a polygon, it may clip the closed polygon as a collection of separate lines. After clipping we may be left with an open polygon, as shown in the figure below. So, we need to modify the line clipping algorithm to clip polygons.

Fig 5.7 (a) Before clipping (b) After clipping

This modified clipping algorithm generates a closed polygon after clipping.

Sutherland-Hodgman polygon clipping:

In this method a polygon is clipped by processing its boundary as a whole against

each window edge. This is achieved by processing all polygon vertices against each


clip rectangle boundary in turn. Beginning with the original set of polygon vertices, we

could first clip the polygon against the left boundary to produce a new sequence of

vertices. The new set of vertices could then be successively passed to a right

boundary clipper, a top boundary clipper and a bottom boundary clipper as shown in

the below figure. At each step a new set of polygon vertices is generated and passed

to the next window boundary clipper. This is the basic idea used in Sutherland

Hodgman algorithm.

(a) Original polygon and window (b) Clipping against left window side (c) Clipping against top window (d) Clipping against right window (e) Clipping against bottom window

Fig 5.8 Clipping against successive window boundaries

The output of the algorithm is a list of polygon vertices, all of which are on the visible side of the clipping boundary. The following tests are applied between each polygon edge and the clipping

boundary to clip polygons with respect to the window edges.

1.If the first vertex of the polygon edge is left of the window boundary and the

second vertex of the edge is right side of window boundary then the intersection

point of the polygon edge with the window boundary is added to the output list.


2.If both the vertices of the edges are left side of the window boundary, only the

second vertex is added to the output list.

3.If both the vertices of the edges are right side of the window boundary, then

nothing is added to the output list.

4.If the first vertex at the edge is right side of the window boundary and the second

vertex of the edge is left side of window boundary then the intersection point with

the window boundary and second vertex are added to the output vertex list.

These tests are applied for every edge of the polygon against all window sides.

Sutherland Hodgman clipping algorithm:

1.Read polygon coordinates of all vertices (Px,Py).

2.Read window coordinates (wx1,wx2,wy1,wy2)

3.Consider the top window boundary.

4.Compare the vertices of each edge of the polygon with the clipping window

Boundary.

5.Store the resulting intersection points and vertices according to four tests of

Sutherland Hodgman.

6.Repeat step 4 and step 5 for remaining sides of clipping window. Each time the

resultant list of vertices is successively passed to process the next edge of the

clipping window.

7.End.
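A sketch of a single Sutherland-Hodgman clipping stage, here against the left window boundary x = wx1; the four cases in the loop correspond to the four tests listed above. Running one such stage per window edge, feeding each output list into the next stage, clips the whole polygon. (The output array is assumed to have room for up to 2n vertices.)

typedef struct { float x, y; } Point;

int clip_left(const Point in[], int n, float wx1, Point out[])
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Point a = in[i];                 /* first vertex of the edge  */
        Point b = in[(i + 1) % n];       /* second vertex of the edge */
        int a_in = (a.x >= wx1), b_in = (b.x >= wx1);
        if (a_in != b_in) {              /* edge crosses the boundary: add intersection */
            float t = (wx1 - a.x) / (b.x - a.x);
            out[m].x = wx1;
            out[m].y = a.y + t * (b.y - a.y);
            m++;
        }
        if (b_in)                        /* second vertex on the visible side: add it */
            out[m++] = b;
    }
    return m;                            /* number of output vertices */
}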

Generalized Clipping:

In Sutherland-Hodgman polygon clipping algorithm we have to use four separate

clipping routines for four window boundaries. They differ only in their test for

determining whether a point is inside or outside the boundary. It is possible to write these routines in a more general form, so that they are exactly identical and

information about the boundary is passed to the routines through their parameters.

Using recursive technique the generalized routine can be called for each boundary of

the clipping window with a different boundary specified by its parameters. This form

of algorithm allows us to have any number of boundaries to the clipping window, thus

the generalized algorithm with recursive technique can be used to clip a polygon

along an arbitrary convex clipping window.


CHAPTER 6: 3D TRANSFORMATIONS

In 2D graphics we considered the x and y axes. In 3D we have to specify one extra coordinate (z) for every point. The three axes are arranged at right angles to each other and labeled as the x-axis (width), y-axis (height) and z-axis (depth).

There are two types of three dimensional reference system according to orientation

of the coordinate axes: 1) Right-handed system and 2) Left-handed system.


Fig 6.1 Right Handed System

Fig 6.2 Left Handed System

6.1. 3D Transformations:

Scaling:

Scaling is used to change the size or dimensions of the object. Scaling transformation

of a point with respect to the origin is represented in the homogeneous coordinate system as,

    Sx 0  0  0
    0  Sy 0  0
S = 0  0  Sz 0
    0  0  0  1

To get new point (P2) after scaling, old point is multiplied with scaling transformation

matrix, P2 = P1 * S


Fig 6.3 Scaling Transformation

A scaling transformation of an object with respect to a selected fixed point can be performed with the following steps.

1. Translate the fixed point to the origin.

2. Scale the object.

3. Translate the fixed point back to its original position.

ST = T * S * T-1

6.1.2 Translation transformation:

Translation transformation is used to change the location of the object on the screen.

Translation transformation matrix of a point in homogeneous coordinate system is

1 0 0 0

0 1 0 0

T = 0 0 1 0

Tx Ty Tz 1

Parameters Tx, Ty, Tz specify translation distances in the direction of x, y, z axes. To

get new point P2, after translation, old point is multiplied with translation

transformation matrix, P2 = P1* T

If P2 is (x1, y1, z1) and P1 is (x, y, z) then the transformation matrix is equivalent to

the following equation , X1 = x + Tx

Y1 = y + Ty

Z1 = z + Tz


Fig 6.4 Translating a point Fig 6.5 Translating Object
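A minimal sketch of applying a 4x4 transformation matrix to a 3D point with the row-vector convention [x y z 1] * M used in this chapter; with the translation matrix above, the result reduces to x + Tx, y + Ty, z + Tz.

void transform3d(const float m[4][4], float x, float y, float z,
                 float *xo, float *yo, float *zo)
{
    float v[4] = {x, y, z, 1.0f}, r[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int j = 0; j < 4; j++)           /* r = [x y z 1] * M */
        for (int i = 0; i < 4; i++)
            r[j] += v[i] * m[i][j];
    *xo = r[0] / r[3];   /* divide by w (w = 1 for translation, scaling, rotation) */
    *yo = r[1] / r[3];
    *zo = r[2] / r[3];
}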

Rotation transformations:

Rotation transformation is used to change the orientation of picture. In two-

dimensional rotation, all transformations are carried out in the xy plane, whereas in 3D rotation we have to specify an axis of rotation about which the object is to be rotated, along with the angle of rotation.

Rotation about x-axis: rotation about x-axis in homogeneous coordinate system is

represented as (counter clockwise)

1 0 0 0

0 cosθ -sinθ 0

Rx = 0 sinθ cosθ 0

0 0 0 1


Fig 6.6 Rotation About X axis

Rotation about y-axis: rotation about y-axis in homogeneous coordinate system is

represented as (counter clockwise),


     cosθ 0 -sinθ 0
     0    1  0    0
Ry = sinθ 0  cosθ 0
     0    0  0    1

Fig 6.7 Rotation About Y Axis

6.1.3.3 Rotation about z-axis: rotation about z-axis in homogeneous coordinate

system is represented as (counter clockwise)

cosθ sinθ 0 0

-sinθ cosθ 0 0

Rz = 0 0 1 0

0 0 0 1

Fig 6.8 Rotation About Z axis

6.1.3.4 Rotation about arbitrary axis:

Rotation about arbitrary axis can be set up as a composite transformation involving

combinations of translations and the coordinate axes rotations.

When the object is to be rotated about an axis that is not parallel to one of the coordinate axes, then we need to perform the following additional transformations.


1. Translate the object so that the rotation axis passes through the coordinate origin.

2. Perform rotation about the x-axis to bring the arbitrary axis into the xz plane.

3. Perform rotation about the y-axis to make the arbitrary axis coincide with the z-axis.

4. Perform rotation about z-axis with given angle θ.

5. Apply inverse rotation about y-axis to bring the rotation axis back to original

orientation.

6. Apply inverse rotation about x-axis to bring the rotation axis again back to

original orientation.

7. Finally retranslate arbitrary axis to original position from origin.

The final transformation matrix for rotation about arbitrary axis is

T = T1 Rx Ry Rz Ry-1 Rx-1 T1-1

Fig 6.9 Steps in rotation about an arbitrary point

6.2 Other transformations:

There are some other transformations that are often useful in three dimensional

graphics applications. These are

1. Reflection.

2. Shearing.


6.2.1 Reflection:

Mirror reflection in 3D transformation is the reflection of a point/object relative to a

plane. The reflection about xy plane is

1 0 0 0

RFxy = 0 1 0 0

0 0 -1 0

0 0 0 1

Transformation matrices are defined similarly for reflection about the yz plane and the xz plane by negating the x and y coordinates respectively.

Fig 6.10 Reflection Relative To XY Plane

6.2.2 Shearing:

Shearing in two-dimensional graphics can be done in one direction or in two

directions. In 3D graphics to shear an object or point we have to transform either x, y

or z coordinates of the object.

Shearing can be done with respect to an axis. This transformation keeps the value

corresponding to that axis coordinates unchanged.

1 0 0 0

SHz= 0 1 0 0

a b 1 0

0 0 0 1

6.3 3D viewing:

In the 3D viewing process, we specify a view volume in world coordinates using modeling transformations.

The following figure shows the conceptual model of the 3D transformation process.


Fig 6.11 Conceptual Model of 3D Viewing Transformation Process

6.4 Viewing Parameters:

The 3D- viewing parameters are:

View reference point.

View plane normal.

View distance.

View up vector.

Fig 6.12 Viewing Parameters

6.5. 3D Clipping:

In 2D clipping a rectangular box served as a clipping boundary. But in 3D clipping the

shape of clipping volume plays an important role. The clipping volume can be either a

box or frustum of vision.


Fig 6.13 BOX

6.6. PROJECTIONS

The projection is a process of representing a three dimensional object or a scene into

two-dimensional medium. That is projecting three dimensional objects onto the two

dimensional view plane.

There are two basic ways of projecting objects onto the view plane.

1. Parallel projections.

2. Perspective projection.

6.6.1 Parallel Projection:

Parallel projections show the true image, size and shape of the object. In parallel projection the z-coordinate is discarded. Parallel lines from each vertex on the

object are extended until they intersect the view plane. The projecting lines have

fixed direction. The point of intersection is the projection of the vertex. We connect the projected vertices by line segments that correspond to connections on the original object. Parallel projection is mainly used by mechanical engineers to show

views of machine parts. Parallel projection is shown in figure.


Figure 6.14 Parallel projection

Parallel projections are categorized in two types, depending on the relation between

the direction of projection and the normal to the projection plane.

Orthographic projection.

Oblique projection.

Orthographic Projection:

In orthographic parallel projection the direction of projection is perpendicular to the view plane. Orthographic projections are most often used to produce the front, side and top views of an object. Front, side and rear orthographic projections of an object are called elevations, and a top orthographic projection is called a plan view.

Axonometric projection: This is an orthographic projection in which the projection plane is oriented so that more than one face of the object is visible.

The isometric projection is a commonly used axonometric projection. An Isometric

projection is generated by aligning the projection plane so that it intersects each coordinate axis in which the object is defined at the same distance from the origin. The isometric projection is obtained by aligning the projection vector with a cube diagonal. There are eight positions, one in

each octant for obtaining an isometric view. The isometric projection has the useful

property that three principal axes are equally foreshortened.


Fig 6.15 Isometric Projections for a cube

Fig 6.16 Isometric projections shortens all axes equally

Oblique Projection:

An Oblique projection is obtained by projecting points along the parallel lines that

are not perpendicular to the projection plane

Fig 6.17 Oblique projection

The oblique projections are further classified as:


Cavalier Projections

Cabinet Projections

In cavalier projection, the direction of projection makes a 45° angle with the view plane. As a result the projection of a line perpendicular to the view plane has the same length as the line itself; that is, there is no foreshortening. The figure shows the cavalier projection of a unit cube with β = 45° and β = 30°.

Fig 6.18 Cavalier Projection of Unit Cube

In cabinet projections the direction of projection makes an angle of arctan(2) ≈ 63.4° with the view plane. For this angle, lines perpendicular to the viewing surface are projected at

one-half their actual length. Cabinet projections appear more realistic than cavalier

projections because of this reduction in the length of perpendiculars. The following

figure shows example of cabinet projections for a unit cube.

Fig 6.19 Cabinet Projections of Unit Cube

Transformation Matrix for parallel projection:

In parallel projection, we may select any direction for the lines of projection. Suppose that the direction of projection is given by the vector (xp, yp, zp) and that the object is to be projected onto the xy plane. If a point on the object is given as (x1, y1, z1), then the projected point (x2, y2) is determined as

                          1       0
[x2 y2] = [x1 y1 z1]      0       1
                       -xp/zp  -yp/zp
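The same projection written as code; a small sketch assuming the projection direction has a nonzero z component (zp ≠ 0):

void parallel_project(float x1, float y1, float z1,
                      float xp, float yp, float zp,
                      float *x2, float *y2)
{
    *x2 = x1 - z1 * (xp / zp);   /* first column of the matrix:  [1, 0, -xp/zp] */
    *y2 = y1 - z1 * (yp / zp);   /* second column of the matrix: [0, 1, -yp/zp] */
}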

6.6.2 Perspective projections:

In parallel projection we extended parallel lines from every vertex of the object. In perspective projection we instead extend lines from all vertices so that they converge at a single point known as the centre of projection. This projection is mainly used by civil engineers. The perspective projection is defined by the center of projection and the view plane. Perspective projections can be classified as

One point

Two point

Three point

A One-point perspective projection occurs when one of the faces of rectangular

object is parallel to the view plane.

Fig 6.20 One Point Perspective

A two-point perspective projection refers to the situation where one set of edges run

parallel to the view plane, but none of the faces is parallel to it.

Fig 6.21 A Two Point Perspective

In three-point perspective projection, none of the edges is parallel to the view plane.


Fig 6.22 Three Point Perspective

Transformation matrix for perspective projection:

In a perspective projection the lines of projection are not parallel. All projection lines

are cover pages at a single point (center of projection). In this the further away an

object is from the viewer, the smaller it appears. The intersections of projection lines


with the plane of the screen determine the projected image, as shown in the

following figure.

[x2 y2 z2 1] = [x1 y1 z1 1]   -Zc   0    0    0
                               0   -Zc   0    0
                               Xc   Yc   0    1
                               0    0    0   -Zc

where (Xc, Yc, Zc) is the center of projection and the projection plane is z = 0. Dividing the result by its fourth (w) coordinate gives the projected point.

Fig 6.23 Perspective projection

CHAPTER 7: HIDDEN SURFACES

If we try to display an image or a picture wherein some objects are blocking the

others, we may not get the feel of the picture. So, we must eliminate the blocked or

hidden surfaces in order to obtain the realistic image. The problem of identification of

such surfaces and removal of these surfaces is called as hidden surface problem.


Objects that are closer to the viewer obstruct those which are farther away from it. Hidden-surface elimination algorithms are classified as:

Object Space methods

Image Space Methods

Object space methods compare objects and parts of objects to each other within the scene definition to determine which surfaces or lines are visible. Image space methods decide visibility point by point at each pixel position on the projection plane.

7.1 Z–Buffer Algorithm (Depth-buffer):

Step 1: Initialize the Z-buffer and frame buffer so that the Z-buffer values are zero and the frame buffer holds the background color.

Step 2: During the scan conversion process, for each position on each polygon surface, compare its depth value to the value previously stored in the depth buffer to determine visibility, and store the depth and intensity of the visible surface.

Step 3: Stop

Advantages

It is easy to implement

Works for most geometric primitives

Parallel operation in hardware

Limitations

Quantization and aliasing artifacts

Overfill

Transparency does not work well

Fig 7.1 Calculating depth values
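A sketch of the two steps as code. It assumes a convention in which a larger stored z means closer to the viewer, so that initializing the depth buffer to zero (step 1) works; the buffer sizes and the plot interface are illustrative.

#define W 640
#define H 480

float zbuf[H][W];          /* depth buffer */
int   frame[H][W];         /* frame buffer */

void clear_buffers(int background)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            zbuf[y][x]  = 0.0f;        /* step 1: depth buffer to zero          */
            frame[y][x] = background;  /*         frame buffer to background color */
        }
}

/* Step 2: called for each position on each polygon surface during scan conversion. */
void plot(int x, int y, float z, int color)
{
    if (z > zbuf[y][x]) {   /* closer than what is already stored? */
        zbuf[y][x]  = z;
        frame[y][x] = color;
    }
}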


7.2 Painter’s Algorithm:

Algorithm

Step 1: Sort all polygons in order of decreasing depth

Step 2: Determine all polygons Q (preceding P) in the polygon list whose Z-extents overlap that of P.

Step 3: Perform the overlap tests for each such Q:

(a) If every Q passes the tests, scan convert the polygon P.

(b) If a test fails for some Q, swap P and Q in the list and mark Q as swapped.

(c) If Q has already been swapped, use the plane containing polygon P to divide polygon Q into two polygons, Q1 and Q2. Replace Q with Q1 and Q2 and repeat Step 3.

Advantages

It employs a simple and easy technique

Resolves the problem associated with normal priority assignments

Disadvantages

As the polygons are going to appear one by one, it may be quite

distracting to the user

Due to large list of triplets, sorting becomes too slow

The visual effect may not be appealing

7.3 Scan Line Algorithm:

It is an image space method for removing hidden surfaces. It is an extension of scan

line polygon filling algorithm. As each scan line is processed, it examines all polygon

surfaces intersecting that line to determine which are visible. It then does the depth

calculation and finds which polygon is nearest to the view plane. Finally it enters the

intensity value of the nearest polygon at that position into the frame buffer.

The following figure illustrates the scan line method for hidden surface removal.


Fig 7.2 scan line

Advantages

These can be generalized to handle non – polygonal parts

These algorithms can be implemented in hardware

Disadvantages

The amount of calculations may increase, with the number of objects

in the space

In few situations, determining the coherences may become time

consuming

7.4 Warnock’s Algorithm:

This algorithm is an image space algorithm to remove hidden surfaces. This can also

make use of object space operations for depth comparison. This algorithm is based on the principle of area coherence and was developed by Warnock.

In this algorithm the total viewing area is successively divided into smaller and smaller simple areas until each small area is the projection of part of a single visible surface or no surface at all. The successive division is achieved with the help of a recursive subdivision process. At each stage in the recursive subdivision, the relationship between the projection of each polygon and the area of interest is checked for four possible relationships (minmax or boxing test).

(i) Surrounding Polygon: The polygon completely covers the entire area

(ii) Overlapping or Intersecting Polygon: One that is partially inside and

partially outside the area

(iii) Contained or Inside Polygon: One that is completely inside the area

(iv) Disjoint or Outside Polygon: One that is completely outside the area


Fig 7.3

Algorithm

Step1: Initialize the area to be the whole screen.

Step 2: Create a list of polygons by sorting them on the Z-values of their vertices. Do not include disjoint polygons in the list because they are not visible in the area.

Step3: Find the relationship of each polygon.

Step4: Perform visibility decision test

i) If all the polygons are disjoint from the area then fill area with background color.

ii) If there is only one intersecting or only one contained polygon, then first fill entire

area with background color and then fill the part of the polygon contained in the

area with the color of polygon.

iii) If there is a single surrounding polygon, but no intersecting or contained

polygons, then fill the area with the color of the surrounding polygon.

iv) If surrounding polygon is closer to the view point, then all the polygons are

hidden by it, fill the area with the color of surrounding polygon

v) If area is a pixel and neither (i), (ii), (iii) and (iv) applies, compute the Z-

coordinate at pixel of all polygons in the list. The pixel is then set to color of the

polygon, which is closer to the viewpoint.

Step5: If none of the above steps are true then subdivide the area and repeat again

from step2.

Advantages

It uses the divide and conquer strategy, therefore parallel computers can be used to speed up the process

Extra memory buffer is not required.


Fig7.4 Sub division of a scene

7.5 Franklin Algorithm:

This is an image-processing algorithm to remove hidden surfaces. In most of hidden

surface removal algorithms, the performance of algorithm decreases as the number

of polygons in the scene increases. This is because increase in the number of

polygons in the scene increases the number of possible comparisons of polygons. For

complex scenes the performance problem becomes more serious. To avoid this

problem Franklin developed an algorithm, which gives linear time behavior for most

of the scenes. This is done by overlapping a grid of cells on the scene dividing the

scene into small cells. But unlike Warnock’s algorithm, here cells are not further sub

divided. The size of cells is kept depending on the sizes of edges in the scene. At

each cell the algorithm looks for a covering face and determines with the edges are I

front of this face. It then computes the intersections of these edges and determines

their visibility. It consumes less time for complex images.

Fig 7.5 Example figure for Franklin algorithm

7.6. BSP Trees:

Binary Space Partition Trees split space along planes. Allows fast queries of some

spatial relations.

Simple construction algorithm:

Select a plane as sub-tree root.

Everything on one side to one child.

Everything on the other side to other child.

Use random polygon for splitting plane.


fig 7.6 BSP tree

CHAPTER 8: SHADING

8.1 ILLUMINATION MODELS:

A critical step in the rendering of three-dimensional scenes is the final assignment of

colors to pixels. The perspective projection process determines where a pixel is

drawn on the screen. Visible surface computations determine which pixels are to be

drawn. The illumination models determine how the pixels are assigned colors.

The colors that we perceive on the surfaces of objects result from complex

interactions between light and matter. Illumination models can be either empirical or


physically based. An empirical model results from the fitting of a function to observed

data. Physically based models, on the other hand, attempt to simulate the

interactions between light and various materials.

The first Illumination models that we will discuss are largely empirical. Basically they

are simplified formulas, which generate results that fit both the underlying physics

and our experiences.

Light is the most important ingredient in any illumination model. Without light we

don't see anything (i.e. we can just paint all of the pixels black). However, most of

the light that we receive is not directly from the light source. Instead it is reflected off

of intermediate surfaces. Therefore, information describing both the light sources and

the reflective properties of materials are needed to establish any illumination model.

Light Source Properties: The important properties of a light source are:

1. Intensity of the emitted light

2. The distance to the point on the surface

8.1.1 Ideal Diffuse Reflection: First, we will consider a particular type of surface

called an ideal diffuse reflector. An ideal diffuse surface is, at the microscopic level, a very rough surface. Chalk is a good approximation to an ideal diffuse surface.

Because of the microscopic variations in the surface, an incoming ray of light is

equally likely to be reflected in any direction over the hemisphere.

Fig 8.1


Fig 8.2

Ideal diffuse reflectors reflect light according to Lambert's cosine law (these are sometimes called Lambertian reflectors). Lambert's law states that the reflected

energy from a small surface area in a particular direction is proportional to cosine of

the angle between that direction and the surface normal. Lambert's law determines

how much of the incoming light energy is reflected. Remember that the amount of energy that is reflected in any one direction is constant in this model. In other words

the reflected intensity is independent of the viewing direction. The intensity

does however depend on the light source's orientation relative to the surface, and it

is this property that is governed by Lambert's law.

Fig 8.3

The angle between the surface normal and the incoming light ray is called the angle of incidence, and we can express the intensity of the reflected light in terms of this angle:

Idiffuse = kd Ilight cos(θ)

Fig 8.4


The Ilight term represents the intensity of the incoming light at the particular

wavelength (the wavelength determines the light's color). The kd term represents the

diffuse reflectivity of the surface at that wavelength.

When computing this equation we can use vector analysis to compute the cosine term indirectly. If both the normal vector and the incoming light vector are normalized (unit length) then the diffuse shading component can be computed by

Idiffuse = kd Ilight (N · L)

In this equation we need only consider angles from 0 to 90 degrees. Greater angles

are blocked by the surface, and the reflected energy is 0. Below are several

examples of a spherical diffuse reflector with varying lighting angles.
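A small sketch of this diffuse term, assuming N and L are unit vectors with L pointing from the surface toward the light:

float diffuse(const float N[3], const float L[3], float kd, float Ilight)
{
    float ndotl = N[0]*L[0] + N[1]*L[1] + N[2]*L[2];  /* cosine of the angle of incidence */
    if (ndotl < 0.0f) ndotl = 0.0f;     /* angles beyond 90 degrees reflect nothing */
    return kd * Ilight * ndotl;         /* Lambert's cosine law */
}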

8.1.2 Specular Reflection:

A second surface type is called a specular reflector. When we look at a shiny surface,

such as polished metal or a glossy car finish, we see a highlight, or bright spot.

Where this bright spot appears on the surface is a function of where the surface is

seen. This type of reflectance is view dependent.

At the microscopic level a specular reflecting surface is very smooth, and usually

these microscopic surface elements are oriented in the same direction as the surface

itself. Specular reflection is merely the mirror reflection of the light source in a

surface. Thus it should come as no surprise that it is viewer dependent, since if you

stood in front of a mirror and placed your finger over the reflection of a light, you

would expect that you could reposition your head to look around your finger and see

the light again. An ideal mirror is a purely specular reflector.

In order to model specular reflection we need to understand the physics of reflection.

Reflection behaves according to Snell's laws, which state:

The incoming ray, the surface normal, and the reflected ray all lie in a

common plane.

The angle that the reflected ray forms with the surface normal is determined

by the angle that the incoming ray forms with the surface normal, and the

relative speeds of light of the mediums in which the incident and reflected

rays propagate according to the following expression.


Fig 8.5

Reflection is a very special case of Snell's law, where the incident light's medium and the reflected ray's medium are the same. Thus we can simplify the expression to

θreflection = θincidence

Snell's law, however, applies only to ideal reflectors. Real materials, other than

mirrors and chrome tend to deviate significantly from ideal reflectors. At this point

we will introduce an empirical model that is consistent with our experience, at least

to a crude approximation.

In general we expect most of the reflected light to travel in the direction of the ideal

ray. However, because of microscopic surface variations we might expect some of

the light to be reflected just slightly offset from the ideal reflected ray. As we move

farther and farther, in the angular sense, from the reflected ray we expect to see less

light reflected.


Fig 8.6

One function that approximates this falloff is called the Phong Illumination model.

This model is purely empirical and has no physical basis, yet it is one of the most

commonly used illumination models in computer graphics.

Fig 8.7

The diagram below shows how the Phong reflectance drops off with the viewer's angle from the reflected ray for various values of nshiny.

Fig 8.8


We can compute the cos() term of Phong's specular illumination using the following

relationship.

The V vector is the unit vector in the direction of the viewer, and R is the mirror reflectance direction. The vector R can be computed from the incoming light direction

and the surface normal as shown below.

The following figure illustrates this relationship.

Fig 8.9

Another approach for computing Phong's illumination uses the following equation:

In this equation the angle of specular dispersion is computed by how far the surface's normal is from a vector bisecting the incoming light direction and the viewing

direction.


Fig 8.10

On your own you should consider how this equation and the previous equation differ.
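A small Python sketch comparing the two formulations (vector names and helper functions are illustrative; all direction vectors are assumed to be unit length):

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    mag = sum(c * c for c in v) ** 0.5
    return tuple(c / mag for c in v)

def specular_reflect(N, L, V, n_shiny):
    # Mirror direction R = 2 (N . L) N - L, then use (V . R)^n_shiny.
    n_dot_l = dot(N, L)
    R = tuple(2.0 * n_dot_l * n - l for n, l in zip(N, L))
    return max(0.0, dot(V, R)) ** n_shiny

def specular_halfway(N, L, V, n_shiny):
    # Halfway vector H bisects L and V; use (N . H)^n_shiny instead.
    H = normalize(tuple(l + v for l, v in zip(L, V)))
    return max(0.0, dot(N, H)) ** n_shiny

The two forms give similar but not identical falloff, which is exactly the difference the exercise above asks you to consider.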

The following spheres illustrate specular reflections as the direction of the light

source and the coefficient of shininess is varied.

Fig 8.11

8.2 Shading:


8.2.1 The Need for shading:

• In the last lecture we added realism to graphics scenes by drawing them as solids.

• Object faces were either filled with a plain color or with a mapped texture.

• In real scenes, the reflected light intensity is also important.

Fig 8.12

8.2.2 The Physics of shading:

• If we look at a point on an object we perceive a colour and a shading intensity that depend on the various characteristics of the object and the light sources that

illuminate it.

• For the time being we will consider only the brightness at each point. We will

extend the treatment to color in the next lecture.

8.2.3 Object Properties:

• Looking at a point on an object we see the reflection of the light that falls on it. This

reflection is governed by:

1. The position of the object relative to the light sources

2. The surface normal vector

3. The albedo of the surface (ability to absorb light energy)

4. The reflectivity of the surface

8.2.4 Surface Characteristics:

• Surfaces can have reflective characteristics ranging from:

1. Dull (matt) surfaces

2. Shiny (specular) surfaces

3. Reflective (mirror) surfaces.

8.2.5 Matt surfaces:


Fig 8.13 Perfectly Matt surface

The reflected intensity is the same in all directions

8.2.6 Lambert’s Cosine Law:

• Ireflected = Ii kd cos(θ)

where,

– Ii is the intensity of the incident light

– kd is a constant (surface albedo)

– θ is the angle between the surface normal and the direction of the light source

Colour

• Strictly speaking the surface properties will vary with the wavelength of the light (λ), so we should write: Ireflected(λ) = Ii(λ) kd(λ) cos(θ)

• However we will consider just the intensity component for the moment.

Fig 8.14

8.2.7 Using Shading:

• There are three levels at which shading can be applied in polygon based systems:

Flat Shading ,Gouraud Shading and Phong Shading

8.2.8 Flat Shading:

• Each polygon is shaded uniformly over its surface.


• The shade is computed by taking a point in the centre and the surface normal

vector. (Equivalent to a light source at infinity)

• Usually only diffuse and ambient components are used.

8.2.9 Shading from different light sources:

Fig 8.15

8.2.10 Interpolation Shading:

• A more accurate way to render a shaded polygon is to compute an independent

shade value at each point.

• This is done quickly by interpolation:

1. Compute a shade value at each vertex

2. Interpolate to find the shade value at the boundary

3. Interpolate to find the shade values in the middle

8.2.11 Calculating the shades at the edges:


Fig 8.16

8.2.12 Calculating the internal shades:

Fig 8.17

8.2.13 Smooth Shading:

• Need to have per-vertex normals

• Gouraud Shading

– Interpolate color across triangles

– Fast, supported by most of the graphics accelerator cards

• Phong Shading:

– Interpolate normals across triangles

– More accurate, but slow. Not widely supported by hardware


Fig 8.18

8.2.14 Interpolating over polygons: Gouraud Shading:

• In addition to interpolating shades over polygons, we can interpolate them over groups of polygons to create the impression of a smooth surface.

• The idea is to create at each vertex an averaged intensity from all the polygons

that meet at that vertex.

Computing an average normal vector at a vertex

Fig 8.19

• The averaged normal is used to compute a shade value for the vertex.

• This is then interpolated as before.

• The boundaries of the polygons are thus smoothed out.
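A compact Python sketch of the Gouraud idea (names are illustrative; the per-polygon normals and the shade function are assumed to come from the rest of the renderer):

def average_vertex_normal(face_normals):
    # Average the normals of all polygons meeting at a vertex,
    # then renormalize to unit length.
    summed = [0.0, 0.0, 0.0]
    for n in face_normals:
        for i in range(3):
            summed[i] += n[i]
    mag = sum(c * c for c in summed) ** 0.5
    return tuple(c / mag for c in summed)

def lerp(a, b, t):
    # Linear interpolation used along polygon edges and across scan lines.
    return a + (b - a) * t

Each vertex is shaded once with its averaged normal, and the resulting intensities are then interpolated along the edges and across each scan line as described above.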

Gouraud Shading With no Specular Highlights


Fig 8.20

Gouraud Shading With Specular Highlights

Fig 8.21

8.2.15 Phong Shading:

• One limitation of Gouraud shading is that we cannot determine specular

components accurately, since we do not have the normal vector at each point on a

polygon.

• A solution is to interpolate the normal vector, not the shade value.

8.2.16 Interpolation of the 3D Normals:

• We may express any point for this facet in parametric form:

P = V1 + m1(V2 - V1) + m2(V3-V1)

• The average normal vector at the same point may be calculated as the vector a:

a = n1 + m1(n2 - n1) + m2(n3-n1)

and then

naverage = a / | a |
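A brief Python sketch of this normal interpolation (function and argument names are illustrative):

def interpolated_normal(n1, n2, n3, m1, m2):
    # a = n1 + m1 (n2 - n1) + m2 (n3 - n1), then normalize to unit length.
    a = tuple(c1 + m1 * (c2 - c1) + m2 * (c3 - c1)
              for c1, c2, c3 in zip(n1, n2, n3))
    mag = sum(c * c for c in a) ** 0.5
    return tuple(c / mag for c in a)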

8.2.17 2D or 3D:

• the interpolation calculations may be done in either 2D or 3D


• for specular reflections the calculation of the reflected vector and viewpoint vector

must be done in 3D.

Phong Shading

Fig 8.22

CHAPTER 9: CURVES AND SURFACES


An image in graphics can contain many different kinds of objects (wood, plastic etc.).

To model and display these objects involves curves. Some natural objects are neither

perfectly flat nor smoothly curved but often have rough, jagged contours. Curve

drawing has fascinating applications, and quite a few methodologies exist to draw free-form curves. For quality graphics and commercial animation packages, the approximation of smooth curves provides attractive results.

In this chapter we are going to see various methods for generating curves and

surfaces.

9.1 Generation of Curves:

We can use two approaches to draw curved lines. One approach is to use a curve

generation algorithm such as the DDA. The other approach is to approximate the curve by a number of small straight-line segments; this approximation is achieved with the help of interpolation techniques.

9.1.1 Circular Arc Generation Using DDA algorithm:

1. Read the centre of curvature, say (x1, y1)

2. Read the angle of the arc, θ

3. Read the starting point of the arc, say (x2, y2)

4. Calculate dθ; dθ = min (0.01, 1/ (3.2(|x2-x1|+|y2-y1|)))

5. Initialize angle=0

6. While (angle<θ) do

{

Plot (x2, y2)

x2 = x2 - (y2-y1)*dθ

y2 = y2 + (x2-x1)*dθ

angle = angle + dθ

}

7. End
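A runnable Python version of this listing is sketched below (the plot argument stands in for whatever pixel-writing routine the system provides; the previous x value is kept in a temporary so that the y update uses the same rotation step, which keeps the radius from drifting):

import math

def dda_arc(x1, y1, x2, y2, theta, plot):
    # Trace an arc of angle theta (radians) about centre (x1, y1),
    # starting at (x2, y2), using the differential (DDA) method.
    d_theta = min(0.01, 1.0 / (3.2 * (abs(x2 - x1) + abs(y2 - y1))))
    angle = 0.0
    while angle < theta:
        plot(round(x2), round(y2))
        x_old = x2
        x2 = x2 - (y2 - y1) * d_theta
        y2 = y2 + (x_old - x1) * d_theta
        angle += d_theta

# Example: a quarter circle of radius 50 about the origin.
points = []
dda_arc(0, 0, 50, 0, math.pi / 2, lambda x, y: points.append((x, y)))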

9.1.2 Interpolation:

In computer graphics we have to deal with some complex curves for which no direct

mathematical function is available. Such curves can be drawn using approximation

methods. If we have an array of sample points which lie on the required curve, then we can draw the required curve by filling portions of it with pieces of known curves which pass through nearby sample points. We fit a portion of the unknown curve with a curve that we know. The gap between sample points can be filled by finding the coordinates of points along the known approximating curve and connecting these points with line segments, as shown in the following figures.


Fig 9.1 The Interpolation Process

There are many functional forms which can be made to pass through the sample points by adjusting parameters. Polynomial, trigonometric, exponential and other classes of functions have been used, and there are many options within each class. Usually polynomial functions in parametric form are preferred.

Blending functions for four sample points can be given as

B1(u)= u(u-1)(u-2) / (-1)(-2)(-3)

B2(u)= (u+1)(u-1)(u-2) / 1(-1)(-2)

B3(u)= (u+1)(u)(u-2) / (2)(1)(-1)

B4(u)= (u+1)(u)(u-1) / (3)(2)(1)

Using above blending functions, the expressions for the curve passing through

sampling points can be realized as follows:

x= x1 B1(u) + x2 B2(u) + x3 B3(u) + x4 B4(u)

y= y1 B1(u) + y2 B2(u) + y3 B3(u) + y4 B4(u)

z= z1 B1(u) + z2 B2(u) + z3 B3(u) + z4 B4(u)
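A Python sketch that evaluates these blending functions and the interpolated point (the sample parameter values -1, 0, 1, 2 for the four points are implied by the denominators above; names are illustrative):

def blend(u):
    # Blending functions B1..B4 for sample points at u = -1, 0, 1, 2.
    b1 = u * (u - 1) * (u - 2) / ((-1) * (-2) * (-3))
    b2 = (u + 1) * (u - 1) * (u - 2) / (1 * (-1) * (-2))
    b3 = (u + 1) * u * (u - 2) / (2 * 1 * (-1))
    b4 = (u + 1) * u * (u - 1) / (3 * 2 * 1)
    return b1, b2, b3, b4

def curve_point(p1, p2, p3, p4, u):
    # Weighted sum of the four sample points (works for x, y or z).
    b = blend(u)
    return tuple(sum(w * p[i] for w, p in zip(b, (p1, p2, p3, p4)))
                 for i in range(len(p1)))

# At u = 0 the curve passes exactly through the second sample point.
print(curve_point((0, 0), (1, 2), (2, 3), (3, 1), 0.0))   # (1.0, 2.0)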

9.2 Spline Representation:

A spline is used to produce smooth curves. It is a flexible strip used to produce a

smooth curve through a designated set of points, called control points, which indicate the general shape of the curve. When polynomial sections are fitted so that the curve passes through all control points, as shown in the figure, the resulting curve is said to interpolate the set of control points. On the other hand, when the

polynomials are fitted to the path that is not necessarily passing through all control


points, the resulting curve is said to approximate the set of control points. This is

shown in figure

Splines are used in graphics applications to design curve and surface shapes. The applications of splines include the design of automobile bodies, aircraft and spacecraft surfaces, and ship hulls. Interpolation curves are commonly used to digitize drawings

or to specify animation paths. Approximation curves are primarily used as design

tools to structure object surfaces.

9.3 Spline Specification:

The following are the basic ways of specifying spline curve:

(i) We can state the set of boundary conditions that are imposed

on the spline

(ii) We can state the matrix that characterizes the spline.

(iii) We can state the set of blending functions that calculate the

positions along the curve path by specifying combinations of

geometric constraints on the curve.

9.4 Cubic Polynomials:

Polylines and polygons are first-degree, piecewise linear approximations to curves and surfaces respectively. These lower-degree polynomials give too little flexibility in controlling the shape of the curve. Higher-degree polynomials give reasonable design flexibility, but can introduce unwanted wiggles and also require more computation. For this reason, third-degree polynomials are most often used for representing curves. These polynomials are commonly known as cubic polynomials.

We can describe the parametric cubic polynomial that is to be fitted between each

pair of control points with the following set of equations.

x(u) = ax u3 + bx u2 + cx u + dx

y(u) = ay u3 + by u2 + cy u + dy

z(u) = az u3 + bz u2 + cz u + dz          (0 ≤ u ≤ 1)

For each of these three equations, we need to determine the values of a,b,c and d in

the polynomial representation, for each of the n curve sections between the n+1

control points. We do this by setting enough boundary conditions at the joints

between the curve sections so that we can obtain numerical values for all the

coefficients.


In the next section we consider common methods for setting boundary conditions for

cubic interpolation curves.

9.5 Hermite Curves:

A Hermite curve is a cubic polynomial curve segment determined by constraints on

the end points P1 and P2 and tangent vectors at the end points DP1 and DP2. This is

shown in the following figure, where P(u) = [x(u), y(u), z(u)].

Fig 9.2

Hermite blending functions.

P(u) = P1(2u3 – 3u2 + 1) + P2(-2u3 + 3 u2) + DP1(u3 – 2 u2 + u) + DP2(u3 – u2)

= P1 H0(u) + P2 H1(u) + DP1 H2(u) + DP2 H3(u)

The polynomials Hi(u) for i=0,1,2,3 are referred to as blending functions because

they blend the boundary constraint values to obtain each coordinate position along

the curve.
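A short Python sketch that evaluates a Hermite segment from these blending functions (the point and tangent arguments are illustrative tuples):

def hermite_point(P1, P2, DP1, DP2, u):
    # P(u) = P1*H0(u) + P2*H1(u) + DP1*H2(u) + DP2*H3(u), 0 <= u <= 1.
    h0 = 2 * u**3 - 3 * u**2 + 1
    h1 = -2 * u**3 + 3 * u**2
    h2 = u**3 - 2 * u**2 + u
    h3 = u**3 - u**2
    return tuple(h0 * a + h1 * b + h2 * c + h3 * d
                 for a, b, c, d in zip(P1, P2, DP1, DP2))

# The segment starts at P1 (u = 0) and ends at P2 (u = 1).
print(hermite_point((0, 0), (4, 0), (0, 4), (0, -4), 0.5))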

9.6 Bezier Curves:

This spline approximation method was developed by the French engineer Pierre Bezier for use in the design of Renault automobile bodies. Bezier splines have a number of properties that make them highly useful and convenient for curve and surface design. They are also easy to implement. For these reasons, Bezier splines are widely available in various CAD systems, in general graphics packages (such as GL on Silicon Graphics systems), and in assorted drawing and painting packages (such as Aldus SuperPaint and Cricket Draw).

In general, a Bezier curve section can be fitted to any number of control

points. The number of control points to be approximated and their relative position

determine the degree of the Bezier polynomial. As with the interpolation splines, a

Bezier curve can be specified with boundary conditions, with a characterizing matrix,

or with blending functions. For general Bezier curves, the blending-function

specification is the most convenient.


Suppose we are given n + 1 control-point positions Pk = (xk, yk, zk), with k varying from 0 to n. These coordinate points can be blended to produce the following position vector P(u), which describes the path of an approximating Bezier polynomial function between P0 and Pn:

P(u) = Σ (k=0 to n) Pk BEZk,n(u),     0 ≤ u ≤ 1

The Bezier blending functions BEZk,n(u) are the Bernstein polynomials

BEZk,n(u) = C(n,k) uk (1-u)n-k

where the C(n,k) are the binomial coefficients

C(n,k) = n! / (k! (n-k)!)

Equivalently, we can define the Bezier blending functions with the recursive calculation

BEZk,n(u) = (1-u) BEZk,n-1(u) + u BEZk-1,n-1(u),     n > k ≥ 1

with BEZk,k = uk and BEZ0,k = (1 – u)k. The vector equation for P(u) represents a set of three parametric equations for the individual curve coordinates:

x(u) = Σ xk BEZk,n(u),   y(u) = Σ yk BEZk,n(u),   z(u) = Σ zk BEZk,n(u)

As a rule, a Bezier curve is a polynomial of degree one less than the number of

control points used: Three points generate a parabola, four points a cubic curve, and

so forth. Fig 9.3 demonstrates the appearance of some Bezier curves for various selections of control points in the xy plane (z = 0). With certain control-point

placements, however, we obtain degenerate Bezier polynomials. For example, a

Bezier curve generated with three collinear control points is a straight-line segment.

And a set of control points that are all at the same coordinate position produces a

Bezier “curve” that is a single point.


Bezier curves are commonly found in painting and drawing packages, as well as CAD

systems, since they are easy to implement and they are reasonably powerful in curve

design. Efficient methods for determining coordinate positions along a Bezier curve

can be set up using recursive calculations. For example, successive binomial

coefficients can be calculated as

C(n,k) = C(n,k-1) (n - k + 1) / k,     for n ≥ k.

The following example program illustrates a method for generating Bezier curves.
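The program itself is not reproduced in this text, but the following Python sketch shows one way such a program might be structured, using the recursive binomial-coefficient relation above (all names are illustrative):

def bezier_points(control_points, n_samples):
    # Generate n_samples points along the Bezier curve defined by the
    # given control points (a list of (x, y) tuples), n_samples >= 2.
    n = len(control_points) - 1

    # Successive binomial coefficients: C(n,k) = C(n,k-1) * (n-k+1) / k.
    coeffs = [1] * (n + 1)
    for k in range(1, n + 1):
        coeffs[k] = coeffs[k - 1] * (n - k + 1) // k

    curve = []
    for step in range(n_samples):
        u = step / (n_samples - 1)
        x = y = 0.0
        for k, (px, py) in enumerate(control_points):
            bez = coeffs[k] * (u ** k) * ((1 - u) ** (n - k))
            x += px * bez
            y += py * bez
        curve.append((x, y))
    return curve

# Four control points give a cubic Bezier curve.
print(bezier_points([(0, 0), (1, 2), (3, 2), (4, 0)], 5))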


Fig 9.3 Examples of two-dimensional Bezier’s curves.

Properties of Bezier Curves:

A very useful property of a Bezier curve is that it always passes through the first and

last control points. That is, the boundary conditions at the two ends of the curve are

P(0) = P0

P(1) = Pn

Values of the parametric first derivatives of a Bezier curve at the end points can be

calculated from control-point coordinates as

P’(0) = - nP0 + nP1

P’(1) = - nPn-1 + nPn

Thus, the slope at the beginning of the curve is along the line joining the first two

control points, and the slope at the end of the curve is along the line joining the last

two endpoints. Similarly, the parametric second derivatives of a Bezier curve at the

endpoints are calculated as

P”(0) = n(n-1)[(P2 – P1) – (P1 – P0)]

P”(1) = n(n-1)[(Pn-2 – Pn-1) – (Pn-1 – Pn)]

Another important property of any Bezier curve is that it lies within the convex hull

(convex polygon boundary) of the control points. This follows from the properties of

Bezier blending functions: They are all positive and their sum is always 1,

so that any curve position is simply the weighted sum of the control-point positions.

The convex-hull property for a Bezier curve ensures that the polynomial smoothly follows the control points without erratic oscillations.


Design Techniques Using Bezier Curves:

Closed Bezier curves are generated by specifying the first and last control points at the same position. Also, specifying multiple control points at a single coordinate position gives more weight to that position: if a single coordinate position is input as two control points, the resulting curve is pulled nearer to this position.

We can fit a Bezier curve to any number of control points, but this requires the

calculation of polynomial functions of higher degree. When complicated curves are to

be generated, they can be formed by piecing several Bezier sections of lower degree

together. Piecing together smaller sections also gives us better control over the

shape of the curve in small regions. Since Bezier curves pass through endpoints, it is

easy to match curve sections (zero-order continuity). Also, Bezier curves have the

important property that the tangent to the curve at end point is along the line joining

that end point to the adjacent control point. When the two curve sections have the

same number of the control points , we obtain C1 continuity by choosing the first

point of the new section as the last control point of the previous section and by

positioning the second control point of the new section at position.

Pn + (Pn- Pn-1)

Thus, the three control points are collinear and equally spaced.

We obtain C2 continuity between Bezier curve sections by additionally calculating the position of the third control point of the new section in terms of the positions of the last three control points of the previous section as

Pn-2 + 4(Pn-Pn-1)

Cubic Bezier’s curves:

Cubic Bezier curves are generated with four control points. The four blending functions for cubic Bezier curves, obtained by substituting n = 3 into BEZk,n(u) = C(n,k) uk (1-u)n-k, are

BEZ 0,3 (u) = (1-u)3

BEZ 1,3 (u) = 3u (1-u)2

BEZ 2,3 (u) = 3 u2(1-u)


BEZ 3,3 (u) = u3

At the end positions of the cubic Bezier curve, the parametric first derivatives are P′(0) = 3(P1 – P0), P′(1) = 3(P3 – P2), and the parametric second derivatives are P″(0) = 6(P0 – 2P1 + P2), P″(1) = 6(P1 – 2P2 + P3).

We can use these expressions for the parametric derivatives to construct piecewise curves with C1 or C2 continuity between sections. By expanding the polynomial expressions for the blending functions, we can write the cubic Bezier point function in the matrix form

P(u) = [ u3  u2  u  1 ] . MBez . [ P0  P1  P2  P3 ]T

where the Bezier matrix is

         | -1   3  -3   1 |
MBez =   |  3  -6   3   0 |
         | -3   3   0   0 |
         |  1   0   0   0 |

9.7 Spline Curves:

The B-spline basis contains the Bernstein basis as a special case. The B-spline basis is non-global: each vertex Bi is associated with a unique basis function, so each vertex affects the shape of the curve only over the range of parameter values where its associated basis function is nonzero. The B-spline basis also allows the order of the basis functions, and hence the degree of the resulting curve, to be independent of the number of vertices. It is possible to change the degree of the resulting curve without changing the number of vertices of the defining polygon.

A general expression for the calculation of coordinate positions along B-spline curve

in a blending function formulation can be written as

P(u) = Σ (i=1 to n+1) Bi Ni,k(u),     umin ≤ u ≤ umax,   2 ≤ k ≤ n+1


9.8 Surfaces:

The surface is defined by a net or mesh of orthogonal plane curves lying in plane

sections plus multiple orthogonal projections of certain three dimensional feature

lines. Surfaces and their description play a critical role in design and manufacturing.

Surface description also plays an important role in the representation of data

obtained from medical, geological, physical and other natural phenomena.

9.8.1 Bezier surfaces:

Bezier surfaces have the same properties as Bezier curves, and they provide a

convenient method for interactive design applications. Two sets of orthogonal Bezier

curves can be used to design an object surface by specifying an input mesh of

control points.

Fig9.4 Bezier surfaces

9.8.2 B- Spline Surfaces:

B-spline surfaces exhibit the same properties as those of their component B-spline curves. A B-spline surface can be constructed from selected values for

parameters d1 and d2 and from the specified knot vector.

Formation of B – Spline surface is similar to that of Bezier splines.

9.8.3 Hermite Surfaces:

Applying a procedure similar to that for Bezier surfaces, we can obtain Hermite surfaces.


Fig 9.5 Hermite surface

9.9 Sweeping method of interpolation:

Sweep representations are useful for constructing three-dimensional objects that

possess rotational, translational or other symmetries. We can represent such objects by

specifying a two dimensional shape and a sweep that moves the shape through a

region of space.

A rotational sweep example is shown in fig 9.7, where the periodic spline cross section is rotated about an axis of rotation specified in the plane of the

cross section to produce the wire frame representation shown in fig (b).

Fig 9.6: Constructing a solid with a translational sweep. Translating the control points of the periodic spline curve in (a) generates the solid shown in (b), whose surface can be described with the point function P(u,v).


Fig 9.7

Fig 9.7: Constructing a solid with a rotational sweep. Rotating the control points of the periodic spline curve in (a) about the given rotation axis generates the solid shown in (b), whose surface can be described with the point function P(u,v).

Chapter 10: COLOR FOR RASTER GRAPHICS

We have seen the different mechanisms for display devices (Color displays) in the

first chapter. Now we concentrate on how color is represented on a monitor and

different mechanisms for generating color displays with combination of red, green

and blue light. Color is an immensely complex subject, one that draws on concepts

and results from physics, physiology, art and graphic design. The color of an object

depends not only on the object itself, but also on the light source illuminating it, on

the color of the surrounding area, and on the human visual system. In this chapter,

we introduce some of the areas of color that are most relevant to computer graphics.

10.1 Displaying Light Intensities:


Intensity values for each surface calculated by a shading model should be converted

to one of the allowable intensity levels for each particular graphic system in use.

Allowable pixel intensity values range from 0 to 1: zero indicates an off pixel position and one indicates maximum intensity. The remaining values are spaced over this range so that the ratio of successive intensities is constant. If n intensity levels are to be assigned, with the lowest level called I1 and the highest In, we require

I2/I1 = I3/I2 = ... = In/In-1 = r

The ratio r is determined from the values I1 and n by expressing In in terms of I1:

In = r^(n-1) I1

Since In = 1, r = [1/I1]^(1/(n-1)), where I1 is a convenient value near 0.

For example, if we want to set three intensity levels above zero with I1 = 1/4, then n - 1 = 2 and r = [1/(1/4)]^(1/2) = 2. The four possible intensity levels for a pixel are then assigned the values 0, 1/4, 1/2, 1.
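A short Python sketch of this calculation (here n counts the levels above zero, matching the example):

def intensity_levels(I1, n):
    # Return n intensity levels from I1 up to 1 with a constant ratio r.
    r = (1.0 / I1) ** (1.0 / (n - 1))
    return [I1 * r ** k for k in range(n)]

# I1 = 1/4 and three levels above zero give r = 2 and levels 1/4, 1/2, 1.
print(intensity_levels(0.25, 3))   # [0.25, 0.5, 1.0]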

10.2 Chromatic Color:

Achromatic light is what we see on a black and white television set or display

monitor. An observer of achromatic light normally experiences none of the sensations we associate with red, blue, yellow and so on. Quantity of light is the only attribute of achromatic light.

The visual sensations caused by colored light are much richer than those caused by achromatic light. Discussions of color perception usually involve three quantities.

These are hue, saturation and lightness. Hue distinguishes among colors such as red,

green, purple and yellow. Saturation refers to how far a color is from gray of equal

intensity. Red is highly saturated, pink is relatively unsaturated, royal blue is highly

saturated, sky-blue is relatively unsaturated. Unsaturated colors include more white

light than saturated colors. Lightness embodies the achromatic notion of perceived

intensity of a reflecting object. Brightness is used instead of lightness to refer to the

perceived intensity of a self-luminous object, such as a light bulb, the sun, or a CRT.

It is required to specify and measure colors if we use them in computer graphics. For

reflected light we can do this by visually comparing a sample of unknown color

against a set of standard samples. The unknown and sample colors must be viewed

under a standard light source, since the perceived color of a surface depends both on the surface itself and on the light under which the surface is viewed.

Artists often specify colors as different tints, shades and tones of strongly saturated or pure pigments. A tint results from adding white pigment to a pure pigment, thereby decreasing saturation. A shade comes from adding black pigment to a pure pigment, thereby decreasing lightness. A tone is a consequence of adding both black and white pigments to a pure pigment. All of these produce different colors of the same hue with varying saturation and lightness. Mixing just black and white pigments creates grays.

10.2.1 Properties of light:

The pigment-mixing methods depend on human observers' judgments, the lighting, the size of the sample, the surrounding color and the overall lightness of the environment. To specify colors more objectively, we refer to colorimetry from physics. The terms in colorimetry are dominant wavelength, excitation purity and luminance. Dominant wavelength is the wavelength of the color we see when viewing the light.

The excitation purity of a colored light is the proportion of pure light of the dominant

wavelength and of white light needed to define the color. A completely pure color is

100 percent saturated and thus contains no white light, whereas a mixture of a pure color and white light has a saturation somewhere between 0 and 100 percent.

White light and grays are 0 percent saturated, containing no color of any dominant

wavelength. The correspondences between these perceptual and colormetry terms

are as follows:

Fundamentally light is electromagnetic energy in the 400 to 700nm wavelength part

of the spectrum, which is perceived as the colors from violet through indigo, blue,

green, yellow, and orange to red. The amount of energy present at each wavelength is

represented by a spectral energy distribution p(λ). This is shown in the following

figure.

The correspondence between the perceptual and colorimetric terms is:

Perceptual                               Colorimetry
Hue                                      Dominant wavelength
Saturation                               Excitation purity
Lightness (reflecting objects)           Luminance
Brightness (self-luminous objects)       Luminance

Fig10.3 Spectral Energy Distribution of a Light

The distribution represents an infinity of numbers, one for each wavelength in the visible spectrum. We can describe the visual effect of any spectral distribution much more concisely by the dominant wavelength, excitation purity and luminance. This implies that many different spectral energy distributions produce the same color; hence the relationship between spectral distributions and colors is many-to-one. Two spectral energy distributions that look the same are called metamers.

The figure 10.4 shows one of the infinitely many spectral distributions p(λ), or

metamers that produce a certain color sensation. At the dominant wavelength, there

is a spike of energy of level e2. White light, the uniform distribution of energy e1,is

also present. The excitation purity depends on the relation between e1 and e2, when

e1=e2 excitation purity is 0 percent. When e1=0 excitation purity is 100 percent.

Brightness, which is proportional to the integral of the product of the distribution and the luminous-efficiency function, depends on both e1 and e2. In general a spectral distribution may be more complex than the one shown, and it is not possible to determine the dominant wavelength merely by looking at the spectral distribution. In particular, the dominant wavelength may not be the one whose component in the distribution is largest.


Fig.10.4 Spectral Energy Distribution of a Light

According to the tristimulus theory of color perception, the retina has three kinds of color sensors, called cones, with peak sensitivities to red, green, or blue light. Experiments based on this hypothesis produce the spectral response functions shown in fig 10.5. The peak blue response is around 440 nm, green is about 545 nm and red is about 580 nm. The curves in the figure suggest that the eye's response to blue light is much less strong than its response to red or green.

Fig 10.5 Spectral response Functions

Fig 10.6 shows the luminous-efficiency function, the eye's response to light of constant luminance as the dominant wavelength is varied. Our peak sensitivity is to yellow-green light of wavelength around 550 nm. There is experimental evidence that this curve is just the sum of the three curves shown in fig 10.5. The tristimulus theory is intuitively attractive because it corresponds loosely to the notion that colors can be specified by positively weighted sums of red, green, and blue. This notion is almost true.

The three color matching functions in fig.10.7 shows the amounts of red, green, and

blue light needed by an average observer to match a color of constant luminance,


for all values of dominant wavelength in the visible spectrum. A negative value in

fig.10.7 means that we cannot match the color by adding together the primaries.

However, if one of the primaries is added to the color sample, the sample can then be matched by a mixture of the other two primaries. Hence negative values indicate that a primary was added to the color being matched. The need for negative values does not mean that the notion of mixing red, green, and blue to obtain other colors is invalid. On the contrary, a huge range of colors can be matched by positive amounts of red, green, and blue; otherwise the color CRT would not work. It does mean, however, that certain colors cannot be produced by RGB mixtures, and hence cannot be shown on an ordinary CRT.

Fig 10.6 Luminous – Efficiency Function for the Human Eye

Fig10.7 Color Matching Functions

10.2.2 CIE Chromaticity Diagram:

No finite set of color light sources can be combined to display all possible colors.

Three standard primaries were defined in 1931 by the International Commission on

Illumination , referred to as CIE. The three standard primaries are imaginary colors.

They are defined mathematically with positive color matching functions. The set of


CIE primaries is generally referred to as X, Y, Z. These X,Y and Z represent vectors in

3D additive color space. Any color Cλ is then expressed as

Cλ = XX+YY+ZZ

Where X,Y and Z designate the amounts of the standard primaries needed to match

Cλ .

In discussing color properties, it is convenient to normalize the amounts in the above equation against luminance (X+Y+Z). The normalized amounts are calculated as

x = X/(X+Y+Z),   y = Y/(X+Y+Z),   z = Z/(X+Y+Z)

with x + y + z = 1. Thus, any color can be represented with just the x and y amounts.

Since we have normalized against luminance, parameters x and y are called the

chromaticity values because they depend only on the hue and purity. Also if we

specify colors with only x and y values, we cannot obtain the values X,Y and Z.

Therefore , a complete description of a color is typically given with three values x, y,

and Y. The remaining CIE values are then calculated as

X = (x/y) Y,   Z = (z/y) Y

where z = 1 - x - y.
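These two conversions can be written as a small Python sketch (plain arithmetic; the function names are illustrative):

def xyz_to_xyY(X, Y, Z):
    # Normalize XYZ to chromaticity coordinates (x, y) plus luminance Y.
    total = X + Y + Z
    return X / total, Y / total, Y

def xyY_to_XYZ(x, y, Y):
    # Recover X, Y, Z from chromaticity (x, y) and luminance Y.
    z = 1.0 - x - y
    return (x / y) * Y, Y, (z / y) * Y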

Using chromaticity coordinates (x,y), we can represent all colors on a 2D diagram.

When we plot the normalized amounts x and y for colors in the visible spectrum, we

obtain the tongue-shaped curve shown in fig.10.8. This curve is called the CIE

chromaticity diagram. Points along the curve are the “pure” colors in the

electromagnetic spectrum, labeled according to wavelength in nanometers from the

red end to the violet end of the spectrum. Interior points represent all possible visible

color combinations. Point C in the diagram corresponds to the white-light position.

Actually this point is plotted for a white-light source known as illuminant C, which is

used as a standard approximation for “average “ day light.


Fig 10.8 CIE Chromaticity Diagram

Luminance values are not available in the chromaticity diagram because of

normalization. Colors with different luminance but the same chromaticity map to the same point. The chromaticity diagram is useful for the following:

Comparing color gamuts for different sets of primaries.

Identifying complementary colors.

Determining the dominant wavelength and purity of a given color.

Color gamuts are represented on the diagram as straight-line segments or as polygons. All colors along the line joining points C1 and C2 in fig 10.9 can be obtained by mixing appropriate amounts of the colors C1 and C2. The color gamut for three points, such as C3, C4 and C5 in fig 10.9, is a triangle with vertices at the three color positions; three primaries can generate only colors inside or on the bounding edges of the triangle. Thus, the chromaticity diagram helps us understand why no set of three primaries can be additively combined to generate all colors, since no triangle within the diagram can encompass all colors.

Fig 10.9 Color Gamut’s Defined on The Chromaticity Diagram for Two Color And Three Color

System


Fig 10.10 Representing complementary colors in the diagram

Fig10.11 determining dominate wavelength and purity with the diagram

Color gamut’s for video monitors and hard-copy devices are conveniently compared

on the chromaticity diagram.

Since the color gamut for two points is a straight line, complementary colors must be represented on the chromaticity diagram as two points situated on opposite sides of C and connected with a straight line. When we mix proper amounts of the two colors C1 and C2 in fig 10.10, we obtain white light.

We can also use the interpretation of color gamut for two primaries to determine the

dominant wavelength of a color. For color point C1 in fig10.10, we can draw a

straight lie from C through C1 to intersect the spectral curve at point Cs. Color C1 can

then be represented as a combination of white light C and the spectral color Cs,

Thus, the dominant wave length of C1 is Cs.

This method for determining dominant wavelength will not work for color points that

are between C and the purple line. Drawing a line from C through point C2 in fig 10.10 takes us to point Cp on the purple line, which is not in the visible spectrum. Point C2 is referred to as a non-spectral color, and its dominant wavelength is taken as the complement of Cp that lies on the spectral curve (point Csp). Non-spectral colors are

in the purple-magenta range and have spectral distributions with subtractive


dominant wavelengths. They are generated by subtracting the spectral dominant

wavelength (such as Csp) from white light.

For any color point, such as C1 in fig 10.10, we determine the purity as the relative distance of C1 from C along the straight line joining C to Cs. If dC1 denotes the distance from C to C1 and dCs the distance from C to Cs, we can calculate the purity as the ratio dC1/dCs. Color C1 in this figure is about 25 percent pure, since it is situated at about one-fourth the total distance from C to Cs. At position Cs, the color point would be 100 percent pure.

10.3 Color Models for Raster Graphics:

A color model is a specification of a 3D color coordinate system and a visible subset of that coordinate system within which all colors in a particular color gamut lie. The purpose of a color model is to allow convenient specification of colors within some color gamut.

In this section we discuss the following color models:

RGB color model

YIQ color model

CMY color model

HSV color model

HLS color model

The first three color models are hardware oriented: RGB is used with CRT monitors, YIQ with broadcast television, and CMY with some color printing devices. The last two models are less complex and are user oriented.

10.3.1 RGB color Model:

The RGB color model, used with color CRT monitors and color raster graphics, employs a Cartesian coordinate system. The tristimulus theory of vision is the basis for displaying color output on a video monitor using the three color primaries red, green and blue. We can represent this model with the unit cube defined on the R, G and B axes as shown in fig 10.12. The origin represents black and the vertex with coordinates (1,1,1) is white.

Vertices of the cube on the axes represent the primary colors, and the remaining vertices represent the complementary color of each primary color.

The RGB color scheme is an additive model. Intensities of primary colors are added to

produce other colors. Each color point within the bounds of the cube can be

represented as the triple(R, G,B), where values for R,G,B are assigned in the range

from 0 to 1. Thus a color Cλ expressed in RGB component as


Cλ =RR+GG+BB

The magenta vertex is obtained by adding red and blue to produce triple(1,0,1) and

white at (1,1,1) is the sum of the red, green and blue vertices. Shades of gray are represented along the main diagonal of the cube, where each point has an equal contribution from each primary color, so that a gray shade halfway between black and white is represented as (0.5, 0.5, 0.5).

fig 10.12 The RGB Color Model

10.3.2 YIQ Color Model:

The YIQ color model is used in television broadcasting. The RGB monitor requires

separate signals for red, green, and blue components of an image. But a television

monitor uses a single composite signal. The National Television System Committee

(NTSC) color model for formatting the composite video signals is the YIQ color model,

which is based on concepts in the CIE XYZ model.

The Y component of YIQ color model is same as in the XYZ model. Luminance

information is contained in the Y parameter, while chromaticity information is

incorporated into the I and Q parameters. A combination of red, green, and blue

intensities is chosen for the Y parameter to yield the standard luminosity curve. Since Y contains the luminance information, black-and-white television monitors use only the Y signal. The largest bandwidth in the NTSC video signal (4 MHz) is assigned to the Y information. Parameter I contains orange-cyan hue information that provides the

flesh-tone shading, and occupies a bandwidth of approximately 1.5 MHz. Parameter

Q carries green-magenta hue information in a bandwidth of about 0.6 MHz.


The YIQ model uses a 3D Cartesian coordinate system, with the visible subset being a convex polyhedron that maps into the RGB cube. An RGB signal can be converted to a

television signal using NTSC encoder, which converts RGB values to YIQ values.

The RGB to YIQ conversion is accomplished with the following transformation.

The quantities in the first row reflect the relative importance of green and red and

the relative unimportance of blue in brightness.

The YIQ model or an NTSC video signal can be converted to an RGB signal using an NTSC decoder; the YIQ-to-RGB conversion is accomplished with the inverse transformation.
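The transformation matrices themselves are not reproduced above. As an illustrative sketch, Python's standard colorsys module implements approximate NTSC encode and decode routines that follow the same scheme, with Y a weighted sum dominated by green and red and only a small blue contribution:

import colorsys

y, i, q = colorsys.rgb_to_yiq(1.0, 0.5, 0.25)   # encode an RGB triple
r, g, b = colorsys.yiq_to_rgb(y, i, q)          # decode back to RGB
print((y, i, q), (round(r, 3), round(g, 3), round(b, 3)))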

The YIQ model exploits two useful properties of our visual system. First, the system is

more sensitive to changes in luminance than to changes in hue or saturation. That is,

our ability to discriminate color information spatially is weaker than our ability to discriminate monochrome information spatially. Second, objects that cover a very

small part of our field of view produce a limited color sensation, which can be

specified adequately with one rather than two color dimensions.

10.3.3 CMY Color Model:

The CMY color model is useful for describing color output to hardcopy devices. The

subset of the Cartesian coordinate system for the CMY model is the same as that for

the RGB except that white instead of black is at the origin. A color model defined with

primary colors Cyan, Magenta, and Yellow is useful for color output devices. Unlike

video monitors, which produce a color pattern by combining light from the screen

phosphors, hardcopy output devices such as plotters produce a color picture by coating paper with colored pigments. We see the colors by reflected light, a subtractive process.

Adding blue and green light forms cyan. When white light is reflected from cyan-colored ink, the reflected light has no red component; that is, the ink absorbs red light. Magenta ink subtracts the green component from incident light and yellow

subtracts the blue component. A unit cube representation for the CMY model is

shown in the following figure.


fig 10.13

Here the origin represents white and the point (1,1,1) represents black, since all components of the incident light are subtracted. The main diagonal of the cube represents shades of gray. A combination of cyan and magenta produces blue light, and similar subtractive combinations give the other colors.

The printing process with the CMY model produces a collection of four ink dots, whereas an RGB monitor uses a collection of three phosphor dots. One dot is used for each of the primary colors (cyan, magenta and yellow), and one dot is black, since the combination of the three primary inks sprayed over each other and allowed to mix before drying typically produces only a dark gray rather than a true black.

The conversion from RGB representation to a CMY representation is accomplished by

the following transformation.

Where the white is represented in the RGB system as the unit column vector.

Similarly, conversion from a CMY color representation to an RGB color representation is accomplished by

the following transformation.

Where black is represented in CMY system as unit column vector.
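The subtractive relationship can be written directly; a minimal Python sketch (function names are illustrative):

def rgb_to_cmy(r, g, b):
    # CMY = (1,1,1) - RGB: white paper minus the reflected primaries.
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    # RGB = (1,1,1) - CMY, the inverse subtraction.
    return 1.0 - c, 1.0 - m, 1.0 - y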

10.3.4 HSV Color Model:

The RGB, CMY and YIQ color models are hardware oriented. By contrast, the HSV color model is user oriented: the user specifies a color and the amount of white and black to add to the color to obtain different shades, tones and tints.

The three-color parameters presented to a user are hue, saturation, and value. The

three dimensional representation of the HSV model is derived from the RGB color

model.


fig 10.14 color hexagon

Viewing the RGB cube along its main diagonal gives the outline of a hexagon. This boundary represents the various hues and is used as the top of the HSV hexacone.

In the hexacone, saturation is measured along the horizontal axis and value along the vertical axis through the center of the hexacone.

Hue is represented as an angle about the vertical axis, from 0° at red through 360°. Vertices are separated by 60° intervals: yellow is at 60°, green at 120°, and cyan opposite red at 180°. Complementary colors are 180° apart.

fig 10.15 The HSV hexacone

Saturation varies from 0 to 1; S is the ratio of the purity of a selected hue to its maximum purity at S = 1. S = 0 gives the gray scale, and S = 0.25 gives a hue that is one-quarter pure. V varies from 0 to 1 from the apex of the hexacone to the top; the apex is black.


Pure hues have V = 1; at the top of the hexacone, colors have maximum intensity.

Hue selection is done via the hue angle H with V and S equal to one. The user then describes the desired color in terms of adding either white or black to the pure hue. Adding black decreases the setting of V while S is held constant. To get a dark blue, V could be set to 0.4 with S = 1 and H = 240°. A light blue could be designated with S = 0.3 while V = 1 and H = 240°. Adding both black and white decreases both V and S.

The cross-sectional plane of the HSV hexacone represents the tints, shades and tones shown in the following figure. Adding black to a pure hue decreases V down the side of the hexacone; thus the various shades are represented with S = 1 and 0 ≤ V ≤ 1. Adding white to a pure hue gives the different tints across the top plane, where V = 1 and 0 ≤ S ≤ 1. Adding both black and white, producing color points within the triangular cross-sectional area of the hexacone, specifies the various tones.

Fig 10.16 Cross section of the hexacone

The human eye can distinguish about 128 hues, 130 tints and 23 shades; hence we can distinguish about 128 × 130 × 23 = 382,720 different colors. For graphics applications, 128 hues, 8 saturation levels and 15 value settings are enough. For such a setting, 14 bits of color storage per pixel are enough for a system to make the HSV colors available to a user.
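Python's standard colorsys module implements the HSV/RGB relationship described here (it takes the hue as a fraction of a full turn rather than as an angle); a small usage sketch of the two blue examples above:

import colorsys

# Dark blue: H = 240 degrees, S = 1, V = 0.4 (hue given as 240/360).
print(colorsys.hsv_to_rgb(240 / 360, 1.0, 0.4))   # approx. (0.0, 0.0, 0.4)

# Light blue: H = 240 degrees, S = 0.3, V = 1.
print(colorsys.hsv_to_rgb(240 / 360, 0.3, 1.0))   # approx. (0.7, 0.7, 1.0)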


fig 10.17 RGB cube and its sub cube

10.3.5 HLS Color Model:

Tektronix uses HLS color model. This model has double cone representation as shown

in fig 10.18. The three-color parameters in this model are called hue, lightness and

saturation. Hue specifies an angle about the vertical axis that locates a chosen hue.

In this model H= 00 corresponds to blue. The remaining colors are specified around

the perimeter of the cone in the same order as in the HSV model. Magenta is at 60 0 ,

the red is at 1200, and cyan is located at H=1800 apart on the double cone.

The vertical axis in this model is called lightness, L. At L=0 we have black, and white

is at L=1. Gray scale is along the L axis, and the ‘pure hues’ lies on the L=0.5 plane.

Saturation parameter S again specifies purity of a color. This parameter varies from 0

to 1, and pure hues are this for which S=1 and L=0.5. As S decreases, the hues are

said to be less pure. At S=0, we have the gray scale.

fig 10.18 HLS double cone

As in the HSV model, the HLS system allows a user to think in terms of making a

selected hue darker or lighter. A hue is selected with hue angle H, and the desired


shade, tint, or tone is obtained by adjusting L and S. Colors are made lighter by increasing L and made darker by decreasing L. When S is decreased, the colors move towards gray.

10.4 Reproduction of Color:

Color images are reproduced in print in a way similar to that used for monochrome images, but four sets of halftone dots are printed, one for each of the subtractive primaries and another for black. In a process called undercolor removal, black replaces equal amounts of cyan, magenta, and yellow. This creates a darker black than is possible by mixing the three primaries, and hastens drying by decreasing the amounts of cyan, magenta, and yellow ink needed.

The spatial integration of different colors is the same phenomenon we experience when viewing the triads of red, green and blue dots on a color monitor.

The color reproduction in print and on CRT’s depends on the same spatial integration

used in monochrome reproduction. The monochrome dithering techniques can also

be used with color to extend number of available colors, again at the expense of

resolution. Consider a color display with 3 bits per pixel, one each for red, green,

blue. We can use 2×2 pixel pattern area to obtain 125 different colors. Each pattern

can display five intensities for each of red, green, and blue by using halftone

patterns. This results in 5×5×5=125 color combinations.

Not all color reproduction depends exclusively on spatial integration. For instance, inkjet plotters, xerographic color copiers, and thermal color printers actually mix subtractive pigments on the paper's surface to obtain a small set of different colors. In xerography, the colored pigments are first deposited in three successive steps, then are heated and melted together. The inks sprayed by the plotter mix before drying. Spatial integration may be used to expand the color range further.

Making movie film or slides that look exactly like an image on a display is difficult,

because many variables are involved. They include the gamma correction of the display and of the CRT used in the film recorder, the color of light emitted by the CRT in the film recorder, the filters used in the film recorder, the type of film used, the quality and temperature of the developing chemicals, the length of time the film is in the chemicals, and the color of light emitted by the bulb in the slide or film projector. Fortunately, all these variables can be quantified and controlled, although with considerable difficulty.


Controlling the color match on printed materials is also difficult. The printing process with its cyan, magenta, yellow, and black primaries requires careful quality control to maintain registration and ink flow. The paper texture, absorbency, and gloss also

affect the result. Even if extreme care is taken in color reproduction, the result may

not seem to match the original. Lighting conditions and reflections from the display

can cause colors with the same measured chromaticity coordinates to appear to be

different.

CHAPTER 11: RENDERING

The creation of realistic pictures is an important goal in fields such as simulation,

design, entertainment and advertisement, research and education. Creating realistic computer-generated images is often an easier, less expensive and more effective way to see preliminary results than building models and prototypes, and it also allows more alternative designs to be considered. Realistic images are becoming an essential tool

in research and education. The process of producing realistic images or pictures is

called rendering. The rendering techniques are:

1.Transparency

2. Shadows

3. Texture mapping.

11.1 TRANSPARENCY:

The surface of a transparent object may receive light from behind as well as from the front. Transparent objects produce both reflected and transmitted light. The relative

contribution of the transmitted light depends on the degree of transparency of the

surface and whether any light sources or illuminated surfaces are behind the

transparent surface. An example of a transparent material is glass. Transparent objects have a transparency coefficient T as well as values for the reflectivity and specular reflection. The transparency coefficient may depend on the thickness of the

object. The transparency is modeled in two ways:


1. Non-refractive transparency.

2. Refractive transparency.

11.1.1 Non-refractive transparency:

The simplest method of modeling transparency ignores refraction, so light rays are not bent as they pass through the surface. Thus, whatever is visible on the line of sight through a transparent surface is also geometrically located on that line of sight. Although refractionless transparency is not realistic, it can often be a more useful effect than refraction. Two different methods have been commonly used to approximate the way

in which the colors of two objects are combined when one is seen through the other.

They are:

1. Interpolated Transparency.

2. Filtered Transparency.

1. Interpolated Transparency: Consider a transparent polygon 1 between the viewer and an opaque polygon 2, as shown in fig 11.1.

Fig 11.1 Cross section of two polygons

Interpolated transparency determines the shade of a pixel in the intersection of two

polygons projections by linearly interpolating the individual shades for the two

polygons.

Iλ = (1- TK1) Iλ1+ TK1 Iλ2

The transparency coefficient TK1 measures the transparency of polygon 1, and ranges between 0 and 1. When TK1 is 0, the polygon is opaque and transmits no light. When TK1 is 1, the polygon is perfectly transparent and contributes nothing to the intensity Iλ. The value (1 - TK1) is called the polygon's opacity. Interpolated transparency may be thought of as modeling a polygon that consists of a fine mesh of opaque material through which other objects may be seen; TK1 is the fraction of the mesh's surface that can be seen through.


2. Filtered Transparency: Filtered transparency treats a polygon as a

transparent filter that selectively passes different wavelengths. It can be

modeled by

Iλ = Iλ1+Tk1 Ot λ Iλ2

where Otλ is polygon 1's transparency color. A colored filter may be modeled by choosing a different value of Otλ for each λ. In either interpolated or filtered transparency, if additional transparent polygons are in front of these polygons, then the calculation is invoked recursively for polygons in back-to-front order, each time using the previously computed Iλ as Iλ2.
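A small Python sketch of the two blending rules (tk1 is the transparency coefficient and the remaining arguments are per-wavelength intensities; names are illustrative):

def interpolated_transparency(tk1, i1, i2):
    # I = (1 - TK1) * I1 + TK1 * I2 : a linear blend of the two shades.
    return (1.0 - tk1) * i1 + tk1 * i2

def filtered_transparency(tk1, ot1, i1, i2):
    # I = I1 + TK1 * Ot * I2 : polygon 1 acts as a selective filter.
    return i1 + tk1 * ot1 * i2

# An almost-opaque front polygon (TK1 = 0.2) lets little of the back shade through.
print(interpolated_transparency(0.2, 0.9, 0.3))   # approx. 0.78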

11.1.2 Refractive transparency:

Refractive transparency is significantly more difficult to model than is non- refractive

transparency, because the geometrical and optical lines of sight are different. If

refraction is considered in fig.11.2, object A is visible through the transparent object

along the line of sight shown. If refraction is ignored, object B is visible.

Fig 11.2 Refraction

The relation between the angle of incidence θi and the angle of refraction θt is given by Snell's law:

sin θi / sin θt = ηtλ / ηiλ

where ηiλ and ηtλ are the indices of refraction of the media through which the incident and transmitted light pass. The index of refraction varies with the wavelength of the light and even with temperature. A vacuum has an index of refraction of 1.0, as does the atmosphere to a close approximation; all materials have higher values. The index of refraction's

wavelength-dependence is evident in many instances of refraction.

11.2 SHADOWS:

When the positions of the observer and the light source are separate, shadows appear. If the observer's position coincides with the light source, no shadows appear. Visible


surface algorithms determine which surfaces can be seen from the viewpoint.

Shadow algorithms determine which surfaces can be seen from the light source. The surfaces that are visible from the light source are not in shadow; those that are not visible from the light source are in shadow.

11.2.1 Applications of Shadows:

Shadows are useful in:

Simulation,

Air conditioning,

Solar calculations for building and spacecraft design applications.

A Shadow consists of two parts:

a) An umbra: The central dense, black, sharply defined shadow area is the umbra.

b) Penumbra: The lighter area surrounding the umbra is called the penumbra.

The point light sources generally used in computer graphics generate only umbra

shadows. For distributed light sources of finite dimension both umbra and penumbra

shadows result.

There are two types of shadows:

a) Self Shadows: Self shadows result when the object itself prevents light from reaching some of its planes. Self-shadow planes are self-hidden planes when the scene is viewed from the position of the light source.

b) Projected Shadows: A projected shadow results when an intervening object prevents light from reaching another object in the scene. The shadow on the base

plane in fig.11.2 is an example. Projected Shadows are found by projecting all non

self hidden planes into the scene from the position of light source.

11.2.2 Shadow Algorithms are:

1. Scan – Line Generation of Shadows.

2. A Two-Pass Object-Precision Shadow Algorithm.

3. Shadow Volumes.

4. A Two-Pass Z-Buffer Shadow Algorithm.

11.2.2.1 Scan – Line Generation of Shadows:

One of the oldest methods for generating shadows is to augment a scan-line

algorithm to interleave shadow and visible-surface processing. Using the light source as the center of projection, the edges of polygons that might potentially cast shadows are

projected onto the polygons intersecting the current scan line. When the scan

crosses one of these shadow edges, the colors of the image pixels are modified accordingly. A brute-force implementation of this algorithm must compute all n(n-1)


projections of every polygon on every other polygon. Bouknight and Kelley instead use a clever preprocessing step in which all polygons are projected onto a sphere surrounding the light source, with the light source as the center of projection. Pairs of projections whose extents do not overlap can be eliminated, and a number of other special cases can be identified to limit the number of polygon pairs that need to be considered by the rest of the algorithm. Then they compute the projection from the light source of each polygon onto the plane of each of those polygons that they determined it could shadow, as shown in fig 11.3. Each of these shadowing polygon projections has associated information about the polygons casting and potentially receiving the shadow. While the scan-line algorithm performs its regular scan, a shadow scan keeps track of which shadowing polygon projection edges are crossed, and thus which shadowing polygon projections the shadow scan is currently 'in'. When the shade for a span is computed, the span is in shadow if the shadow scan is 'in' one of the shadow projections cast on the polygon's span; thus span bc is in shadow while spans ab and cd are not.

Fig 11.3 Scan Line Shadow Algorithm

11.2.2.2 A Two-Pass Object-Precision Shadow Algorithm:

Atherton, Weiler and Greenberg developed this algorithm. It performs shadow determination before visible-surface determination. The object description is processed by the same visible-surface algorithm twice, once for the viewpoint of the observer and once for the light source.

The results are then combined to determine the pieces of each visible polygon that are lit by the light source, and the scene is scan converted. Since the shadows do not depend on the observer's viewpoint, all the shadow calculations may be


performed just once for a series of images of the same objects seen from many

different view points, as long as the light source and objects are fixed.

The algorithm, shown in overview in fig 11.2, first determines those surfaces that are visible from the light source's viewpoint, using the Weiler-Atherton visible-surface algorithm. The output of this pass is a list of lit polygons, each of which is tagged with the identity of its parent polygon. All the objects must fit into the light source's view volume; if parts of the objects fall outside it, multiple non-overlapping view volumes that radiate out from the light source can be constructed, a technique called sectoring. The lit polygon fragments are then merged with the original object database as surface-detail polygons. Hidden-surface removal is then performed on a copy of this merged database from the viewpoint of an arbitrary observer, again using the Weiler-Atherton algorithm. A polygon scan-conversion algorithm is then used to render the image. Visible surfaces covered by surface-detail polygons are rendered as lit, whereas uncovered visible surfaces are rendered in shadow.

11.2.2.3 Shadow Volumes:

Crow describes how to generate shadows by creating for each object a shadow volume, the region of space that the object blocks from the light source. A shadow volume is defined by the light source and an object and is bounded by a set of invisible shadow polygons. As shown in fig 11.4, there is one quadrilateral shadow polygon for each silhouette edge of the object relative to the light source. Three sides of a shadow polygon are defined by a silhouette edge of the object and the two lines emanating from the light source and passing through that edge's endpoints.
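To make the construction concrete, the following is a minimal Python sketch (not from the original text) that builds one quadrilateral shadow polygon from a silhouette edge and a point light; the 'far' distance standing in for infinity is an assumed parameter.

def shadow_quad(p0, p1, light, far=1000.0):
    # Direction from the light through an edge endpoint, extended far away.
    def extend(p):
        d = [p[i] - light[i] for i in range(3)]
        n = sum(c * c for c in d) ** 0.5
        return [p[i] + far * d[i] / n for i in range(3)]
    # The quadrilateral: the silhouette edge plus its two extended endpoints.
    return [p0, p1, extend(p1), extend(p0)]

# Example: edge from (0,0,0) to (1,0,0) lit from a point above it.
print(shadow_quad([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 5.0, 0.0]))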

Each shadow polygon has a normal that points out of the shadow volume.

Shadow volumes are generated only for polygons facing the light. In the

implementation described by Bergeron, the shadow volume and hence each of its

shadow polygons is capped on one end by the original object polygon and on the

other end by a scaled copy of the object polygon whose normal has been inverted.

This scaled copy is located at a distance from the light beyond which its attenuated

energy density is assumed to be negligible.

We can think of this distance as the light's sphere of influence. Any point outside the sphere of influence is effectively in shadow and does not require any additional shadow processing. In fact, there is no need to generate a shadow volume for any object wholly outside the sphere of influence. We can generalize this approach to apply to nonuniformly radiating sources by considering a region of influence. The shadow volume may also be further clipped to the view volume if the

view volume is known in advance. The cap polygons are also treated as shadow

polygons by the algorithm.


Fig 11.4

Shadow polygons are not rendered themselves, but are used to determine whether the other objects are in shadow. Relative to the observer, a front-facing shadow polygon (polygon A or B in fig 11.4) causes those objects behind it to be shadowed. A back-facing shadow polygon (polygon C) cancels the effect of a front-facing one. Consider a vector from the viewpoint V to a point on an object. The point is in shadow if the vector intersects more front-facing than back-facing shadow polygons. This is the only case in which a point is shadowed when V itself is not shadowed; therefore, point B is lit. If V is in shadow, there is one additional case in which a point is shadowed: when all the back-facing shadow polygons for the object polygons shadowing the eye have not yet been encountered.

We can compute whether a point is in shadow by assigning to each front-facing

(relative to the viewer) shadow polygon a value of +1 and to each back-facing

shadow polygon a value of -1. A counter is initially set to the number of shadow

volumes that contain the eye and is incremented by the values associated with all

shadow polygons between the eye and the point on the object. The point is in

shadow if the counter is positive at the point. The number of shadow volumes

containing the eye is computed only once for each view point, by taking the negative

of the sum of the values of all shadow polygons intercepted by an arbitrary projector

from the eye to infinity.
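The counting scheme can be sketched as follows (an illustrative Python fragment, not part of the original algorithm description); the signed crossing values are assumed to have already been gathered along the ray from the eye to the surface point.

def point_in_shadow(volumes_containing_eye, signed_crossings):
    # volumes_containing_eye: number of shadow volumes containing the eye,
    # computed once per viewpoint.
    counter = volumes_containing_eye
    for value in signed_crossings:   # +1 for each front-facing shadow polygon,
        counter += value             # -1 for each back-facing one
    return counter > 0               # positive counter => the point is in shadow

# Example: eye outside all volumes; the ray crosses two front-facing polygons.
print(point_in_shadow(0, [+1, +1]))  # True: the point is shadowed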

Although it is possible to compute a shadow volume for each polygon, we can take

advantage of object coherence by computing a single shadow volume for each

connected polyhedron. This can be accomplished by generating shadow polygons

from only those edges that are silhouette edges relative to the loght source, these

are the contour edges relative to the light source.


Multiple light sources can be handled by building a separate set of shadow volumes for each light source, marking each volume's shadow polygons with their light source identifier, and keeping a separate counter for each light source.

11.2.2.4. A Two-Pass Z-Buffer Shadow Algorithm:

Williams developed a shadow generation method based on two passes through a Z-buffer algorithm, one for the viewer and one for the light source. His algorithm, unlike the two-pass object-precision algorithm, determines whether a surface is shadowed using image-precision calculations. The algorithm begins by calculating and storing just the Z-buffer for the image from the viewpoint of the light, in which increasing values represent increasing distance. Next, the Z-buffer and the image are calculated from the viewpoint of the observer using a Z-buffer algorithm with the following modification. Whenever a pixel is determined to be visible, its object-precision coordinates in the observer's view are transformed into coordinates in the light source's view. The transformed coordinates are used to select the value Z_L in the light source's Z-buffer to be compared with the transformed depth value Z_O. If Z_L is closer to the light than is Z_O, then there is something blocking the light from the point, and the pixel is shaded as being in shadow. Otherwise the point is visible from the light and it is shaded as lit.
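As an illustration only (the names below are assumptions, not Williams's code), the per-pixel test can be sketched in Python, where z_light is the light-source Z-buffer computed in the first pass and to_light_coords transforms a visible point into the light's view:

def shaded_as_lit(point, z_light, to_light_coords, bias=1e-3):
    i, j, z_o = to_light_coords(point)   # pixel indices and depth in the light's view
    z_l = z_light[i][j]                  # depth of the closest surface seen by the light
    # If something lies closer to the light than this point, the point is shadowed;
    # a small bias guards against spurious self-shadowing from limited precision.
    return z_l + bias >= z_o             # larger z is assumed to mean farther away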

Like the regular Z-buffer visible surface algorithm, this algorithm requires that each

rendered pixel be shaded. Here, this means that shadow calculations must be

performed for the pixel, even if it is ultimately painted over by closer objects. Williams

has suggested a variation on his algorithm that exploits the ease with which the Z-

buffer algorithm can interleave visible-surface determination with illumination and

shading, and eliminates shadow calculations for obscured objects. Rather than

computing just the shadow map first, the modified algorithm also computes the

regular shaded image from the observer’s point of view.

Unlike the other shadow algorithms, Williams's algorithm makes it especially easy to generate shadows for any object that can be scan-converted, including curved

surfaces.

11.3 TEXTURE MAPPING:

In computer graphics, the surface detail in an object is called texture. The

introduction of textures into rendering techniques has been made by applying two-dimensional textures to a 3D object to gain realism. Two aspects of texture are


generally considered. The first is the addition of a separately specified pattern to a

smooth surface. After the pattern is added the surface still appears smooth. The

second is adding the appearance of roughness to the surface. Adding the appearance

of roughness to a surface is basically achieved with a perturbation function.

Since the basis of adding texture patterns to a smooth surface is mapping, the texture problem reduces to a transformation from one coordinate system to another. If the texture pattern is defined in an orthogonal coordinate system (m, n) in texture space, and the surface in a second orthogonal coordinate system (θ, φ), then adding the texture pattern to the surface involves determining or specifying a mapping function between the two spaces such that

θ = f(m, n)    φ = g(m, n)

or alternately

m = r(θ, φ)    n = s(θ, φ)

Although the mapping function need not be linear, a simple choice is the linear mapping

θ = Am + B    φ = Cn + D

where A, B, C, D are constants obtained from the relationship between known points in the two coordinate systems.
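A small sketch of this linear mapping in Python; the corner correspondences used to derive A, B, C, D below are illustrative assumptions, not values from the text.

def linear_coeffs(m0, m1, t0, t1):
    # Solve t = A*m + B from two known corresponding points.
    A = (t1 - t0) / (m1 - m0)
    return A, t0 - A * m0

A, B = linear_coeffs(0.0, 1.0, 0.0, 3.14159)   # texture m in [0,1] -> theta
C, D = linear_coeffs(0.0, 1.0, -1.0, 1.0)      # texture n in [0,1] -> phi

def map_texture(m, n):
    return A * m + B, C * n + D                # (theta, phi) on the surface

print(map_texture(0.5, 0.5))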

Fig 11.5 Mapping

The texture pattern shown in fig 11.5 above is a simple mathematical definition.

Other sources of texture might be hand-drawn artwork or scanned-in photographs or

other patterns.

The texture mapping techniques are

Texture Subdivision Algorithm.

Texture by Inverse Pixel Mapping.

CHAPTER 12: ANIMATION

Computer animation generally refers to any time sequence of visual changes in a

scene. In addition to changing object position, it also includes time-varying changes in shape, color, transparency, structure and texture, and even changes of the


rendering technique. Changing camera position, orientation and focal length can also

generate computer animation.

12.1 Types of Animation:

Depending upon the usage of the computer, animation techniques are classified into two types. They are:

Conventional animation.

Computer animation.

12.1.1 Conventional Animation:

When creating an animated movie or picture, first a script for the story is written, and then a sequence of pictures is drawn to show the key moments in the story. This sequence of pictures is called the storyboard. Once the storyboard is created, the actual animation process begins. It involves hundreds or thousands of individual pictures called frames, of which the most important ones, defining the main positions in the action, are the key frames. These frames are recorded on videotape.

The final animation is achieved by filling the gaps between adjacent key frames with in-between frames. In-between frames are easier to produce than key frames.

12.1.2 Computer Assisted Animation:

Many stages of conventional animation seem ideally suited to computer assistance, especially inbetweening and coloring, which can be done using seed-filling techniques.

Before the computer can be used, the drawings must be digitized. This can be done

by using optical scanning, by tracing the drawings with a data tablet, or by producing

the original drawings with a drawing program in the first place.

12.2 Applications of Animation:

Animation is widely used in the

Entertainment industry.

Education.

Advertising.

Movies and Military applications.

Industrial applications such as control systems and flight simulators and in

scientific research.

Computer animation can be used to make very exciting and fun videos into which education can easily be incorporated. It is much more interesting to learn using nice letters and colorful images on the screen instead of using white paper and pen.

Subjects such as science, English, music and art can also be taught by using

computer animation. Instructors can also use computer animation to demonstrate

things visually, exactly as they want to present them. It can be used to show how things

come together and work together.


One of the most popular applications of computer animation is in advertising. Using computer animation, objects are modeled according to requirements and incorporated with live video; in most cases, advertising needs such media to be inserted. Sometimes the level of detail in the animation is so high that viewers cannot tell whether what they are seeing is real or not. Computer animation has also become regular and popular in special effects. Movies such as Jurassic Park, Terminator 2 and The Lord of the Rings have brought computer animation to a new level in their films. Scale models are a fast and cost-effective method for creating large alien scenes, but animation has done just as well in animating fire, smoke, humans and explosions.

Usually computer animation is only used when the scene needed would be

impossible or very difficult to create without it.

12.3 Animation Languages:

There are many different languages for describing animation and new languages are

constantly being developed. All these languages fall into three categories.

1. Linear list notations.

2. General-purpose languages.

3. Graphical languages.

12.3.1 Linear list notations:

In this category of languages, a starting and ending frame number and an action that is to take place describe each event in the animation. An example of an action is

23, 33, A ROTATE POLYGON, 1, 20

The above action indicates that between frames 23 and 33, the object POLYGON is to be rotated about axis 1 by 20 degrees, determining the amount of rotation at each frame from table A. Thus the actions are given interpolation methods to use and objects to act on as well. If two actions are applied to the same object at the same time, the order may matter: rotating 90° about X and then 90° about Y is different from rotating 90° about Y and then 90° about X.
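A possible interpretation of this event in Python (a sketch only; a simple linear interpolation stands in for table A):

def rotation_at_frame(frame, start=23, end=33, total_deg=20.0):
    # Cumulative rotation of POLYGON about axis 1 by the given frame.
    if frame <= start:
        return 0.0
    if frame >= end:
        return total_deg
    t = (frame - start) / float(end - start)   # table "A": linear ease
    return total_deg * t

for f in range(23, 34):
    print(f, rotation_at_frame(f))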

12.3.2 General-purpose languages:

In general-purpose languages, the values of variables in the language can be used as parameters to whatever routines actually generate the animation, so the language can be used to generate simulations that then produce animations as a side effect. Such languages have great potential, but most of them require considerable programming expertise on the part of the user. Such systems can use the constructs of the surrounding language to create concise routines that have complex effects. ASAS is an example of such a language. It is built on top of


LISP, and its primitive entities include vectors, colors, polygons, solids, groups, points of view and lights.

The following is an ASAS program fragment describing an animated sequence in which an object my-polygon is spun while the camera pans.

(Grasp my-polygon); The polygon becomes current object

(CW 0.05); Spin it clockwise by a small amount

(grasp camera); Make the camera the current object

(right panning speed); Move it to the right

In the above routine anything following a semicolon is a comment. This fragment is

evaluated at each frame in order to generate the entire sequence.

12.3.3 Graphical Languages:

Graphical animation languages describe animation in a more visual way. These languages are used for expressing, editing and comprehending the simultaneous changes taking place in an animation. The principal notion in such languages is the substitution of a visual paradigm for a textual one: rather than explicitly writing descriptions of actions, the animator provides a picture of the action. Baecker introduced p-curves in the GENESYS animation system, an early effort in this area. A p-curve is a parametric representation of the motion of an object or assembly of objects within a scene. The animator describes an object's path of motion by graphing its coordinates as functions of time. This is demonstrated in figure 12.1 below.

Figure 12.1(a) shows a motion path in the plane. Figure 12.1(b) shows the path's x and y components as functions of time. The curves in figure 12.1(b) uniquely determine the curve in figure 12.1(a), but the reverse is not true: one can traverse the path in figure 12.1(a) at different speeds. By marking the path in figure 12.1(a) to indicate constant time steps, we can convey the time dependence of the path as shown in figure 12.1(c), which is called a p-curve. Figure 12.1(c) can be constructed as shown in figure 12.1(d) by graphing the x and y components as functions of t on coordinate systems that are rotated 90° from each other, and then drawing lines to connect corresponding time points. Thus editing the components of the


parametric curve induces changes in the p-curve, and editing the placement of hash marks on the p-curve induces changes in the components.

Fig 12.1 (a) A motion path in the plane; (b) its x and y components as functions of time; (c) the p-curve with constant-time marks; (d) construction of the p-curve from the component curves.
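A toy sketch of a p-curve in Python: the component curves x(t) and y(t) below are illustrative assumptions, and the constant-time samples play the role of the hash marks that convey speed along the path.

import math

def x_of(t): return math.cos(t)          # illustrative x component curve
def y_of(t): return math.sin(2.0 * t)    # illustrative y component curve

# Sample the path at constant time steps, as the hash marks on a p-curve do.
for k in range(9):
    t = 0.25 * k
    print("t=%.2f  x=%.2f  y=%.2f" % (t, x_of(t), y_of(t)))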

12.4 Methods of controlling animation:

Controlling an animation is somewhat independent of the language used for

describing animation, but most of the control mechanisms can be used with

various types of languages.

Procedural control.

Full Explicit control.

Constraint Based systems.

Tracking Live Action.

Actors.

Kinematics and Dynamics.

Procedural Control:

Procedural control is ideally suited to controlling animation. In this approach, procedural modeling techniques are used. Reeves and Blau modeled both grass and wind in this way, using a particle-system modeling technique. The wind particles evolved over time in the production of the animation, and the positions of the grass were affected by aspects of other objects in the scene. This sort of procedural interaction among objects can be used to generate motions that would be difficult to specify through explicit control. Unfortunately, it requires that the animator be a programmer.

Procedural control is a significant aspect of several other control mechanisms.

Full Explicit control:


Explicit control is the simplest sort of animation control. Here the animator provides a description of everything that occurs in the animation, either by specifying simple changes such as scaling, translation and rotation, or by providing key-frame information and interpolation methods to use between key frames. This interpolation may be specified explicitly or by direct manipulation with a mouse, joystick, data glove or other input device.
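As a sketch of explicit key-frame control (the frame numbers and positions below are invented for illustration), in-between frames can be generated by linear interpolation between the animator's key frames:

keys = {0: (0.0, 0.0), 10: (5.0, 2.0), 20: (5.0, 8.0)}   # frame -> key position

def position_at(frame):
    frames = sorted(keys)
    for a, b in zip(frames, frames[1:]):
        if a <= frame <= b:
            t = (frame - a) / float(b - a)
            (xa, ya), (xb, yb) = keys[a], keys[b]
            return (xa + t * (xb - xa), ya + t * (yb - ya))   # linear in-between
    return keys[frames[-1]]                                    # hold the last key frame

for f in (0, 5, 10, 15, 20):
    print(f, position_at(f))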

Constraint-Based systems:

Some objects in the physical world move in straight lines, but a great many move in a manner determined by the other objects with which they are in contact, and this compound motion may not be linear at all. For example, a ball rolls down an inclined plane. If gravity were the only force acting on the ball, the ball would fall straight down. But the plane is also pushing up and sideways, and so the ball rolls down the plane rather than passing through it. We can model such motion by constraints. The ball is constrained to lie on one side of the plane. If it is dropped from a height, it strikes the plane. In a similar way, a pendulum swings from a pivot, which is a point constraint.

Specifying an animated sequence using constraints is often much easier than specifying it using direct control. When physical forces define the constraints, we move into the realm of physically based modeling, especially when the dynamics of the objects are incorporated into the model. Constraints on a linkage are used to define its possible positions, as shown in figure 12.2(a). We can view an animation of the linkage by changing it in a simple way. In the figure, for example, the animator can generate an animation of the linkage just by rotating the drive wheel, as shown in figures 12.2(b), (c) and (d).

Fig 12.2 (a) The linkage is moved by rotating the drive wheel; (b), (c), (d) constraints generate the motions.


The first system to use constraint-based animation of this sort was Sutherland's Sketchpad system. It allowed the user to generate parts of an assembly in the same way as 2D drawing programs do today. The parts of an assembly could be constrained by point constraints, linkage constraints or angular constraints. This allowed the user to draw four lines in a quadrilateral, to put linkage constraints on the corners, to put a point constraint on one corner, and to put angular constraints on opposite sides to make them parallel. This generated a parallelogram with one corner fixed. Constraints were satisfied by a relaxation technique in which the assembly was moved so as to satisfy the constraints gradually. Of course, it is possible to over-constrain a system, by requiring, for example, that a line have a length of one unit but that its ends be joined to two points that are three units apart. The constraints in Sketchpad are described by giving an error function whose value is 0 when a constraint is satisfied and positive otherwise. Relaxation attempts to make the sum of these functions 0; when it fails, many constraints may be unsatisfied. In this case, the relaxation technique finds one solution that is close to the initial configuration.
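The relaxation idea can be sketched numerically as below (a toy Python example, not Sketchpad's actual method): each constraint contributes an error that is 0 when satisfied, and the coordinates are nudged repeatedly to reduce the total error.

def relax(coords, error_funcs, step=0.01, iters=2000, h=1e-4):
    x = list(coords)
    for _ in range(iters):
        for i in range(len(x)):
            e0 = sum(e(x) for e in error_funcs)
            x[i] += h
            e1 = sum(e(x) for e in error_funcs)
            x[i] -= h
            x[i] -= step * (e1 - e0) / h     # move against the numerical gradient
    return x

# Example: keep a point at distance 1 from the origin and on the line y = 0.5.
errors = [lambda p: (p[0] ** 2 + p[1] ** 2 - 1.0) ** 2,
          lambda p: (p[1] - 0.5) ** 2]
print(relax([1.0, 1.0], errors))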

Tracking Live Action:

Trajectories of objects in the course of animation can also be generated by tracking

of live action. There are a number of methods for doing this tracking. Traditional animation

has used rotoscoping. A film is made in which people act out the parts of the

characters in the animation, then animators draw over the film, enhancing the

backgrounds and replacing the human actors with their animation equivalents. This

technique provides exceptionally realistic motion.

Actors:

The use of actors is a high-level form of procedural control. An actor in an animation is a small program invoked once per frame to determine the characteristics of some object in the animation. An actor, in the course of its once-per-frame execution, may send messages to other actors to control their behaviors.
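A tiny sketch of the actor idea in Python (the message name and behaviour are invented for illustration): each actor is updated once per frame and may post messages to other actors.

class Actor:
    def __init__(self, name):
        self.name, self.inbox, self.x = name, [], 0.0
    def send(self, other, message):
        other.inbox.append(message)
    def update(self, frame):
        # Called once per frame: react to received messages, then advance normally.
        for message in self.inbox:
            if message == "speed_up":
                self.x += 1.0
        self.inbox = []
        self.x += 0.1

leader, follower = Actor("leader"), Actor("follower")
for frame in range(5):
    leader.update(frame)
    if frame == 2:
        leader.send(follower, "speed_up")   # one actor influences another's behaviour
    follower.update(frame)
    print(frame, follower.x)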

Kinematics of controlling animation:

In order to enter the military, one has to go through a lot of training. Depending on

whether you want to be in the army, navy or marines, you might be working with

equipment worth hundreds of thousands or even millions of dollars. The military

wants to be sure you know how to use this equipment before they actually let you

use it. Computer animation can also be used to simulate the landscape in which an

operation will be going on. Satellite altitude data can be converted into a 3D model using software and then animated with trees and different weather conditions.


12.5 Basic levels of Animation:

1. Squash and Stretch: This is used to introduce the physical properties of an object by distortions of shape. A rubber ball or a ball of putty both distort when dropped on the floor. A bouncing rubber ball might be shown as elongating as it approaches the floor, flattening out when it hits, and then elongating again as it rises.

2. Slow-in and Slow-out: This helps to smooth interpolation. Sudden, jerky motions are extremely distracting. This is particularly evident in interpolating the camera position. An audience viewing an animation identifies with the camera view, so sudden changes in camera position may make the audience feel motion sickness. Thus, camera motion should be as smooth as possible.

3. Applying the principles of 2D character animation to 3D animation.

12.6 Problems peculiar to Animation:

In moving from two-dimensional graphics to three-dimensional graphics we encountered many new problems. In animation, further special problems are encountered. They are

Temporal aliasing.

Rendering.

The temporal aliasing problems in animation can be addressed by temporal antialiasing. Temporal antialiasing can be done by taking multiple samples of a signal and computing their weighted average. In this case, however, the multiple samples must be in the time direction rather than in the spatial direction, so we compute the intensity at a point in the image for several sequential times and weight these to get a value for a particular frame. Many approaches to temporal aliasing problems have been developed: supersampling, box filtering in the time domain, and all the other tricks from spatial antialiasing have been applied.
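A minimal sketch of this weighted time-averaging in Python; intensity_at(t) stands in for rendering the pixel at sub-frame time t, and the box-filter weights are an assumption.

def temporal_antialias(intensity_at, frame_time, n_samples=5, weights=None):
    if weights is None:
        weights = [1.0 / n_samples] * n_samples              # box filter in time
    times = [frame_time + (k + 0.5) / n_samples for k in range(n_samples)]
    # Weighted average of several samples taken in the time direction.
    return sum(w * intensity_at(t) for w, t in zip(weights, times))

# Example: a fast-moving edge covering the pixel only during part of frame 10.
print(temporal_antialias(lambda t: 1.0 if (t % 1.0) < 0.5 else 0.0, 10.0))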

In 2D and 3D, aliasing problems are partially solved by increasing the screen

resolution.


OBJECTIVE QUESTION BANK

UNIT-1

1. Staircase effect or jagged effect obtained in a line displayed on the screen is due to the phenomenon __________. [ ]
A) Aliasing  B) Anti-aliasing  C) Compression  D) Inversion

2. Replacing an interior region of a polygon with a specified color is done by _____ filling. [ ]
A) Boundary  B) Scan-line  C) Flood  D) None

3. Bresenham's circle drawing algorithm plots points using ____ symmetry. [ ]
A) 8-point  B) 4-point  C) 2-point  D) 16-point

4. Bresenham's ellipse drawing algorithm plots points using ______ symmetry. [ ]


A) 8-point  B) 4-point  C) 2-point  D) 16-point

5. A triangle is a _________ type of polygon. [ ]
A) Hybrid  B) Concave  C) Convex  D) None

6. Even-Odd and Winding number methods always produce the same results. [ ]
A) True  B) False  C) Cannot say  D) None

7. Write the circle equation with center (Xc,Yc) and radius r. [ ]
A) (X-Xc)²+(Y-Yc)²=r²  B) (X-Xc)²+r²=(Y-Yc)²  C) (Y-Yc)²+r²=(X-Xc)²  D) None

8. The _________ method creates characters out of a series of line segments. [ ]
A) Stroke  B) Bitmap  C) Aliasing  D) None

9. By considering the symmetry of circles, computation can be _______. [ ]
A) Reduced  B) Increased  C) Remains constant  D) None

10. Lines have a _______ appearance when they step from one row or column to the next. [ ]
A) Jagged  B) Straight  C) Curved  D) None

11. A __________ is a polygon in which the line segment joining any two points within the polygon lies completely inside the polygon. [ ]
A) Convex  B) Concave  C) Both  D) None

12. A ____________ is a polygon in which the line segment joining any two points within the polygon may not lie completely inside the polygon. [ ]
A) Single-layer phosphor  B) Multi-layer phosphor  C) RGB color mixing  D) None

13. Beam penetration CRT generates color displays by using __________. [ ]
A) Polygon drawing primitive  B) Trapezoid primitive  C) Line and points  D) None

14. What are the seed fill algorithms? [ ]
A) Boundary fill  B) Flood fill  C) Both  D) None

15. The aspect ratio of a monitor is defined as [ ]
A) X:Y  B) Y:X


C) X*Y  D) None

16. A lossy image compression algorithm will ___________ the no. of pixels in the original image. [ ]
A) Increase  B) Decrease  C) Not alter  D) None

17. Using halftoning, the no. of intensity levels possible for a square of size 'n' are [ ]
A) 2n  B) 2^n  C) 2n+1  D) 2^n+1

18. _________ displays are devices that convert electrical energy into light. [ ]
A) Emissive  B) Non-emissive  C) Both  D) None

19. The point halfway between the x-axis and halfway between the y-axis is the _________. [ ]
A) Line  B) Mid point  C) Point  D) None

20. ___________ is the smallest addressable point on the screen. [ ]
A) Pixel  B) Frame buffer  C) Both  D) None

UNIT-2

1. The method for selecting and enlarging portions of a drawing is called ______________ [ ]
A) Clipping  B) Windowing  C) Viewport  D) None

2. The technique of not showing that part of the drawing which one is not interested in is called __________________ [ ]
A) Clipping  B) Windowing  C) Viewport  D) None

3. An area on a display device to which a window is mapped is called ________ [ ]
A) Clipping  B) Transformation  C) Viewport  D) None


4. Any procedure that identifies portions of a picture that are either inside or outside of a specified region of space is called _______ [ ]
A) Clipping  B) Transformation  C) Viewport  D) None

5. Which of the following is not a primitive type of clipping? [ ]
A) Line clipping  B) Polygon clipping  C) Curve clipping  D) None

6. Which of the following is a line clipping algorithm? [ ]
A) Cohen-Sutherland  B) Sutherland-Hodgeman  C) Point Source  D) None

7. Which of the following is a polygon clipping algorithm? [ ]
A) Cohen-Sutherland  B) Sutherland-Hodgeman  C) Point Source  D) None

8. In the Cohen-Sutherland algorithm every line end point in a picture is assigned a four-digit binary code called a ___________________ [ ]
A) Line code  B) Region code  C) Window code  D) None

9. An area on a display device to which a window is mapped is called __________ [ ]
A) Viewport  B) Segment  C) Window  D) None

10. The region in which an object is clipped is called ________________ [ ]
A) Clip window  B) Viewport  C) Visible segment  D) None

11. Undoing the effect of a transformation is done by the ___________ transformation. [ ]
A) Shear  B) Scale  C) Translation  D) Inverse

12. Reflection of a point (x,y) about the origin transforms the point to [ ]
A) (y,x)  B) (-x,-y)  C) (-y,-x)  D) (x,y)

13. _______ is the process of changing the position of an object. [ ]
A) Translation  B) Rotation  C) Scaling  D) All

14. ________ is the process of expanding or compressing the dimensions of an object. [ ]
A) Translation  B) Rotation  C) Scaling  D) All

15. For rotation about an arbitrary point, the following steps are done. [ ]
A) T1*R*T2  B) T1*S*T2  C) T1*R*R  D) T1*T2*R

16. A ______ is a transformation that produces a mirror image of an object. [ ]
A) Shear  B) Reflection  C) Both  D) None


17. A transformation that slants the shape of an object is called the _______. [ ]
A) Shear  B) Reflection  C) Both  D) None

18. Increasing or decreasing the magnification of the displayed objects is known as ______. [ ]
A) Shear  B) Reflection  C) Zooming  D) None

19. Zooming can be performed dynamically using the _____. [ ]
A) Keyboard  B) Mouse  C) Both  D) None

20. The _____ shear preserves the x coordinates but changes the y values. [ ]
A) y  B) x  C) xy  D) None

UNIT-3

1. The technique of transforming a three-dimensional object onto a two-dimensional screen is called ____________________ [ ]
A) Projection  B) Transformation  C) Rotation  D) None

2. Which of the following is not a 3D primitive? [ ]
A) MOVE_ABS_3(x,y,z)  B) MOVE_REL_3(dx,dy,dz)  C) POLYGON_REL_3(ax,ay,az,n)  D) None

3. Which of the following is not a basic 3D transformation? [ ]
A) Translation  B) Scaling  C) Rotation  D) None

4. If the direction of projection is not parallel to the view plane normal then it is called ________________________ [ ]
A) Cavalier projection  B) Oblique projection  C) Cabinet projection  D) None

5. A projection in which all the three principal axes are equally shortened is called __________________________________ [ ]
A) Isometric  B) Trimetric  C) Dimetric  D) None


6. A projection in which none of the three edge directions is equally shortened is called ______________________ [ ]
A) Isometric  B) Trimetric  C) Dimetric  D) None

7. Which of the following algorithms uses both image space and object space operations? [ ]
A) Back face removal algorithm  B) Z-buffer algorithm  C) Warnock's algorithm  D) None

8. Which of the following algorithms uses both image space and object space operations? [ ]
A) Scan line algorithm  B) Painter's algorithm  C) Z-buffer algorithm  D) None

9. The technique of linearly interpolating intensity values across the polygon surface is called _____________________ [ ]
A) Interpolated shading  B) Gouraud shading  C) Constant shading  D) None

10. A parallel projection preserves _______________ of objects. [ ]
A) Elevation  B) Relative proportion  C) Orthogonal proportion  D) None

11. Orthogonal projections are often used to produce the front view, side view and ________ of an object. [ ]
A) Plane view  B) Top view  C) Bottom view  D) None

12. The projection often used in drafting is called _________________ [ ]
A) Perspective projection  B) Parallel projection  C) Axonometric projection  D) None

13. _______________ perspective projection occurs when one of the faces of a rectangular object is parallel to the view plane. [ ]
A) One-point  B) Two-point  C) Three-point  D) None

14. The Z-buffer algorithm is also known as ______________________ [ ]
A) Warnock's  B) Area subdivision  C) Depth buffer  D) None

15. The painter's algorithm is also known as _______________________ [ ]
A) Area subdivision  B) Warnock's  C) Depth sorting  D) None

16. Surface rendering procedures are termed __________ methods. [ ]
A) Shading  B) Illumination  C) Halftoning  D) None

17. A fast and simple method for rendering an object with polygon surfaces is _______________________ [ ]
A) Constant intensity shading  B) Gouraud shading  C) Interpolated shading  D) None

18. The back face detection and removal algorithm is sufficient for ______ objects. [ ]
A) Single concave  B) Single convex  C) Both  D) None

19. In the ________ algorithm, the polygons are sorted for display in back-to-front order. [ ]
A) BSP  B) Painter's  C) Warnock's  D) None

20. For determining the relevancy and relative position of two polygons, the __ technique can be used. [ ]
A) Minimax  B) Boxing test  C) Both  D) None


UNIT-4

1. The convex polygon boundary that encloses a set of control points is called _____________________ [ ]
A) Spline curve  B) Convex hull  C) Clip boundary  D) None

2. The ________________ function tells how hard the ith sample point is pulling the curve in the interpolation process. [ ]
A) Control function  B) Blending function  C) Parametric function  D) None

3. An interpolating piecewise cubic polynomial with a specified tangent at each control point is called ___________________ [ ]
A) Hermite spline  B) Bezier spline  C) Continuous spline  D) None

4. A flexible strip used to produce a smooth curve through a designated set of points is called ____________________ [ ]
A) Control point  B) Spline  C) Convex hull  D) None

5. A spline curve is specified by giving a set of coordinate positions called _________________ [ ]
A) Isolating points  B) Control points  C) Blending function  D) None

6. The general shape of the curve is indicated by __________________ [ ]
A) Isolating points  B) Control points  C) Blending function  D) None

7. A spline curve is defined, modified and manipulated with operations on _________________ [ ]


A) Isolating points  B) Control points  C) Blending function  D) None

8. ____________ splines are dependent only on their end point constraints. [ ]
A) Hermite  B) Bezier  C) Integrated  D) None

9. A Bezier curve can be specified with boundary conditions, with a characterizing matrix or with __________________________ [ ]
A) Isolating points  B) Control points  C) Blending function  D) None

10. A very useful property of a Bezier curve is that it always passes through the __________________ control points. [ ]
A) Middle  B) First and last  C) All  D) None

11. _____________________ allow local control over the spline curve or surface. [ ]
A) Hermite splines  B) B-splines  C) Bezier splines  D) None

12. The ____________________ property of a Bezier curve eliminates erratic oscillations of the polygon. [ ]
A) Convex hull  B) Dependence on control points  C) Blending function  D) None

13. A Bezier curve is a polynomial of degree one less than the no. of ________ used. [ ]
A) Blending functions  B) Planes  C) Control points  D) None

14. Cubic Bezier curves are generated with ____________________ points. [ ]
A) One control  B) Two control  C) Three control  D) Four control

15. For a B-spline curve with n+1 control points, the curve is described with _______________ blending functions. [ ]
A) n  B) n-1  C) n+1  D) None

16. The __________________ instruction fills a rectangular array of pixels with a solid color. [ ]
A) FILL  B) FLINE  C) DRAV  D) PIXBLT

17. Which of the following is a color model? [ ]
A) YIQ  B) CMYK  C) Both (A) & (B)  D) None

18. The method for explaining the properties of color within some particular context is called _________________ [ ]
A) Color model  B) Halftoning  C) Illumination  D) None

19. In the RGB color space red is measured along the ______________ [ ]
A) X-axis  B) Y-axis  C) Z-axis  D) None

20. In the RGB color space blue is measured along the _____________ [ ]
A) X-axis  B) Y-axis  C) Z-axis  D) None


UNIT-5

1. A starting and ending frame number and an action that is to take place describe each event in the ____________ animation programming language. [ ]
A) Linear list notations  B) General-purpose languages  C) Graphical languages  D) None

2. In ____________ languages, the values of variables in the language can be used as parameters to whatever routine actually generates the animation. [ ]
A) Linear list notations  B) General-purpose languages  C) Graphical languages  D) None

3. _____________ languages are used for expressing, editing and comprehending the simultaneous changes taking place in an animation. [ ]
A) Linear list notations  B) General-purpose languages  C) Graphical languages  D) None

4. The _____________ level of animation is used to introduce the physical properties of an object by distortions of shape. [ ]
A) Squash and stretch  B) Slow-in and slow-out  C) Both  D) None

5. The __________ level of animation is used for smooth interpolation. [ ]
A) Squash and stretch  B) Slow-in and slow-out  C) Both  D) None

6. The problems peculiar to animation are ______. [ ]
A) Temporal aliasing  B) Rendering


C) Both  D) None

7. The process of producing realistic images or pictures is called _________. [ ]
A) Rendering  B) Aliasing  C) Both  D) None

8. The central dense, black, sharply defined shadow area is __________. [ ]
A) Umbra  B) Penumbra  C) Both  D) None

9. When the position of the observer and the light source are separate, ________ appear. [ ]
A) Transparency  B) Shadows  C) Both  D) None

10. ________ is an example of a transparent material. [ ]
A) Glass  B) Water  C) Both  D) None

11. _________ planes are self-hidden planes when the scene is viewed from the position of the light source. [ ]
A) Self-shadow  B) Projected shadow  C) Both  D) None

12. In computer graphics, the surface detail in an object is called ________. [ ]
A) Texture  B) Shadow  C) Transparency  D) None

13. _______ transparency treats a polygon as a transparent filter that selectively passes different wavelengths. [ ]
A) Filtered  B) Interpolated  C) Both  D) None

14. __________ systems are specialized animation languages designed simply to generate the in-betweens from user specified key frames. [ ]
A) Key frame  B) Parameterized  C) Both  D) None

15. ___________ systems allow object motion characteristics to be specified as a part of the object definition. [ ]
A) Key frame  B) Parameterized  C) Both  D) None

16. In animation the sequence of pictures is called a ______. [ ]
A) Storyboard  B) Pixel  C) Both  D) None

17. __________ is ideally suited to controlling animation. [ ]
A) Procedural control  B) Full explicit control  C) Both  D) None

18. An ______ in an animation is a small program invoked once per frame to determine the characteristics of some objects in the animation. [ ]
A) Actor  B) Tracking live action  C) Both  D) None


19. In 2D and 3D, aliasing problems are partially solved by increasing the _________. [ ]
A) Resolution  B) Aliasing  C) Both  D) None

20. ________ is the process of creating moving objects. [ ]
A) Animation  B) Rendering  C) Both  D) None

Key:

Unit-1

1) A  2) C  3) A  4) B  5) C  6) A  7) A  8) A  9) A  10) A  11) B  12) B  13) C  14) C  15) B  16) B  17) D  18) A  19) A  20) A

Unit-2

1) B  2) A  3) C  4) A  5) D  6) A  7) B  8) B  9) A  10) A  11) D  12) B  13) A  14) C  15) A  16) B  17) A  18) C  19) B  20) A

Unit-3

1) A  2) D  3) D  4) B  5) A  6) B  7) C  8) B  9) B  10) B  11) B  12) C  13) A  14) C  15) C  16) A  17) A  18) B  19) A  20) C

UNIT-4

1) B  2) B  3) A  4) C  5) C  6) C  7) B  8) A  9) C  10) B  11) B  12) A  13) C  14) D  15) C  16) A  17) C  18) A  19) A  20) C

UNIT-5

1) A  2) B  3) C  4) A  5) B  6) C  7) A  8) A  9) B  10) A  11) A  12) A  13) A  14) A  15) B  16) A  17) B  18) A  19) A  20) A


QUESTION BANK

Unit-1

1. Differentiate between interactive and passive graphics.

2. What is computer graphics? What are the major considerations in the

study of computer graphics?

3. Explain operating characteristics of flat panel display.

4. Compare CRT displays and flat panel display devices.

5. Write about graphic standards.

6. Compare two techniques available with CRT displays.

7. Write advantages of LCD displays.

8. What is the role of the frame buffer in displays.

9. Explain the following.

a) Joystick.

b) Digitizer.

c) Light pen.

10. Explain how computer graphics is used in scientific applications.

11.Write about Emissive and Non Emissive display devices.

12.Explain the features of PHIGS graphics software.

13.Explain the working principle of DVST.

14.Write short notes on plotter device.

15.Explain advantages of Bresenham line generation algorithm.

16.Compare different character generation methods.

17.Explain midpoint circle generation algorithm.

18.What is aliasing and how can it be removed?

19.Write an algorithm to generate an ellipse.

20.Compare circle drawing algorithms.

21.Explain the features of DDA algorithm for line generation.

22.Draw a polygon using the Bresenham line generation algorithm.

23.What are the different methods to fill a polygon.


24.Explain about inside and outside testing.

25.Explain scan-line algorithm for filling a polygon.

Unit-2

1. What are composite transformations?

2. Define shearing transformation.

3. Show how a shear transformation can be expressed in terms of rotation and

scaling.

4. Write the form of the scaling matrix with respect to a fixed point (l,m).

5. Show that the transformation matrix for a reflection about the line y=x is equivalent to a reflection about the x-axis followed by a counterclockwise rotation of 90°.

6. Find out the new coordinates of a polygon bounded by the coordinates (2,2), (3,4), (5,7), (10,3) when rotated about the point (2,2) by 30° in the clockwise direction and scaled by two units in the x-direction and two units in the y-direction.

7. What are the basic transformations?

8. What is the need of homogeneous coordinate system.

9. Explain how transformations are useful in computer graphics.

10. What are composite transformations and how it is useful?

11. Differentiate a view port from a window.

12. Discuss window-to-viewport mapping: given a window (10,20,10,20) and a viewport (60,100,50,80), obtain the viewing transformation matrix.

13. Write about normalization transformation.

14. Explain point clipping and line clipping.

15. Define the terms image spaces and object space.

16. Explain how Sutherland line clipping algorithm is applied for polygon

clipping .

17. Compare Sutherland line clipping algorithm and midpoint subdivision line

clipping algorithm.

18. What are the drawbacks in midpoint subdivision line clipping algorithm.

19. Write short notes on generalized clipping.

20. Explain the use of multiple windowing.

21. What do you mean by interior and exterior clipping?

22. Explain how Sutherland code is constructed for given screen coordinates.


Unit-3

1. Bring out major differences between 2D and 3D graphics.

2. Briefly explain all the viewing parameters while displaying a 3D object on a 2D

screen.

3. Mention the importance of realism in 3D graphics with reference to some

applications.

4. Write about 3D coordinate system.

5. Write transformation matrices for 3D rotation.

6. Explain 3D clipping algorithm.

7. Explain viewing parameters.

8. Explain perspective projection.

9. Explain midpoint subdivision algorithm for 3D clipping.

10. Explain parallel projection and derive transformation matrix.

11. Differentiate parallel projection and perspective projection.

12. Write short notes on one point, two point and three point perspective

projections.

13. Explain the projection of a cube in cavalier and cabinet projections.

14. What are the drawbacks and limitations of the Z-buffer algorithm?

15. What are the advantages of Warnock algorithm?

16. Describe how a hidden line is eliminated in wire frame modeling.

17. Describe any two hidden surface removal algorithms.

18. Describe painters algorithm and Z-buffer algorithm.

19. Explain Warnock algorithm.

20. What is the necessity of removing hidden surface in pictures.

21. What is Gouraud shading and what are the calculations needed for rendering

an object.

22. What are the different shading methods.

23. What are the problems involved with interpolated shading models.

24. What is need of shading model.

25. Explain Lambert’s cosine law.


Unit-4

1. What are the different approaches to generate curves?

2. Why cubic polynomials?

3. What are Bezier curves and explain its properties.

4. What are B-spline curves? Write their properties.

5. Explain the procedure for drawing B-spline curves.

6. Explain problems in true curve generation algorithm.

7. What is the need of a cubic curve?

8. Give the various methods of specifying a spline curve.

9. Write short notes on the following:

(a) Hermite curves (b) B-spline curves (c) Bezier curves

10. Determine the blending functions for uniform B-splines.

11. Explain parametric bicubic surfaces.

12. Explain the halftoning technique.

13. Write about the chromaticity diagram.

14. Explain what color models are.

15. Explain hardware-oriented color models.

16. Explain the procedure to convert the RGB color model to HLS.

17. Explain color reproduction.

18. What are the terms for describing color?

19. Derive an expression for converting HLS color values to RGB color values.

20. Explain tristimulus color theory.

UNIT-5


1. What is rendering and what are its applications?

2. Define shadows and write its applications.

3. Define transparency and explain types of transparency.

4. Explain self-shadows and projected shadows.

5. Explain shadow algorithms.

6. Define texture.

7. Explain Transparency and its types.

8. What is a shadow? And what are its applications.

9. Explain scan line shadow algorithm.

10. What are different types of shadows?

11. Explain shadow volumes.

12. What are textures?

13. Explain texture-mapping techniques.

14. How texture mapping is used to modify or replace different material

properties.

15. What is animation and what are different types of animation?

16. Explain different applications of animation.

17. What are the languages for describing animation?

18. What are the different methods of controlling animation?

19. Explain the problems in animation.

20. What is Squash and Stretch?

21. How animation is used in films. Explain briefly.

22. Explain what the rules in animation are.

23. What is temporal aliasing?

24. Explain the term “Kinematics of controlling animation”.

25. Design a storyboard layout and accompanying key frames for an animation of

a single polyhedron.

26. Write a morphing program to transform a sphere into a specified polyhedron.
