
CAO Assignment

Dec 30, 2015

Abhishek Dixit

Computer Architecture Important Questions and Answers

Q. 1. What do you mean by memory hierarchy ? Briefly discuss.

Ans. Memory is technically any form of electronic storage. Personal computer systems have a hierarchical memory structure consisting of auxiliary memory (disks), main memory (DRAM) and cache memory (SRAM). A design objective of computer system architects is to have the memory hierarchy work as though it consisted entirely of the fastest memory type in the system.

        

Q. 2. What is Cache memory?  

Ans. Cache memory: If the active portions of the program and data are stored in a fast, small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast small memory is referred to as cache memory. It is placed between the CPU and main memory, as shown in the figure.

Q. 3. What do you mean by interleaved memory?


Ans. The memory is partitioned into a number of modules connected to common memory address and data buses. A memory module is a memory array together with its own address and data registers. The figure shows a memory unit with four modules.

                        

Q. 4. How many memory chips of 128 x 8 are needed to provide a memory capacity of 4096 x 16?

Ans. The required memory capacity is 4096 x 16.

Each chip is 128 x 8.

Number of 128 x 8 chips needed for a 4096 x 16 memory = (4096 / 128) x (16 / 8) = 32 x 2 = 64 chips.
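
The same arithmetic can be checked with a few lines of Python; this is just an illustrative sketch, with the memory and chip dimensions taken from the question above.

    def chips_required(mem_words, mem_width, chip_words, chip_width):
        """Number of chip_words x chip_width chips needed to build a mem_words x mem_width memory."""
        chips_for_depth = mem_words // chip_words    # chips stacked to cover all 4096 addresses
        chips_for_width = mem_width // chip_width    # chips side by side to cover the 16-bit word
        return chips_for_depth * chips_for_width

    # 4096 x 16 memory built from 128 x 8 chips -> 32 x 2 = 64 chips
    print(chips_required(4096, 16, 128, 8))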

        

Q. 5. Explain about main memory.


Ans. RAM is used as the main memory or primary memory in the computer. This memory is mainly used by the CPU, so it is termed primary memory; RAM is also referred to as the primary memory of the computer. RAM is volatile memory because its contents are erased after the electrical power is switched off. ROM also comes under the category of primary memory. ROM is non-volatile memory; its contents are retained even after the electrical power is switched off. ROM is read-only memory and RAM is read-write memory. Primary memory is high-speed memory. It can be accessed immediately and randomly.

Q 6. What is meant by DMA?

Ans. DMA: The transfer of data between a fast storage device such as a magnetic disk and memory is limited by the speed of the CPU. Removing the CPU from the path and letting the peripheral device manage the memory buses directly would improve the speed of transfer. This transfer technique is called Direct Memory Access (DMA). During a DMA transfer, the CPU is idle and has no control of the memory buses. A DMA controller takes over the buses to manage the transfer directly between the I/O device and memory.

Q. 7. Write about DMA transfer.

Ans. The DMA controller sits among the other components in a computer system. The CPU communicates with the DMA through the address and data buses as with any interface unit. The DMA has its own address, which activates the DS (DMA select) and RS (register select) lines. The CPU initializes the DMA through the data bus. Once the DMA receives the start control command, it can start the transfer between the peripheral device and memory.


Q. 8. Explain about interleaved memory.

Ans. The memory is partitioned into a number of modules connected to common memory address and data buses. A memory module is a memory array together with its own address and data registers. Each memory array has its own address register and data register. The address registers receive information from a common address bus and the data registers communicate with a bidirectional data bus. The two least significant bits of the address can be used to distinguish between the four modules. The modular system permits one module to initiate a memory access while other modules are in the process of reading or writing a word, and each module can honor a memory request independent of the state of the other modules.
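
The module-selection rule described above can be sketched in a few lines of Python; the four-module count is carried over from the example, and low-order interleaving is assumed.

    NUM_MODULES = 4   # four-module memory unit, as in the example above

    def interleaved_location(address):
        """Return (module, offset) under low-order interleaving."""
        module = address % NUM_MODULES    # two least significant bits select the module
        offset = address // NUM_MODULES   # remaining bits select the word inside the module
        return module, offset

    # Consecutive addresses fall in different modules, so their accesses can overlap.
    for addr in range(8):
        print(addr, interleaved_location(addr))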

Q. 9. Differentiate between direct mapping and associative mapping.

Ans. Direct mapping: The direct-mapped cache is the simplest form of cache and the easiest to check for a hit. Since there is only one possible place that any memory location can be cached, there is nothing to search. The line either contains the memory information the processor is looking for, or it does not.

Associative mapping: An associative cache is a content-addressable memory. The cache memory is not accessed by its address; instead it is accessed using its contents. Each line of cache memory accommodates the address and the contents of that address from main memory. A whole block of data is transferred to cache memory instead of the contents of a single memory location from main memory.
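
As a rough sketch of how a direct-mapped lookup splits an address, the snippet below uses an assumed 16 KB cache with 32-byte lines; the sizes are illustrative, not taken from the question.

    LINE_SIZE = 32                        # bytes per cache line (assumed)
    NUM_LINES = 16 * 1024 // LINE_SIZE    # 512 lines in an assumed 16 KB direct-mapped cache

    def split_address(address):
        """Split a byte address into (tag, index, offset) for a direct-mapped cache."""
        offset = address % LINE_SIZE                  # byte within the line
        index = (address // LINE_SIZE) % NUM_LINES    # the one line that can hold this block
        tag = address // (LINE_SIZE * NUM_LINES)      # compared against the stored tag on lookup
        return tag, index, offset

    print(split_address(0x1A2B3C))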


Q 10. Define the terms: Seek time, Rotational Delay, Access time.

Ans. Seek time: Seek time is the time in which the drive can position its read/write heads over any particular data track. Seek time varies for different accesses on the disk, so it is usually quoted as an average seek time. Seek time is always measured in milliseconds (ms).

Rotational delay: All drives have rotational delay. It is the time that elapses between the moment when the read/write head settles over the desired data track and the moment when the first byte of the required data appears under the head.

 Access time: Access time is simply the sum of the seek time and rotational latency time.

Q 11. What do you mean by DMA channel?

Ans. DMA channel: A DMA channel is used to transfer data between main memory and a peripheral device. In order to perform the transfer of data, the DMA controller accesses the address and data buses.

The DMA controller needs the usual circuits of an interface to communicate with the CPU and the I/O device. In addition, it needs an address register, a word count register and a set of address lines. The address register and address lines are used for direct communication with memory; the word count register specifies the number of words that must be transferred. The data transfer may be done directly between the device and memory.


Figure 2 shows the block diagram of a typical DMA controller. The unit communicates with the CPU via the data bus and control lines. The registers in the DMA are selected by the CPU through the address bus by enabling the DS (DMA select) and RS (register select) inputs. The RD (read) and WR (write) inputs are bidirectional. When the BG (bus grant) input is 0, the CPU can communicate with the DMA registers through the data bus to read from or write to the DMA registers. When BG = 1, the CPU has relinquished the buses and the DMA can communicate directly with memory by specifying an address on the address bus and activating the RD or WR control. The DMA communicates with the external peripheral through the request and acknowledge lines using a handshaking procedure.

The DMA controller has three registers: an address register, a word count register and a control register. The address register contains an address that specifies the desired location in memory. The address bits go through bus buffers onto the address bus. The address register is incremented after each word that is transferred to memory. The word count register holds the number of words to be transferred; this register is decremented by one after each word transfer and internally tested for zero. The control register specifies the mode of transfer. All registers in the DMA appear to the CPU as I/O interface registers. Thus the CPU can read from or write into the DMA registers under program control via the data bus.

Q. 13. A RAM chip of 4096 x 8 bits has two enable lines. How many pins are needed for the integrated circuit package? Draw a block diagram and label all inputs and outputs of the RAM. What is the main feature of random access memory?

Ans.

(a) The total RAM capacity is 4096 x 8 and the size of each RAM chip is 1024 x 8, so the total number of RAM chips required is

    4096 / 1024 = 4

That means a total of 4 RAM chips of 1024 x 8 are required.


The number of address lines required to address each RAM chip of size 1024 x 8 is calculated as follows.

2^n = 1024, so n = 10; that means a 10-bit address is required to address each RAM chip of size 1024 x 8.

An 8-bit data bus is required because the word width of each 1024 x 8 RAM chip is 8 bits.

A 10-bit address bus is required to address each 1024 x 8 RAM chip. The 11th and 12th bits are used to select one of the four RAM chips, so a 12-bit address bus is used in total, as shown in the memory-address diagram. Note: the RAM IC described above is used in a microprocessor system having a 16-bit address bus and an 8-bit data bus. Its enable-1 input is active when the A15 and A14 bits are 0 and 1, and its enable-2 input is active when the A13 and A12 bits are 'X' and '0'.
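
The decoding described above can be sketched as follows; the snippet assumes a 12-bit address whose low 10 bits go to the selected 1024 x 8 chip and whose top 2 bits drive the chip-select decoder.

    CHIP_WORDS = 1024   # each RAM chip is 1024 x 8

    def decode(address):
        """Split a 12-bit address into (chip number, address within the chip)."""
        within_chip = address % CHIP_WORDS   # low 10 bits -> address lines of the chip
        chip = address // CHIP_WORDS         # top 2 bits -> 2-to-4 decoder selecting one of 4 chips
        return chip, within_chip

    for a in (0, 1023, 1024, 4095):
        print(a, decode(a))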

Q. 14. What is the range of addresses used by the RAM?

Ans. The RAM chip is better suited for communication with the CPU if it has one or more control inputs that select the chip only when needed. In addition, there is a bidirectional data bus that allows the transfer of data either from memory to the CPU during a read operation, or from the CPU to memory during a write operation. A bidirectional bus can be constructed with three-state buffers. A three-state buffer output can be placed in one of three possible states: a signal equivalent to logic 1, a signal equivalent to logic 0, or a high-impedance state. The logic 1 and 0 are normal digital signals. The high-impedance state behaves like an open circuit, which means that the output does not carry a signal and has no logic significance.


The block diagram of a RAM chip is shown in the figure. The capacity of the memory is 2^16 words of 8 bits per word. This requires a 16-bit address and an 8-bit bidirectional data bus.

If the A13 and A12 bits are 1 and 0, or 0 and 0, the chip is active and accepts input through the chip-select inputs CS1 and CS2.

If the A15 and A14 bits are 0 and 1, then one chip-select input, CS1 or CS2, is active.

General Functional table

Q. 15. Design a CPU that meets the following specifications.

Ans. The CPU can access 64 words of memory, each word being 8 bits long. The CPU does this by outputting a 6-bit address on its output pins A[5..0] and reading in the 8-bit value from memory on inputs D[7..0]. It has one 8-bit accumulator, a 6-bit address register, a 6-bit program counter, a 2-bit instruction register and an 8-bit data register.


The CPU must realise the following instruction set:


AC is Accumulator

MUX is Multiplexer

Here the instruction register holds a two-bit combination, i.e.

Instruction Code    Instruction    Operation
00                  ADD            AC <- AC + M[A]
01                  AND            AC <- AC AND M[A]
10                  JMP            AC <- M[A]
11                  INC            AC <- AC + 1
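
The instruction set can be sketched as a tiny Python simulator. The 8-bit instruction encoding (2-bit opcode in the high bits, 6-bit address field A in the low bits) is an assumption for illustration, and the JMP row is reproduced exactly as listed in the table above.

    def step(pc, ac, memory):
        """Execute one instruction of the 2-bit-opcode CPU described above.

        Assumption: each 8-bit word packs the opcode in its two high bits and
        the 6-bit address field A in its low bits.
        """
        word = memory[pc]
        opcode, addr = word >> 6, word & 0x3F
        pc = (pc + 1) & 0x3F                  # 6-bit program counter wraps at 64
        if opcode == 0b00:                    # ADD: AC <- AC + M[A]
            ac = (ac + memory[addr]) & 0xFF
        elif opcode == 0b01:                  # AND: AC <- AC AND M[A]
            ac = ac & memory[addr]
        elif opcode == 0b10:                  # JMP row, as listed above: AC <- M[A]
            ac = memory[addr]
        else:                                 # INC: AC <- AC + 1
            ac = (ac + 1) & 0xFF
        return pc, ac

    # Example: 64-word memory whose first word is INC (opcode 11, address field unused)
    mem = [0b11000000] + [0] * 63
    print(step(0, 5, mem))   # -> (1, 6)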

Q. 16. What are the advantages you got with virtual memory?

Ans. Virtual memory permits the user to construct programs as though a large memory space were available, equal to the totality of auxiliary memory. Each address that is referenced by the CPU goes through an address mapping from the so-called virtual address to a physical address in main memory.

The following are the advantages we get with virtual memory:

1. Virtual memory helps in improving the processor utilization.

2. Memory allocation is also an important consideration in computer programming due to high cost of main memory.

3. The function of the memory management unit is therefore to translate virtual address to the physical address.

4. Virtual memory enables a program to execute on a computer with less main memory than it needs.

5. Virtual memory is generally implemented by the demand paging concept. In demand paging, pages are only loaded into main memory when they are required.

6. Virtual memory gives the user the illusion of having main memory equal to the capacity of the secondary storage media.

Virtual memory is implemented by transferring data from the secondary storage media to main memory as and when necessary. The data replaced from main memory is written back to secondary storage according to a predetermined replacement algorithm. If the data swapped is of a fixed size, the concept is called paging. If the data swapped is of variable size, such as subroutines or matrices, it is called segmentation. Some operating systems combine segmentation and paging.

Q 17. Write about DMA transfer.

Ans. The CPU communicates with the DMA through the address and data buses as with any interface unit. The DMA has its own address, which activates the DS and RS lines. The CPU initializes the DMA through the data bus. Once the DMA receives the start control command, it can start the transfer between the peripheral device and the memory. When the peripheral device sends a DMA request, the DMA controller activates the BR line, informing the CPU to relinquish the buses. The CPU responds with its BG line, informing the DMA that its buses are disabled. The DMA then puts the current value of its address register on the address bus, initiates the RD or WR signal, and sends a DMA acknowledge to the peripheral device. The RD and WR lines in the DMA controller are bidirectional. The direction of transfer depends on the status of the BG line.

When BG = 0, the RD and WR are input lines allowing the CPU to communicate with the internal DMA registers. When BG = 1, the RD and WR are output lines from the DMA controller to the random-access memory to specify the read or write operation for the data.

Page 14: CAO Assignment

When the peripheral device receives a DMA acknowledge, it puts a word on the data bus (for write) or receives a word from the data bus (for read). Thus the DMA controls the read or write operations and supplies the address for the memory. The peripheral unit can then communicate with memory through the data bus for direct transfer between the two units while the CPU is momentarily disabled.

DMA transfer is very useful in many applications. It is used for fast transfer of information between magnetic disks and memory. It is also useful for updating the display in an interactive terminal. The contents of memory can be transferred to the screen periodically by means of DMA transfer.

Q. 18. What is memory organization? Explain the various memories.

Ans. The memory unit is an essential component in any digital computer since it is needed for storing programs and data. A very small computer with a limited application may be able to fulfill its intended task without the need of additional storage capacity. Most general-purpose computers run more efficiently if they are equipped with additional storage beyond the capacity of the main memory. There is just not enough space in one memory unit to accommodate all the programs used in a typical computer. Most computer users accumulate, and continue to accumulate, large amounts of data-processing software.

Therefore, it is more economical to use low-cost storage devices to serve as a backup for storing the information that is not currently used by the CPU. The unit that communicates directly with the CPU is called the main memory. Devices that provide backup storage are called auxiliary memory. The most common auxiliary memory devices used in computer systems are magnetic disks and tapes. They are used for storing system programs, large data files, and other backup information. Only programs and data currently needed by the processor reside in main memory. All other information is stored in auxiliary memory and transferred to main memory when needed.

There are the following types of memories:

1. Main memory

* RAM (Random - Access Memory)

* ROM (Read only Memory)

2. Auxiliary Memory

* Magnetic disks

* Magnetic tapes etc.

1. Main memory: The main memory is the central storage unit in a computer system. It is used to store programs and data during computer operation. The technology for main memory is based on semiconductor integrated circuits.

RAM (Random Access Memory): Integrated circuit RAM chips are available in two possible operating modes, static and dynamic. The static RAM consists of internal flip-flops that store the binary information. The dynamic RAM stores binary information in the form of electric charges applied to capacitors.


ROM: Most of the main memory in a general-purpose computer is made up of RAM integrated chips, but a portion of the memory may be constructed with ROM chips. ROM is also random access. It is used for storing programs that are permanently resident in the computer and for tables of constants that do not change in value once the production of the computer is completed.

2. Auxiliary memory: The most common auxiliary memory devices used in computer systems are magnetic disks and magnetic tapes. Other components used, but not as frequently, are magnetic drums, magnetic bubble memory, and optical disks. Understanding auxiliary memory devices requires knowledge of magnetics, electronics and electromechanical systems. The following are common auxiliary memories.

Magnetic disk: A magnetic disk is a circular plate constructed of metal or plastic coated with magnetizable material. Both sides of the disk are used, and several disks may be stacked on one spindle with read/write heads available on each surface. Bits are stored on the magnetized surface in spots along concentric circles called tracks. Tracks are commonly divided into sections called sectors. Disks that are permanently attached and cannot be removed by the occasional user are called hard disks. A disk drive with removable disks is called a floppy disk drive.

Magnetic tapes: A magnetic tape transport consists of the electrical, mechanical and electronic components that provide the parts and control mechanism for a magnetic tape unit. The tape itself is a strip of plastic coated with a magnetic recording medium. Bits are recorded as magnetic spots on the tape along several tracks. Seven or nine bits are recorded to form a character together with a parity bit. Read/write heads are mounted one on each track so that data can be recorded and read as a sequence of characters.

Q. 19. Compare interrupt I/O with DMA I/O.


Ans. The comparison between interrupt I/O and DMA I/O is given below.

Q 20. What is memory interleaving ? How is it different from cache memory?

Ans. The memory is partitioned into a number of modules connected to common memory address and data buses. A memory module is a memory array together with its own address and data registers. The figure shows a memory unit with four modules. Each memory array has its own address register AR and data register DR. The address registers receive information from a common address bus and the data registers communicate with a bidirectional data bus. The two least significant bits of the address can be used to distinguish between the four modules. The modular system permits one module to initiate a memory access while other modules are in the process of reading or writing a word, and each module can honor a memory request independent of the state of the other modules.


The advantage of a modular memory is that it allows the use of a technique called interleaving. In an interleaved memory, different sets of addresses are assigned to different memory modules. For example, in a two-module memory system the even addresses may be in one module and the odd addresses in the other. When the number of modules is a power of 2, the least significant bits of the address select a memory module and the remaining bits designate the specific location to be accessed within the selected module.

A modular memory is useful in systems with pipeline and vector processing. A vector processor that uses an n-way interleaved memory can fetch n operands from n different modules. By staggering the memory accesses, the effective memory cycle time can be reduced by a factor close to the number of modules. A CPU with an instruction pipeline can take advantage of multiple memory modules so that each segment in the pipeline can access memory independently of the memory accesses from other segments. Cache memory is different from memory interleaving.

The processor accesses the main memory for data, but the speed of the central processor is about 50 times faster than that of main memory, so the processor cannot be utilized to its full efficiency. The solution to this problem is to use a cache memory between the central processor and the main memory. Cache memory can provide data to the CPU at a faster rate than main memory. It is pronounced "cash", and it is a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed memory.

A cache memory is different from an interleaved memory: an interleaved memory is organized around its own address and data buses, whereas a cache memory sits between the central processor and the main memory. During any particular memory cycle, the cache is checked for the memory address being issued by the processor. If the requested data is available in the cache, this is called a cache hit. If the requested data is not available in the cache, this is called a cache miss. The cache replacement policy determines the data that will go out of the cache to accommodate the newly arriving data.

Hit ratio is defined as the number of hits in the cache divided by the total number of attempts made by the CPU. The hit ratio has a value ranging from 0 to 1. The higher the hit ratio, the better the performance of the system.

The cache access time is easy to work with compared to the interleaved memory access time. A simple example: the access time of main memory is 100 ns, the access time of cache memory is 10 ns and the hit ratio is 0.8. The average access time of the CPU is (8 x 10 + 2 x (100 + 10)) / 10 = 30 ns. Since the hit ratio is 0.8, the CPU will get the data from the cache memory 8 times out of 10 attempts, and in the remaining 2 attempts the CPU has to access the data from main memory.
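
The same calculation can be written as a short Python helper; the numbers are the ones used in the example above, and a miss is charged the main-memory time on top of the cache probe, as in the text.

    def average_access_time(hit_ratio, t_cache, t_main):
        """Average CPU access time when a miss costs a main-memory access in addition to the cache probe."""
        return hit_ratio * t_cache + (1 - hit_ratio) * (t_main + t_cache)

    print(average_access_time(0.8, 10, 100))   # 30.0 ns, as in the example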

The size of cache memory is up to 512 MB and the size of main memory is up to 512 MB in current-generation computers. So why not choose the size of the cache equal to that of the main memory? In that case no main memory would be required. That would work, but it would be incredibly expensive. The idea behind caching is to use a small amount of expensive fast memory to speed up a large amount of slower, less expensive memory.

Q. 21. What do you mean by initialization of the DMA controller? How does the DMA controller work? Explain with a suitable block diagram.


Ans. The DMA controller needs the usual circuits of an interface to communicate with the CPU and the I/O device. In addition, it needs an address register, a word count register, and a set of address lines. The address register and address lines are used for direct communication with the memory. The word count register specifies the number of words that must be transferred. The data transfer may be done directly between the device and memory under control of the DMA.

Figure 2 shows the block diagram of a typical DMA controller. The unit communicates with the CPU via the data bus and control lines. The registers in the DMA are selected by the CPU through the address bus by enabling the DS (DMA select) and RS (register select) inputs. The RD (read) and WR (write) inputs are bidirectional. When the BG (bus grant) input is 0, the CPU can communicate with the DMA registers through the data bus to read from or write to the DMA registers. When BG = 1, the CPU has relinquished the buses and the DMA can communicate directly with the memory by specifying an address on the address bus and activating the RD or WR control. The DMA communicates with the external peripheral through the request and acknowledge lines by using a handshaking procedure.

The DMA controller has three registers: an address register, a word count register, and a control register. The address register contains an address that specifies the desired location in memory. The address bits go through bus buffers onto the address bus. The address register is incremented after each word that is transferred to memory. The word count register holds the number of words to be transferred. This register is decremented by one after each word transfer and internally tested for zero. The control register specifies the mode of transfer. All registers in the DMA appear to the CPU as I/O interface registers. Thus the CPU can read from or write into the DMA registers under program control via the data bus.

Block diagram of DMA controller.

The initialization process is essentially a program consisting of I/O instructions that include the addresses for selecting particular DMA registers. The CPU initializes the DMA by sending the following information through the data bus:

1. The starting address of the memory block where data are available (for read) or where data are to be stored (for write).

2. The word count, which is the number of words in the memory block.

3. Control to specify the mode of transfer, such as read or write.

4. A control to start the DMA transfer.

The starting address is stored in the address register, the word count in the word count register, and the control information in the control register. Once the DMA is initialized, the CPU stops communicating with the DMA unless it receives an interrupt signal or wants to check how many words have been transferred.
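
The software side of this initialization can be sketched as a toy Python model; the register layout and the example values are assumptions for illustration and do not describe a specific controller.

    class DMAController:
        """Toy model of the three DMA registers described above."""

        def __init__(self):
            self.address = 0      # starting memory address of the block
            self.word_count = 0   # number of words still to be transferred
            self.control = 0      # mode of transfer (read or write) plus a start bit

        def initialize(self, start_address, word_count, mode, start=True):
            # The CPU writes these registers over the data bus, then leaves the
            # DMA alone until an interrupt (or a status poll) arrives.
            self.address = start_address
            self.word_count = word_count
            self.control = (mode << 1) | (1 if start else 0)

    dma = DMAController()
    dma.initialize(start_address=0x2000, word_count=128, mode=0b1)   # hypothetical values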


Q. 22. When a DMA module takes control of the bus, and while it retains control of the bus, what does the processor do?

Ans. The CPU communicates with the DMA through the address and data buses as with any interface unit. The DMA has its own address, which activates the DS and RS lines. The CPU initializes the DMA through the data bus. Once the DMA receives the start control command, it can start the transfer between the peripheral device and the memory. When the peripheral device sends a DMA request, the DMA controller activates the BR line, informing the CPU to relinquish the buses. The CPU responds with its BG line, informing the DMA that its buses are disabled. The DMA then puts the current value of its address register onto the address bus, initiates the RD or WR signal, and sends a DMA acknowledge to the peripheral device. Note that the RD and WR lines in the DMA controller are bidirectional. The direction of transfer depends on the status of the BG line. When BG = 0, the RD and WR are input lines allowing the CPU to communicate with the internal DMA registers. When BG = 1, the RD and WR are output lines from the DMA controller to the random access memory to specify the read or write operation for the data. When the peripheral device receives a DMA acknowledge, it puts a word on the data bus (for write) or receives a word from the data bus (for read). Thus the DMA controls the read or write operations and supplies the address for the memory. The peripheral unit can then communicate with memory through the data bus for direct transfer between the two units while the CPU is momentarily disabled.


For each word that is transferred, the DMA increments its address register and decrements its word count register. If the word count has not reached zero, the DMA checks the request line coming from the peripheral. For a high-speed device, the line will be active as soon as the previous transfer is completed. A second transfer is then initiated, and the process continues until the entire block is transferred. If the peripheral speed is slower, the DMA request line may come somewhat later. In this case the DMA disables the bus request line so that the CPU can continue to execute its program. When the peripheral requests a transfer, the DMA requests the buses again.

If the word count register reaches zero, the DMA stops any further transfer and removes its bus request. It also informs the CPU of the termination by means of an interrupt. When the CPU responds to the interrupt, it reads the contents of the word count register. The zero value of this register indicates that all the words were transferred successfully. The CPU can read this register at any time to check the number of words already transferred.
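
The per-word bookkeeping described above can be sketched as a simple loop; again this is a toy model rather than hardware-accurate behaviour.

    def dma_transfer(address, word_count, memory, device_words):
        """Move words from a device into memory with the DMA bookkeeping described above.

        Returns the updated (address, word_count); a word count of zero means the
        whole block was moved and the controller would interrupt the CPU.
        """
        for word in device_words:
            if word_count == 0:
                break                   # block finished: stop and remove the bus request
            memory[address] = word      # one word moved directly into memory
            address += 1                # address register incremented after each word
            word_count -= 1             # word count decremented and tested for zero
        return address, word_count

    mem = [0] * 64
    print(dma_transfer(16, 4, mem, [7, 8, 9, 10]))   # -> (20, 0)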

A DMA controller may have more than one channel. In this case, each channel has a request and acknowledge pair of control signals which are connected to separate peripheral devices. Each channel also has its own address register and word count register within the DMA controller. A priority among the channels may be established so that channels with high priority are serviced before channels with lower priority.

DMA transfer is very useful in many applications. It is used for fast transfer of information between magnetic disks and memory. It is also useful for updating the display in an interactive terminal. The contents of memory can be transferred to the screen by means of DMA transfer.

Q. 23. (a) How many 128 x 8 RAM chips are needed to provide a memory capacity of 2048 bytes?

(b) How many lines of the address bus must be used to access 2048 bytes of memory? How many of these lines will be common to all chips?

(c) How many lines must be decoded for chip select? Specify the size of the decoder.

Q. 24. A computer uses RAM chips of 1024 x 1 capacity.

(a) How many chips are needed, and how should their address lines be connected to provide a memory capacity of 1024 bytes?

(b) How many chips are needed to provide a memory capacity of 16K bytes? Explain in words how the chips are to be connected to the address bus. Specify the size of the decoders.


Q. 26. An 8-bit computer has a 16-bit address bus. The first 15 lines of the address are used to select a bank of 32K bytes of memory. The higher-order bit of the address is used to select a register which receives the contents of the data bus.

Explain how this configuration can be used to extend the memory capacity of the system to eight banks of 32K bytes each, for a total of 256K bytes of memory.


Ans. The processor selects the external register with an address 8000 hexadecimal.

Each bank of 32K bytes is selected by addresses 0000-7FFF. The processor loads an 8-bit number into the register with a single 1 and seven 0's. Each output of the register selects one of the 8 banks of 32K bytes through its chip-select input. A memory bank can be changed by changing the number in the register.

Q. 27. A hard disk with 5 platters has 2048 tracks/platter, 1024 sectors/track (a fixed number of sectors per track) and 512-byte sectors. What is its total capacity?

Ans. 512 bytes x 1024 sectors = 0.5 MB/track. Multiplying by 2048 tracks/platter gives 1 GB/platter, or 5 GB capacity in the drive. (In this problem we use the standard computer-architecture definitions of MB = 2^20 bytes and GB = 2^30 bytes; many hard disk manufacturers use MB = 1,000,000 bytes and GB = 1,000,000,000 bytes. These definitions are close, but not equivalent.)
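
The capacity arithmetic can be checked with a short Python sketch; the values are the ones given in the question, and MB and GB are taken as 2^20 and 2^30 bytes.

    def disk_capacity_bytes(platters, tracks_per_platter, sectors_per_track, bytes_per_sector):
        return platters * tracks_per_platter * sectors_per_track * bytes_per_sector

    capacity = disk_capacity_bytes(5, 2048, 1024, 512)
    print(capacity / 2**30, "GB")   # 5.0 GB, as computed above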

Q. 28. A manufacturer wishes to design a hard disk with a capacity of 30 GB or more (using the standard definition of 1 GB = 2^30 bytes). If the technology used to manufacture the disks allows 1024-byte sectors, 2048 sectors/track, and 4096 tracks/platter, how many platters are required?

Ans. Multiplying bytes per sector times sectors per track times tracks per platter gives a capacity of 8 GB (8 x 2^30 bytes) per platter. Therefore, 4 platters will be required to give a total capacity of at least 30 GB.


Q. 29. If a disk spins at 10,000 rpm, what is the average rotational latency time of a request? If a given track on the disk has 1024 sectors, what is the transfer time for a sector?

Ans. At 10,000 r/min, it takes 6 ms for a complete rotation of the disk. On average, the read/write head will have to wait for half a rotation before the needed sector reaches it, so the average rotational latency will be 3 ms. Since there are 1024 sectors on the track, the transfer time will be equal to the rotation time of the disk divided by 1024, or approximately 6 microseconds.

Q. 30. In a cache with 64-byte cache lines, how many bits are used to determine which byte within a cache line an address points to?

Ans. 2^6 = 64, so the low 6 bits of the address determine the byte within a cache line.

Q. 31. In a cache with 64-byte lines, what is the address of the first word in the cache line containing the address BEF3DE40H?

Ans. Cache lines are aligned on a multiple of their size, so the address of the first word in a line can be found by setting all of the bits that determine the byte within the line to 0. In this case, 6 bits are used to select a byte within the line, so we can find the starting address of the line by setting the low 6 bits of the address to 0, giving BEF3DE40H as the address of the first word in the line.


Q. 32. In a cache with 128-byte cache lines, what is the address of the first word in the cache line containing each of the following addresses?

(a) A23847F4H (b) 7245E824H (c) EEFABC24H

Ans. For 128-byte cache lines, the low 7 bits of the address indicate which byte within the line an address refers to. Since lines are aligned, the address of the first word in a line can be found by setting the bits of the address that determine the byte within the line to 0. Therefore, the addresses of the first byte in the lines containing the above addresses are as follows:

(a) A2384780H (b) 7245E800H (c) EEFABC00H

Q. 33. For a cache with a capacity of 32 KB, how many lines does the cache hold for line lengths of 32, 64 or 128 bytes?

Ans. The number of lines in a cache is simply the capacity divided by the line length, so the cache has 1024 lines with 32-byte lines, 512 lines with 64-byte lines, and 256 lines with 128-byte lines.

Q. 34. If a cache has a capacity of 16KB and a line length of 128 bytes, how many sets does the cache have if it is 2-way, 4-way, or 8-way set associative?

Ans. With 128-byte lines, the cache contains a total of 128 lines. The number of sets in the cache is the number of lines divided by the associativity, so the cache has 64 sets if it is 2-way set associative, 32 sets if it is 4-way set associative, and 16 sets if it is 8-way set associative.
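
The line and set counts above follow directly from the capacity, the line length and the associativity; a short sketch:

    def cache_geometry(capacity_bytes, line_bytes, associativity):
        """Return (number of lines, number of sets) for a set-associative cache."""
        lines = capacity_bytes // line_bytes
        sets = lines // associativity
        return lines, sets

    for ways in (2, 4, 8):
        print(ways, "-way:", cache_geometry(16 * 1024, 128, ways))   # 128 lines; 64, 32 and 16 sets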

Q. 35. If a cache memory has a hit rate of 75 percent, memory requests take 12 ns to complete if they hit in the cache, and memory requests that miss in the cache take 100 ns to complete, what is the average access time of the cache?

Ans. Using the formula, average access time = (T_hit x P_hit) + (T_miss x P_miss).

The average access time is (12 ns x 0.75) + (100 ns x 0.25) = 34 ns.

Q. 36. In a two-level memory hierarchy, if the cache has an access time of 8 ns and main memory has an access time of 60 ns, what is the hit rate in the cache required to give an average access time of 10 ns?

Ans. Using the formula, average access time = (T_hit x P_hit) + (T_miss x P_miss).

The average access time: 10 ns = (8 ns x hit rate) + 60 ns x (1 - hit rate). (The hit and miss rates at a given level should sum to 100 percent.) Solving for the hit rate, we get a required hit rate of 96.2%.

Q. 37. A two-level memory system has an average access time of 12 ns. The top level (cache memory) of the memory system has a hit rate of 90 percent and an access time of 5 ns. What is the access time of the lower level (main memory) of the memory system?

Ans. Using the formula, average access time = (T_hit x P_hit) + (T_miss x P_miss).

The average access time: 12 ns = (5 ns x 0.9) + (T_miss x 0.1).

Solving for T_miss, we get T_miss = 75 ns, which is the access time of main memory.

Q. 38. If a cache has 64-byte cache lines, how long does it take to fetch a cache line if the main memory takes 20 cycles to respond to each memory request and returns 2 bytes of data in response to each request?

Ans. Since the main memory returns 2 bytes of data in response to each request, 32 memory requests are required to fetch the line. At 20 cycles per request, fetching a cache line will take 640 cycles.

Q. 39. In a direct-mapped cache with a capacity of 16 KB and a line length of 32 bytes, how many bits are used to determine the byte that a memory operation references within a cache line, and how many bits are used to select the line in the cache that may contain the data?

Ans. 2^5 = 32, so 5 bits are required to determine which byte within a cache line is being referenced. With 32-byte lines, there are 512 lines in a 16 KB cache, so 9 bits are required to select the line that may contain the address (2^9 = 512).


Q. 40. The logical address space in a computer system consists of 128 segments. Each segment can have up to 32 pages of 4K words each. Physical memory consists of 4K blocks of 4K words each. Formulate the logical and physical address formats.

Ans. Logical address: a 7-bit segment field (128 segments), a 5-bit page field (32 pages per segment) and a 12-bit word field (4K words per page), giving a 24-bit logical address. Physical address: a 12-bit block field (4K blocks) and a 12-bit word field (4K words per block), giving a 24-bit physical address.


Q. 43. A memory system contains a cache, a main memory and a virtual memory. The access time of the cache is 5 ns, and it has an 80 percent hit rate. The access time of the main memory is 100 ns, and it has a 99.5 percent hit rate. The access time of the virtual memory is 10 ms. What is the average access time of the hierarchy?


Ans. To solve this sort of problem, we start at the bottom of the hierarchy and work up. Since the hit rate of the virtual memory is 100 percent, we can compute the average access time for requests that reach the main memory as (100 ns x 0.995) + (10 ms x 0.005) = 50,099.5 ns. Given this, the average access time for requests that reach the cache (which is all requests) is (5 ns x 0.80) + (50,099.5 ns x 0.20) = 10,024 ns.
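
The bottom-up method used here generalizes to any number of levels; the sketch below reproduces the numbers from the question, with the lowest level assumed to always hit.

    def hierarchy_access_time(levels):
        """levels: list of (access_time_ns, hit_rate) ordered from the lowest level upward."""
        time_below = levels[0][0]                    # virtual memory, assumed to always hit
        for access_time, hit_rate in levels[1:]:     # work upward: main memory, then cache
            time_below = hit_rate * access_time + (1 - hit_rate) * time_below
        return time_below

    # virtual memory 10 ms, main memory 100 ns at 99.5 %, cache 5 ns at 80 %
    print(hierarchy_access_time([(10_000_000, 1.0), (100, 0.995), (5, 0.80)]))   # about 10,024 ns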

Q. 44. Why does increasing the capacity of cache tend to increase its hit rate?

Ans. Increasing the capacity of a cache allows more data to be stored in the cache. If a program references more data than the capacity of the cache, increasing the cache's capacity will increase the fraction of the program's data that can be kept in the cache. This will usually increase the hit rate of the cache. If the program references less data than the capacity of the cache, increasing the capacity of the cache generally does not affect the hit rate, unless the change causes two or more lines that conflicted for space in the cache to no longer conflict, since the program does not need the extra space.

Q. 45. Extend a memory system to 4096 bytes of RAM and 4096 bytes of ROM, using 128 x 8 RAM chips and 512 x 8 ROM chips. List the memory address map and indicate what size decoders are needed if the CPU address bus has 16 lines.

Ans. Number of RAM chips = 4096 / 128 = 32.

Therefore, a 5 x 32 decoder is needed to select each of the 32 chips. Also 128 = 2^7, so the first 7 lines are used as address lines for the selected RAM chip.


Number of ROM chips = 4096 / 512 = 8.

Therefore, a 3 x 8 decoder is needed to select each of the 8 ROM chips. Also 512 = 2^9, so the first 9 lines are used as address lines for the selected ROM chip. Since 4096 = 2^12, there are 12 common address lines and 1 line to select between RAM and ROM. The memory address map is tabulated below.

Q. 46. A computer employs RAM chips of 256 x 8 and ROM chips of 1024 x 8. The computer system needs 2K bytes of RAM, 4K bytes of ROM and four interface units, each with four registers. A memory-mapped I/O configuration is used. The two highest-order bits of the address are assigned 00 for RAM, 01 for ROM, and 10 for the interface registers.

(a) How many RAM and ROM chips are needed?

(b) Draw a memory-address map for the system.


Q. 47. Which associativity techniques are used in the 486 and Pentium L1 and L2 caches?

Ans. The organization of the cache memory in the 486 and MMX Pentium family is called a four-way set-associative cache, which means that the cache memory is split into four blocks. Each block is also organised as 128 or 256 lines of 16 bytes each. The following table shows the associativity of various processor L1 and L2 caches.


Q. 48. Design cache organization in 486 and Pentium processors.

Ans. The contents of the cache must always be in sync with the contents of main memory to ensure that the processor is working with current data. For this reason, the internal cache in the 486 family is a write-through cache. Write-through means that when the processor writes information out to the cache, that information is automatically written through to main memory as well.

By comparison, the Pentium and later chips have an internal write-back cache, which means that both reads and writes are cached, further improving performance. Even though the internal 486 cache is write-through, the system can employ an external write-back cache for increased performance. In addition, the 486 can buffer up to 4 bytes before actually storing the data in RAM, improving efficiency in case the memory bus is busy.

Another feature of improved cache designs is that they are non-blocking. This is a technique for reducing or hiding memory delays by exploiting the overlap of processor operations with data accesses. A non-blocking cache enables program execution to proceed concurrently with cache misses as long as certain dependency constraints are observed. In other words, the cache can handle a cache miss much better and enable the processor to continue doing something not dependent on the missing data.

The cache controller built into the processor is also responsible for watching the memory bus when alternative processors, known as bus masters, are in control of the system. This process of watching the bus is referred to as bus snooping. If a bus master device writes to an area of memory that is also currently stored in the processor cache, the cache contents and memory no longer agree. The cache controller then marks this data as invalid and reloads the cache during the next memory access, preserving the integrity of the system.


All PC processor designs that support cache memory include a feature known as a translation lookaside buffer (TLB) to improve recovery from cache misses. The TLB is a table inside the processor that stores information about the location of recently accessed memory addresses. The TLB speeds up the translation of virtual addresses to physical memory addresses. To improve TLB performance, several recent processors have increased the number of entries in the TLB, as AMD did when it moved from the Athlon Thunderbird core to the Palomino core. Pentium 4 processors that support HT technology have a separate instruction TLB (iTLB) for each virtual processor thread.

As clock speeds increase, cycle time decreases. Newer systems don't use cache on the motherboard any longer because the faster DDR SDRAM or RDRAM used in modern Pentium 4/Celeron or Athlon systems can keep up with the motherboard speed. Modern processors all integrate the L2 cache into the processor die, just like the L1 cache. This enables the L2 to run at full core speed because it is now a part of the core. Cache speed is always more important than size. The rule is that a smaller but faster cache is always better than a slower but bigger cache. The table illustrates the need for and function of L1 (internal) and L2 (external) caches in modern systems.

Result: As you can see, having two levels of cache between the very fast CPU and the much slower main memory helps minimize any wait states the processor might have to endure, especially with the on-die L2. This enables the processor to keep working closer to its true speed.


Q. 49. What is segmentation ? Explain in detail.

Ans. Segmentation is a technique that involves having all of a program's code and data resident in RAM at run time. For a given system, this limits the number of programs that can be run simultaneously. Segment sizes can differ from program to program, which means the operating system must spend considerable time managing the memory system. The address generated by a segmented program is called a logical address, and the space used by that segmented program is known as the logical space.

Segmented-page Mapping

A property of a logical space is that it uses variable-length segments. The length of each segment is allowed to grow and contract according to the needs of the program being executed. One way of specifying the length of a segment is by associating with it a number of equal-size pages.

Consider the logical address shown in figure. The logical address is partitioned into three fields.

1. The segment field specifies a segment number.


2. The page field specifies the page within the segment.

3. The word field gives the specific word within the page. A page field of k bits can specify up to 2^k pages. A segment number may be associated with just one page or with as many as 2^k pages; thus the length of a segment would vary according to the number of pages assigned to it. The mapping of the logical address into a physical address is done by means of two tables, shown in the figure. The segment number of a logical address specifies the address for the segment table. The entry in the segment table is a pointer address for a page table base. The page table base is added to the page number given in the logical address. The sum produces a pointer address to an entry in the page table. The value found in the page table provides the block number in physical memory. The concatenation of the block field with the word field produces the final physical mapped address.

The two mapping tables may be stored in two separate small memories or in main memory. In either case, a memory reference from the CPU will require three accesses to memory: first to the segment table, second to the page table and third to main memory. This would slow the system significantly when compared to a conventional system that requires only one reference to memory. To avoid this speed penalty, a fast associative memory is used to hold the most recently referenced table entries. (This type of memory is sometimes called a translation lookaside buffer, abbreviated TLB.) The first time a given block is referenced, its value together with the corresponding segment and page numbers is entered into the associative memory. Thus the mapping process is first attempted by associative search with the given segment and page numbers. If a match occurs, the time taken by the search is only that of the associative memory. If no match occurs, the segmented-page mapping is used and the result is transferred into the associative memory for future reference.

For example, suppose the size of the secondary storage media is 16K words and the size of main memory is 1K words. 14 bits of address are required to map the secondary storage media and 10 bits are required to map main memory. In that case the CPU will generate a 14-bit logical address. Suppose 5 bits are given to the segment number. The 5-bit segment number specifies one of 32 possible segments. The 6-bit page number can specify up to 64 pages. The remaining 3 bits (14 - (5 + 6) = 3) imply 8 words in each page. Since the size of main memory is 1K words and each page contains 8 words, main memory can have 1K/8 = 1024/8 = 128 page frames. The logical address generated by the CPU is divided into three fields.

1. Segment field: It specifies the segment number. If there are n bits in the segment number, then there are 2^n segments in the secondary storage media. In our example above there are 5 bits for the segment, so there are 2^5 = 32 segments in the secondary storage media.

2. Page field: It specifies the page within a particular segment. If there are n bits in the page field, then a maximum of 2^n pages can be in any segment, although a particular segment may contain fewer than 2^n pages. In our example there are 6 bits for the page field, so a segment may have at most 2^6 = 64 pages.

3. Word field: It specifies a word within a specified page. If there are n bits in the word field, then there are 2^n words in each page. In our example there are 3 bits for the word field, so there are 2^3 = 8 words in each page.


Suppose the logical address generated by the CPU is (14, 03, 6) in octal, i.e. segment 14, page 03 and word 6. To map the logical address to a physical address, the segment and page tables are used. The segment number in the logical address indexes the segment table, and the segment table contains the address of the page table base. The steps for mapping the logical address to the physical address are described below.

1. The value in the segment table at the address provided by the segment number is added to the page number given in the logical address. In the logical address generated by the CPU, 14 is the segment number and the value at this segment address is 30. This value 30 is added to the page number 03 to get the page table entry address 33.

2. The page table entry corresponds to the page frame number of main memory where the requested page of the required segment is residing. Corresponding to address 33 in the page table, the page frame number is 2. That means the required page is lying in page frame number 2. The word address 6 from the logical address implies the 7th word in that page, because the first word has address 0 and the last (eighth) word has address 7.
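
This two-table walk can be sketched in Python. The tiny tables below only mirror the worked example (segment 14 octal maps to page-table base 30, and page-table entry 33 holds frame 2); the 8-word page size comes from the 3-bit word field.

    # Octal values from the worked example above.
    segment_table = {0o14: 0o30}   # segment number -> page table base
    page_table = {0o33: 2}         # page table entry -> page frame number

    WORDS_PER_PAGE = 8             # 3-bit word field

    def translate(segment, page, word):
        """Segmented-page mapping: logical (segment, page, word) -> physical address."""
        entry = segment_table[segment] + page     # page table base + page number
        frame = page_table[entry]                 # block (page frame) in main memory
        return frame * WORDS_PER_PAGE + word      # concatenate frame and word fields

    print(translate(0o14, 0o03, 6))   # frame 2, word 6 -> physical address 22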

        


Q. 50. What is virtual memory? Explain address mapping using pages.

Ans. Virtual memory was developed to automate the movement of program code and data between main memory and secondary storage, to give the appearance of a single large store.

1. This technique greatly simplified the programmer's job, particularly when program code and data exceeded the main memory's size. The basic technology proved readily adaptable to modern multiprogramming environments, which, in addition to a "virtual" single-level memory, also require support for large address spaces, process protection, address space organization and the execution of processes only partially residing in memory.

2. Consequently, virtual memory has become widely used, and most processors have hardware to support it.

3. Virtual memory is stored in a hard disk image; the physical memory holds a small number of virtual pages in physical page frames. Our focus is on the mechanisms and structures popular in today's operating systems and microprocessors, which are geared toward demand-paged virtual memory.

Address mapping using pages: Virtual memory stores only the most often used portions of an address space in main memory and retrieves other portions from a disk as needed. The virtual memory space is divided into pages identified by virtual page numbers, which are mapped to page frames, as shown in the figure. As the figure shows, the virtual memory space is divided into uniform virtual pages, each of which is identified by a virtual page number. The physical memory is divided into uniform page frames, each identified by a page frame number. The page frames are so named because they frame, or hold, a page's data. At its simplest, then, virtual memory is a mapping of virtual page numbers to page frame numbers.


The mapping is a function, i.e. a given virtual page can have only one physical location. However, the inverse mapping, from page frame numbers to virtual page numbers, is not necessarily a function, and thus it is possible to have several pages mapped to the same page frame.

The table implementation of the address mapping is simplified if the information in the address space and the memory space are each divided into groups of fixed size. The physical memory is broken down into groups of equal size called blocks. The term page refers to groups of address space of the same size made by the programmer. Programs are also split into pages. Portions of a program are moved from auxiliary memory to main memory in records equal to the size of a page. The term "page frame" is sometimes used to denote a block. A simple mapping between a virtual and a physical memory is shown in the figure.

Let us illustrate with an address space (virtual memory) of 8K and a memory space (physical memory) of 4K. If we split each into groups of 1K words, we get eight pages and four page frames, as shown in the figure. At any given time, up to four pages of virtual memory may reside in main memory in any one of the four page frames.

Address Mapping Using a Memory Page Table

The organization of the memory mapping is shown in the next figure. The memory page table in a paged system consists of eight words, one for each page. The address in the page table specifies the page number and the contents of the word give the frame number where the page is stored in main memory. The table shows that pages 1, 3, 6 and 7 are now available in main memory in page frames 1, 2, 0 and 3 respectively. A presence bit in each location indicates whether the page has been transferred from auxiliary memory into main memory. A '1' in the presence bit indicates that the page is available in main memory and a '0' indicates that it is not.

The CPU references a word in memory with a virtual address of 13 bits. The three higher-order bits of the virtual address specify a page number, and also serve as an address into the memory page table. The content of the word in the memory page table at the page-number address is read out into the memory table buffer register. If the presence bit is 1, the frame number thus read is transferred to the two high-order bits of the main memory address register. A read signal to main memory then transfers the contents of the word to the memory buffer register, ready to be used by the CPU. If the presence bit is 0, the word referenced by the virtual address does not reside in main memory. A call to the OS (operating system) is then generated to fetch the required page from auxiliary memory and transfer it into main memory before resuming computation.
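
The lookup described above can be sketched with the eight-page example (pages 1, 3, 6 and 7 resident in frames 1, 2, 0 and 3); the page-fault handling is reduced to raising an exception.

    PAGE_SIZE = 1024   # 1K words per page, as in the example

    # page number -> (presence bit, page frame); pages 1, 3, 6 and 7 are in frames 1, 2, 0 and 3
    page_table = {0: (0, None), 1: (1, 1), 2: (0, None), 3: (1, 2),
                  4: (0, None), 5: (0, None), 6: (1, 0), 7: (1, 3)}

    def virtual_to_physical(virtual_address):
        """Map a 13-bit virtual address to a physical address, or signal a page fault."""
        page = virtual_address // PAGE_SIZE     # three high-order bits of the virtual address
        offset = virtual_address % PAGE_SIZE    # word within the page
        present, frame = page_table[page]
        if not present:
            raise LookupError("page fault: the OS must load the page from auxiliary memory")
        return frame * PAGE_SIZE + offset

    print(virtual_to_physical(3 * PAGE_SIZE + 5))   # page 3 -> frame 2 -> address 2053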
