
May 2012

Bachelor of Computer Application (BCA) – Semester 3

BC0046 – Microprocessor – 4 Credits

(Book ID: B0807)

Assignment Set – 1 (60 Marks)

Submitted by:

Deepak Kumar


Q1. Convert the decimal number 231.23 to octal and hexadecimal.

Answer:

231.23 ~= 11100111.001110 in binary.

To get the octal or hexadecimal representation, group the bits by three or four, aligned at the binary point:

11 100 111 . 001 110 = 347.16 (±0.01) in octal

1110 0111 . 0011 10 = E7.38 (±0.04) in hexadecimal

Q2. Draw and explain the internal architecture of 8085.

System Bus

A typical system uses a number of buses, each a collection of wires that transmits binary numbers, one bit per wire. A typical microprocessor communicates with memory and other devices (input and output) using three buses: the Address Bus, the Data Bus and the Control Bus.


Fig 2.1: Internal Architecture of 8085

Address Bus

One wire is used for each bit, therefore 16 bits = 16 wires. The binary number carried alerts memory to ‘open’ the designated box; data (binary) can then be put in or taken out. The Address Bus consists of 16 wires, therefore 16 bits; its “width” is 16 bits. A 16-bit binary number allows 2^16 = 65,536 different numbers, from 0000000000000000 up to 1111111111111111. Because memory consists of boxes, each with a unique address, the size of the address bus determines the size of memory that can be used. To communicate with memory the microprocessor sends an address on the address bus, e.g. 0000000000000011 (3 in decimal), to the memory. The memory selects box number 3 for reading or writing data. The address bus is unidirectional, i.e. numbers are only sent from the microprocessor to memory, not the other way.

Data Bus


The Data Bus carries ‘data’, in binary form, between the microprocessor and other external units, such as memory. Typical size is 8 or 16 bits. The Data Bus typically consists of 8 wires, giving 2^8 = 256 combinations of binary digits. The data bus is used to transmit “data”, i.e. information, results of arithmetic, etc., between memory and the microprocessor. The bus is bi-directional. The size of the data bus determines what arithmetic can be done. If it is only 8 bits wide, the largest number it can carry is 11111111 (255 in decimal), so larger numbers have to be broken down into 8-bit chunks and transferred a byte at a time. This slows the microprocessor. The Data Bus also carries instructions from memory to the microprocessor. The size of the bus therefore limits the number of possible instructions to 256, each specified by a separate number.

Control Bus

The Control Bus comprises various lines which have specific functions for coordinating and controlling microprocessor operations, e.g. the Read/NotWrite line, a single binary digit that controls whether memory is being ‘written to’ (data stored in memory) or ‘read from’ (data taken out of memory): 1 = Read, 0 = Write. It may also include clock line(s) for timing/synchronising, ‘interrupts’, ‘reset’, etc. A typical microprocessor has about 10 control lines. It cannot function correctly without these vital control signals.


Q3. Draw and explain the internal architecture of 8086.

Figure 3.1 shows the internal architecture of the 8086. Except for the instruction register, which is actually a 6-byte queue, the control unit and working registers are divided into three groups according to their functions. There is a data group, which is essentially the set of arithmetic registers; the pointer group, which includes base and index registers, but also contains the program counter and stack pointer; and the segment group, which is a set of special purpose base registers. All of the registers are 16 bits wide.

The data group consists of the AX, BX, CX and DX registers. These registers can be used to store both operands and results and each of them can be accessed as a whole, or the upper and lower bytes can be accessed separately. For example, either the 2 bytes in BX can be used together, or the upper byte BH or the lower byte BL can be used by itself by specifying BH or BL, respectively.
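A small sketch of this byte/word access (the values used are arbitrary):

MOV BX, 1234H   ; loads the full 16-bit register: BH = 12H, BL = 34H
MOV BL, 0FFH    ; changes only the low byte, so BX becomes 12FFH
MOV BH, 0       ; changes only the high byte, so BX becomes 00FFH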

Fig 3.1: Internal Architecture of 8086

In addition to serving as arithmetic registers, the BX, CX and DX registers play special addressing, counting, and I/O roles:

BX may be used as a base register in address calculations.

CX is used as an implied counter by certain instructions.


DX is used to hold the I/O address during certain I/O operations.

The pointer and index group consists of the IP, SP, BP, SI, and DI registers. The instruction pointer IP and SP registers are essentially the program counter and stack pointer registers, but the complete instruction and stack addresses are formed by adding the contents of these registers to the contents of the code segment (CS) and stack segment (SS) registers. BP is a base register for accessing the stack and may be used with other registers and/or a displacement that is part of the instruction. The SI and DI registers are for indexing. Although they may be used by themselves, they are often used with the BX or BP registers and/or a displacement. Except for the IP, a pointer can be used to hold an operand, but must be accessed as a whole.

To provide flexible base addressing and indexing, a data address may be formed by adding together a combination of the BX or BP register contents, SI or DI register contents, and a displacement. The result of such an address computation is called an effective address (EA) or offset. The word “displacement” is used to indicate a quantity that is added to the contents of a register (or registers) to form an EA. The final data address, however, is determined by the EA and the appropriate data segment (DS), extra segment (ES), or stack segment (SS) register.

The segment group consists of the CS, SS, DS, and ES registers. As indicated above, the registers that can be used for addressing, the BX, IP, SP, BP, SI, and DI registers, are only 16 bits wide and, therefore, an effective address has only 16 bits. On the other hand, the address put on the address bus, called the physical address, must contain 20 bits. The extra 4 bits are obtained by adding the effective address to the contents of one of the segment registers as shown in Fig. 3.2. The addition is carried out by appending four 0 bits to the right of the number in the segment register before the addition is made; thus a 20-bit result is produced. As an example, if (CS) = 1234 and (IP) = 0002, then the next instruction will be fetched from

   0002   effective address
+ 12340   beginning segment address
  -----
  12342   physical address of the instruction


[It is standard notation for parentheses around an entity to mean "contents of," e.g., (IP) means the contents of IP. Also, all addresses are given in hexadecimal.]

Fig. 3.2: Generation of physical address

The use of the segment registers essentially divides the memory space into overlapping segments, with each segment being 64K bytes long and beginning at a 16-byte, or paragraph, boundary, i.e., beginning at an address that is divisible by 16. We will hereafter refer to the contents of a segment register as the segment address, and the segment address multiplied by 16 as the beginning physical segment address, or simply, the beginning segment address. An illustration of the example above is given in Fig. 3.3(a) and the overall segmentation of memory is shown in Fig. 3.3(b).

The advantages of using segment registers are that they:

1. Allow the memory capacity to be 1 MB even though the addresses associated with the individual instructions are only 16 bits wide.

2. Allow the instruction, data, or stack portion of a program to be more than 64K bytes long, by using more than one code, data, or stack segment.

3. Facilitate the use of separate memory areas for a program, its data, and the stack.

4. Permit a program and/or its data to be put into different areas of memory each time the program is executed.


Fig. 3.3: Memory segmentation

The 8086’s PSW contains 16 bits, but 7 of them are not used. Each bit in the PSW is called a flag. The 8086 flags are divided into the conditional flags, which reflect the result of the previous operation involving the ALU, and the control flags, which control the execution of special functions. The flags are summarized in Fig. 3.4.

The condition flags are:

SF (Sign Flag)-Is equal to the MSB of the result. Since in 2’s complement negative numbers have a 1 in the MSB and nonnegative numbers have a 0, this flag indicates whether the previous result was negative or nonnegative.

ZF (Zero Flag)-Is set to 1 if the result is zero and 0 if the result is nonzero.

PF (Parity Flag)-Is set to 1 if the low-order 8 bits of the result contain an even number of 1’s; otherwise it is cleared.

CF (Carry Flag)-An addition causes this flag to be set if there is a carry out of the MSB, and a subtraction causes it to be set if a borrow is needed. Other instructions also affect this flag and its value will be discussed when these instructions are defined.


AF (Auxiliary Carry Flag)-Is set if there is a carry out of bit 3 during an addition or a borrow by bit 3 during a subtraction. This flag is used exclusively for BCD arithmetic.

OF (Overflow Flag)-Is set if an overflow occurs, i.e., a result is out of range. More specifically, for addition this flag is set when there is a carry into the MSB and no carry out of the MSB or vice versa. For subtraction, it is set when the MSB needs a borrow and there is no borrow from the MSB or vice versa.

As an example, if the previous instruction performed the addition

0010 0011 0100 1101

+ 0011 0010 0001 0001

0101 0101 0101 1110

then following the instruction:

SF=0 ZF=0 PF=0 CF=0 AF=0 OF=0

Fig 3.4: PSW register of 8086

If the previous instruction performed the addition

0101 0100 0011 1001

+ 0100 0101 0110 1010

1001 1001 1010 0011

then the flags would be:

SF = 1 ZF = 0 PF = 1 CF = 0 AF = 1 OF = 1


The control flags are:

DF (Direction Flag)-Used by string manipulation instructions. If zero, the string is processed from its beginning with the first element having the lowest address. Otherwise, the string is processed from the high address towards the low address.

IF (Interrupt Enable Flag)-If set, a certain type of interrupt (a maskable interrupt) can be recognized by the CPU; otherwise, these interrupts are ignored.

TF (Trap Flag)-If set, a trap is executed after each instruction.


Q4. Write a sequence of instructions to reverse a two digit hexadecimal number available in the register AX using shift and rotate instructions?

Fig 4.15: Shift and rotate instructions

The shift and rotate instructions for the 8086 are defined in Fig. 4.15. These instructions shift all of the bits in the operand to the left or right by the specified count, CNT. For the shift left instructions, zeroes are shifted into the right end of the operand and the MSBs are shifted out the left end and lost, except that the last bit shifted out is retained in the CF flag. The shift right instructions similarly shift bits to the right; however, SAR (shift arithmetic right) does not automatically insert zeroes from the left; it extends the sign of the operand by repeatedly inserting the MSB. The rotate instructions differ from the shift instructions in that the operand is treated like a circle in which the bits shifted out of one end are shifted into the other end. The RCL and RCR instructions include the carry flag in the circle and ROL and ROR do not, although the carry flag is affected in all cases.
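A minimal sequence answering the question, assuming the two hexadecimal digits are held in AL (the low byte of AX); rotating by four bit positions exchanges the two nibbles, and on the 8086 any count other than 1 must be supplied in CL:

MOV CL, 4     ; rotate count (the 8086 accepts only 1 or CL as a shift/rotate count)
ROL AL, CL    ; rotate left by 4 bits: if AL = 3AH it becomes A3H

ROR AL, CL would give the same result, since rotating an 8-bit register by four positions in either direction swaps its two hex digits.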


Q5. Explain the concept of Linking and Relocation.

Linking and Relocation

In constructing a program some program modules may be put in the same source module and assembled together; others may be in different source modules and assembled separately. If they are assembled separately, then the main module, which has the first instruction to be executed, must be terminated by an END statement with the entry point specified, and each of the other modules must be terminated by an END statement with no operand. In any event, the resulting object modules, some of which may be grouped into libraries, must be linked together to form a load module before the program can be executed. In addition to outputting the load module, the linker normally prints a memory map that indicates where the linked object modules will be loaded into memory. After the load module has been created it is loaded into the memory of the computer by the loader and execution begins. Although the I/O can be performed by modules within the program, normally the I/O is done by I/O drivers that are part of the operating system. All that appears in the user’s program are references to the I/O drivers that cause the operating system to execute them.

The general process for creating and executing a program is illustrated in Fig. 5.1. The process for a particular system may not correspond exactly to the one diagrammed in the figure, but the general concepts are the same. The arrows indicate that corrections may be made after any one of the major stages.


Fig. 5.1: Creation and Execution of a program

Segment Combination

In addition to the linker commands, the ASM-86 assembler provides a means of regulating the way segments in different object modules are organized by the linker. Sometimes segments with the same name are concatenated and sometimes they are overlaid. Just how the segments with the same name are joined together is determined by modifiers attached to the SEGMENT directives. A SEGMENT directive may have the form

Segment name SEGMENT Combine-type

where the combine-type indicates how the segment is to be located within the load module. Segments that have different names cannot be combined and segments with the same name but no combine-type will cause a linker error. The possible combine-types are:

PUBLIC-If the segments in different object modules have the same name and the combine-type PUBLIC, then they are concatenated into a single segment in the load module. The ordering in the concatenation is specified by the linker command.


COMMON-If the segments in different object modules have the same name and the combine-type is COMMON, then they are overlaid so that they have the same beginning address. The length of the common segment is that of the longest segment being overlaid.

STACK-If segments in different object modules have the same name and the combine-type STACK, then they become one segment whose length is the sum of the lengths of the individually specified segments. In effect, they are combined to form one large stack.

AT-The AT combine-type is followed by an expression that evaluates to a constant which is to be the segment address. It allows the user to specify the exact location of the segment in memory.

MEMORY-This combine-type causes the segment to be placed at the end of the load module. If more than one segment with the MEMORY combine-type is being linked, only the first one will be treated as having the MEMORY combine-type; the others will be overlaid as if they had the COMMON combine-type.
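As a rough sketch of how the combine-types appear in source code (the segment and variable names here are made up for illustration):

; In object module 1
SHARED_DATA SEGMENT PUBLIC   ; concatenated with same-named PUBLIC segments
TABLE1 DW 100 DUP(?)
SHARED_DATA ENDS

; In object module 2
SHARED_DATA SEGMENT PUBLIC   ; the linker appends this to the segment above
TABLE2 DW 50 DUP(?)
SHARED_DATA ENDS

STACK_SEG SEGMENT STACK      ; same-named STACK segments are joined into one large stack
DW 128 DUP(?)
STACK_SEG ENDS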

Access to External Identifiers

Clearly, object modules that are being linked together must be able to refer to each other. That is, there must be a way for a module to reference at least some of the variables and/or labels in the other modules. If an identifier is defined in an object module, then it is said to be a local (or internal) identifier relative to the module, and if it is not defined in the module but is defined in one of the other modules being linked, then it is referred to as an external (or global) identifier relative to the module.

For single-object module programs all identifiers that are referenced must be locally defined or an assembler error will occur. For multiple-module programs, the assembler must be informed in advance of any externally defined identifiers that appear in a module so that it will not treat them as being undefined. Also, in order to permit other object modules to reference some of the identifiers in a given module, the given module must include a list of the identifiers to which it will allow access. Therefore, each module may contain two lists, one containing the external identifiers it references and one containing the locally defined identifiers that can be referred to by other modules. These two lists are implemented by the EXTRN and PUBLIC directives, which have the forms:


EXTRN Identifier:Type, . . . , Identifier:Type

and

PUBLIC Identifier, . . ., Identifier

where the identifiers are the variables and labels being declared as external or as being available to other modules. Because the assembler must know the type of all external identifiers before it can generate the proper machine code, a type specifier must be associated with each identifier in an EXTRN statement. For a variable the type may be BYTE, WORD, or DWORD and for a label it may be NEAR or FAR. In the statement

INC VAR1

if VAR1 is external and is associated with a word, then the module containing the statement must also contain a directive such as

EXTRN VAR1:WORD

and the module in which VAR1 is defined must contain a statement of the form

PUBLIC VAR1
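A schematic sketch of how the two directives pair up across modules (the module and segment names are hypothetical, and the ASSUME/segment setup is abbreviated, so this is not a complete program):

; Module A - references VAR1, which it does not define
EXTRN VAR1:WORD        ; the type lets the assembler generate the word form of INC
CODEA SEGMENT
ASSUME CS:CODEA
START: INC VAR1        ; the offset of VAR1 is filled in by the linker
CODEA ENDS
END START              ; main module: END names the entry point

; Module B - defines VAR1 and exports it
PUBLIC VAR1
DATAB SEGMENT
VAR1 DW 0
DATAB ENDS
END                    ; non-main module: END has no operand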

One of the primary tasks of the linker is to verify that every identifier appearing in an EXTRN statement is matched by one in a PUBLIC statement. If this is not the case, then there will be an undefined external reference and a linker error will occur. Fig. 5.2 shows three modules and how the matching is done by the linker while joining them together.


Fig. 5.2: Illustration of the matching verified by the linker

As we have seen, there are two parts to every address, an offset and a segment address. The offsets for the local identifiers can be, and are, inserted by the assembler, but the offsets for the external identifiers and all segment addresses must be inserted by the linking process. The offsets associated with all external references can be assigned once all of the object modules have been found and their external symbol tables have been examined. The assignment of the segment addresses is called relocation and is done after the linking process has determined exactly where each segment is to be put in memory.


Q6. Differentiate macros and procedures.

A macro is a group of repetitive instructions in a program which are coded only once and can be used as many times as necessary.

The main difference between a macro and a procedure is that a macro allows parameters to be passed while a procedure does not; this applies to MASM, as there are other assemblers and languages which do allow it. At the moment the macro is expanded, each parameter is substituted by the name or value specified at the time of the call.

We can say, then, that a procedure is an extension of a given program, while a macro is a module with specific functions which can be used by different programs.

Another difference between a macro and a procedure is the way each one is called: a procedure requires an explicit CALL instruction, whereas a macro is invoked as if it were just another assembler instruction. Example of a procedure: suppose we want a routine which adds the two bytes stored in AH and AL and keeps the sum in the BX register:

Adding Proc Near    ; declaration of the procedure
Mov Bx, 0           ; clear BX, which will hold the sum
Mov Bl, Ah          ; BL = first byte (copied from AH)
Mov Ah, 00          ; AX now holds only the second byte (AL)
Add Bx, Ax          ; BX = AH + AL
Ret                 ; return directive
Adding Endp         ; end of procedure declaration

and an example of Macro:

Position MACRO Row, Column
PUSH AX
PUSH BX
PUSH DX
MOV AH, 02H
MOV DH, Row
MOV DL, Column
MOV BH, 0
INT 10H
POP DX
POP BX
POP AX
ENDM
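A brief sketch of how each of the above would then be invoked (the operand values are arbitrary):

Mov Ah, 25H        ; first byte to add
Mov Al, 3CH        ; second byte to add
Call Adding        ; procedure: an explicit CALL, sum returned in BX

Position 10, 20    ; macro: written like an instruction; the assembler expands
                   ; its body here with Row = 10 and Column = 20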


Q8. Describe each flag of the 8086 flag register.

The 8086 CPU has 8 general purpose registers; each register has its own name:

AX - the accumulator register (divided into AH / AL):

1. Generates shortest machine code
2. Arithmetic, logic and data transfer
3. One number must be in AL or AX
4. Multiplication & Division
5. Input & Output

BX - the base address register (divided into BH / BL).

CX - the count register (divided into CH / CL):

1. Iterative code segments using the LOOP instruction
2. Repetitive operations on strings with the REP command
3. Count (in CL) of bits to shift and rotate

DX - the data register (divided into DH / DL):

1. DX:AX concatenated into 32-bit register for some MUL and DIV operations

2. Specifying ports in some IN and OUT operations

SI - source index register:

1. Can be used for pointer addressing of data
2. Used as source in some string processing instructions
3. Offset address relative to DS

DI - destination index register:

1. Can be used for pointer addressing of data
2. Used as destination in some string processing instructions
3. Offset address relative to ES


BP - base pointer:

1. Primarily used to access parameters passed via the stack
2. Offset address relative to SS

SP - stack pointer:

1. Always points to the top item on the stack
2. Offset address relative to SS
3. Always points to a word (a byte at an even address)
4. An empty stack has SP = FFFEh

SEGMENT REGISTERS

CS - points at the segment containing the current program.

DS - generally points at segment where variables are defined.

ES - extra segment register, it's up to a coder to define its usage.

SS - points at the segment containing the stack.

Although it is possible to store any data in the segment registers, this is never a good idea. The segment registers have a very special purpose - pointing at accessible blocks of memory.

Segment registers work together with general purpose registers to access any memory value. For example, if we would like to access memory at the physical address 12345h (hexadecimal), we could set DS = 1230h and SI = 0045h. This way we can access much more memory than with a single register, which is limited to 16-bit values. The CPU calculates the physical address by multiplying the segment register by 10h and adding the general purpose register to it (1230h * 10h + 45h = 12345h).

The address formed with 2 registers is called an effective address.
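A small sketch of the example above in 8086 assembly (the addresses are just the ones used in the text; note that a segment register cannot be loaded with an immediate value, so AX is used as an intermediary):

MOV AX, 1230h    ; segment value goes through AX first
MOV DS, AX       ; DS = 1230h
MOV SI, 0045h    ; offset within the segment
MOV AL, [SI]     ; reads the byte at physical address 1230h * 10h + 45h = 12345h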

By default the BX, SI and DI registers work with the DS segment register; BP and SP work with the SS segment register. Other general purpose registers cannot form an effective address.

Also, although BX can form an effective address, BH and BL cannot.

SPECIAL PURPOSE REGISTERS

IP - the instruction pointer:

1. Always points to the next instruction to be executed
2. Offset address relative to CS

IP register always works together with CS segment register and it points to currently executing instruction.

FLAGS REGISTER

Flags Register - determines the current state of the processor. The flags are modified automatically by the CPU after mathematical operations; this allows the program to determine the type of the result and the conditions for transferring control to other parts of the program. Generally you cannot access these registers directly.

1. Carry Flag (CF) - this flag is set to 1 when there is an unsigned overflow. For example, when you add bytes 255 + 1 (the result is not in the range 0...255). When there is no overflow this flag is set to 0.

2. Parity Flag (PF) - this flag is set to 1 when there is an even number of one bits in the result, and to 0 when there is an odd number of one bits.

3. Auxiliary Flag (AF) - set to 1 when there is an unsigned overflow out of the low nibble (4 bits).

4. Zero Flag (ZF) - set to 1 when result is zero. For non-zero result this flag is set to 0.

5. Sign Flag (SF) - set to 1 when result is negative. When result is positive it is set to 0. (This flag takes the value of the most significant bit.)

6. Trap Flag (TF) - used for on-chip debugging.

7. Interrupt enable Flag (IF) - when this flag is set to 1 the CPU reacts to interrupts from external devices.

8. Direction Flag (DF) - this flag is used by some instructions to process data chains; when this flag is set to 0 the processing is done forward, and when it is set to 1 the processing is done backward.

9. Overflow Flag (OF) - set to 1 when there is a signed overflow. For example, when you add bytes 100 + 50 (result is not in range -128...127).


Q9. Write an assembly program to add and display two numbers.

Program

MVI D, 8BH    ; load the first number (8BH) into register D
MVI C, 6FH    ; load the second number (6FH) into register C
MOV A, C      ; copy the second number into the accumulator
ADD D         ; A = A + D = 6FH + 8BH = FAH
OUT PORT1     ; send the sum to the output port (PORT1 is the address of the display port)
HLT           ; stop the processor


BC0046 – Microprocessor

Assignment Set – 2

Q2. When working with strings, what are the advantages of the MOVS and CMPS instructions over the MOV and CMP instructions?

When working with strings, the advantages of the MOVS and CMPS instructions over the MOV and CMP instructions are:

1. They are only 1 byte long.

2. Both operands are memory operands.

3. Their auto-indexing obviates the need for separate incrementing or decrementing instructions, thus decreasing overall processing time.

As an example consider the problem of moving the contents of a block of memory to another area in memory. A solution that uses only the MOV instruction, which cannot perform a memory-to-memory transfer, is shown in Fig. 6.2(a).


Fig. 6.2: Program sequences for moving a block of data

A solution that employs the MOVS instruction is given in Fig. 6.2(b). Note that the second program sequence may move either bytes or words, depending on the type of STRING1 and STRING2.
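Since Fig. 6.2(b) itself is not reproduced in this transcript, here is a rough sketch of what such a MOVS-based sequence looks like (STRING1, STRING2 and the count of 100 are assumptions made for illustration; DS and ES must already address the segments containing the two blocks):

LEA SI, STRING1   ; DS:SI points to the source block
LEA DI, STRING2   ; ES:DI points to the destination block
MOV CX, 100       ; number of bytes to move (assumed length)
CLD               ; DF = 0, so SI and DI auto-increment
REP MOVSB         ; move CX bytes; MOVSW would move words instead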


Q3. Explain the working of DMA.

If the data transfer rate to or from an I/O device is relatively low, then the communication can be performed using either programmed or interrupt I/O. Some devices, such as magnetic tape and disk units and analog-to-digital converters, may operate at data rates that are too high to be handled by byte or word transfers. Data rates for I/O and mass storage devices are often determined by the devices, not the CPU, and the computer must be capable of executing I/O according to the maximum speed of the device. For a disk unit the data rate is determined by the speed with which data pass under the read/write head, and quite often this rate exceeds 200,000 bytes per second. Thus, there is less than 5 microseconds to transfer each byte to or from memory. For data rates of this magnitude, block transfers, which use DMA controllers to communicate directly with memory, are required.

During any given bus cycle, one of the system components connected to the system bus is given control of the bus. This component is said to be the master during that cycle and the component it is communicating with is said to be the slave. The CPU with its bus control logic is normally the master, but other specially designed components can gain control of the bus by sending a bus request to the CPU. After the current bus cycle is completed the CPU will return a bus grant signal and the component sending the request will become the master. Taking control of the bus for a bus cycle is called cycle stealing. Just like the bus control logic, a master must be capable of placing addresses on the address bus and directing the bus activity during a bus cycle. The components capable of becoming masters are processors (and their bus control logic) and DMA controllers. Sometimes a DMA controller is associated with a single interface, but they are often designed to accommodate more than one interface.

The 8086 receives bus requests through its HOLD pin and issues grants from its hold acknowledge (HLDA) pin. A request is made when a potential master sends a 1 to the HOLD pin. Normally, after the current bus cycle is complete the 8086 will respond by putting a 1 on the HLDA pin. When the requesting device receives this grant signal it becomes the master. It will remain master until it drops the signal to the HOLD pin, at which time the 8086 will drop the grant on the HLDA pin. One exception to the normal sequence is that if a word which begins at an odd address is being accessed, then two bus cycles are required to complete the transfer and a grant will not be issued until after the second bus cycle.


Q4. Write short notes on (i) programmed I/O and (ii) Interrupt I/O

Programmed I/O

Programmed I/O consists of continually examining the status of an interface and performing an I/O operation with the interface when its status indicates that it has data to be input or its data-out buffer register is ready to receive data from the CPU. A typical programmed input operation is flowcharted in Fig.7.3.
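Fig. 7.3 is not reproduced in this transcript; a hedged sketch of the polling loop it depicts is given below (the port addresses and the position of the ready bit are assumptions made for illustration):

STATUS_PORT EQU 0F1H      ; hypothetical address of the interface status port
DATA_PORT   EQU 0F0H      ; hypothetical address of the interface data port

POLL: IN AL, STATUS_PORT  ; read the interface status register
      TEST AL, 01H        ; is the "data ready" bit (assumed to be bit 0) set?
      JZ POLL             ; no - keep examining the status
      IN AL, DATA_PORT    ; yes - input the character into AL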

As a more complete example, suppose that a line of characters is to be input from a terminal to an 82-byte array beginning at BUFFER until a carriage return is encountered or more than 80 characters are input. If a carriage return is not found in the first 81 characters, then the message “BUFFER OVERFLOW” is to be output to the terminal; otherwise, a line feed is to be automatically appended to the carriage return.

Interrupt I/O

Even though programmed I/O is conceptually simple, it can waste a considerable amount of time while waiting for ready bits to become active. In the above example, if the person typing on the terminal could type 10 characters per second and only 10 μs is required for the computer to input each character, then approximately

(99,990 / 100,000) x 100% = 99.99%

of the time is not being utilized. This is all right if no other processing could be done while the input is taking place, but if other processing is unnecessarily delayed, then a different approach is needed.

An interrupt is an event that causes the CPU to initiate a fixed sequence, known as an interrupt sequence. Before an 8086 interrupt sequence can begin, the currently executing instruction must be completed, unless the current instruction is a HLT or WAIT instruction.


For a prefixed instruction, because the prefix is considered as part of the instruction, the interrupt request is not recognized between the prefix and the instruction. In the case of the REP prefix, the interrupt request is recognized after the primitive operation following the REP is completed, and the return address is the location of the REP prefix. For MOV and POP instructions in which the destination is a segment register, an interrupt request is not recognized until after the instruction following the MOV or POP instruction is executed. Provided that the segment register is filled first, this allows the contents of both a segment register and a pointer to be changed without interruption.


Q5. Explain about the semaphore operations.

Semaphore Operations

In multiprogramming systems, processes are allowed to share common software and data resources as well as hardware resources. In many situations, a common resource may be accessed and updated by only one process at a time and other processes must wait until the one currently using the shared resource is finished with it. A resource of this type, which is commonly referred to as a serially reusable resource, must be protected from being simultaneously accessed and modified by two or more processes. A serially reusable resource may be a hardware resource (such as a printer, card reader, or tape drive), a file of data, or a shared memory area.

For example, let us consider a personnel file that is shared by processes 1 and 2. Suppose that process 1 performs insertions, deletions, and changes, and process 2 puts the file in alphabetical order according to last names. If accessed sequentially, this file would either be updated by process 1 and then sorted by process 2, or vice versa. However, if both processes were allowed to access the file at the same time, the results would be unpredictable and almost certainly incorrect. The solution to this problem is to allow only one process at a time to enter its critical section of code, i.e., that section of code that accesses the serially reusable resource.

Preventing two or more processes from simultaneously entering their critical sections for accessing a shared resource is called mutual exclusion. One way to attain mutual exclusion is to use flags to indicate when the shared resource is already in use. To examine how this is done and some of the problems associated with this approach let us consider the possibility of using only one flag.
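A hedged sketch of the single-flag idea on the 8086, using XCHG with an explicit LOCK prefix so that reading and setting the flag happen in one indivisible bus operation (the flag name and the convention 1 = in use, 0 = free are assumptions made for illustration):

FLAG DB 0                   ; 0 = shared resource free, 1 = in use

ACQUIRE: MOV AL, 1
         LOCK XCHG AL, FLAG ; atomically set the flag and fetch its old value
         TEST AL, AL        ; was it already 1?
         JNZ ACQUIRE        ; yes - another process is in its critical section, keep waiting
         ; ... critical section: access the serially reusable resource ...
         MOV FLAG, 0        ; finished - release the resource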


Q6. Explain what virtual memory is.

Virtual Memory

A more sophisticated memory management scheme can be achieved by the hardware dynamic address translation design illustrated in Fig. 8.6. For each memory reference, the logical address output by the CPU is translated into the physical address, which is the address sent to the memory by the memory management hardware. Logical addresses are the addresses that are generated by the instructions according to the addressing modes.

Fig. 8.6: Dynamic Address Translation

Because the logical addresses may be different from the physical addresses, the user can design a program in a logical space, also called a virtual space, without consideration for the limitations imposed by the real memory. This provides two major advantages. First, by dividing a program into several pieces, each mapped into an area of real memory, a program need not occupy a contiguous memory section. Therefore, memory fragmentation is reduced and less memory space is wasted. Second, it is not necessary to store the entire program in memory while it is executing. Whenever a portion of the program that is not currently in memory is referenced, the operating system can suspend the program, load in the required section of code, and then resume the program’s execution. This allows a program’s size to be larger than the actual memory available. In other words, a user “virtually” has more memory to work with than actually exists.

With address translation hardware, the program is divided into segments according to the logical structure of the program and the resulting memory management scheme is called segmentation. When using segmentation, each logical address consists of two fields, the segment number and the offset within the segment. The number of bits representing the segment number governs the maximum number of segments allowed in a program, and the number of bits allocated to the offset specifies the maximum segment size. For example, if the segment number and offset have m and n bits, respectively, a program may have up to 2^m segments, with each segment having a maximum size of 2^n bytes, thus providing each user with a virtual storage of 2^(m+n) bytes.

Fig. 8.7 illustrates the mapping of logical addresses into physical addresses.

Fig. 8.7: Segmentation Scheme

The segment number S in a logical address is used as an index into the segment table, which returns the beginning physical address X of the referenced segment. This address is added to the offset L to form the physical address of the memory location. Because each job may be assigned a separate area in the segment table, the base address of that section of the segment table that is associated with the currently executing job is stored in a register called the segment table register. The index S is made relative to the segment table register. When the system switches from one job to another, the segment table register is updated to point to a new section of the segment table. Because the address translation must be performed for every memory reference, either the entire segment table or that portion of the table containing the beginning addresses of the segments that are currently in use must be stored in registers which are part of the memory management hardware.

Each entry in the segment table is referred to as a segment descriptor. Depending on the particular implementation, a segment descriptor may include attributes in addition to the beginning segment address. The most common of these additional attributes are the:

Status Field-Indicates whether or not the referenced segment is in the memory.

Segment Length Field-Indicates the size of the segment.

Protection Field-Provides protection against unauthorized reading, writing, or execution.

Reference Field-Provides useful information in determining which segment is to be replaced.

Change Field-Indicates whether or not the segment has been modified since being brought into memory.


Q8. Explain the 8288 Bus controller.

For reasons of simplicity, flexibility, and low cost, most microcomputers, including those involving multiprocessor configurations, are built around a primary system bus which connects all of the major components in the system. In order to obtain a foundation while designing its products, a microcomputer manufacturer makes assumptions about the bus that is to be used to connect its devices together. Frequently, these assumptions become formalized and constitute what is referred to as a bus standard.

The Intel MULTIBUS has gained wide industrial acceptance and several manufacturers offer MULTIBUS-compatible modules. This bus is designed to support both 8-bit and 16-bit devices and can be used in multiprocessor systems in which several processors can be masters. At any point in time, only two devices may communicate with each other over the bus, one being the master and the other the slave. The master/slave relationship is dynamic, with bus allocation being accomplished through the bus allocation (i.e., request/grant) control signals. The MULTIBUS has been physically implemented on an etched backplane board which is connected to each module using two edge connectors, denoted P1 and P2, as shown in Fig. 9.13. The connector P1 consists of 86 pins which provide the major bus signals, and P2 is an optional connector consisting of 60 auxiliary lines, primarily used for power failure detection and handling.

Fig. 9.13: Illustration of a module being plugged into MULTIBUS


The P1 lines can be divided into the following groups according to their functions:

1. Address lines.

2. Data lines.

3. Command and handshaking lines.

4. Bus access control lines.

5. Utility lines.

The MULTIBUS has 20 address lines, labeled ADR0/ through ADR13/, where the numeric suffix represents the address bit in hexadecimal. The address lines are driven by the bus master to specify the memory location or I/O port being accessed. The MULTIBUS standard calls for all single bytes to be communicated over only the lower 8 bits of the bus; therefore, any 16-bit interface must include a swap byte buffer so that only the lower data lines are used for all byte transfers. (It should be pointed out that because an 8086 expects a byte to be put on the high-order byte of the bus when BHE is active, one may want to permit nonstandard MULTIBUS transfers between memory and an 8086.)

The two inhibit signals, INH1/ (inhibit RAM) and INH2/ (inhibit ROM), are provided for overlaying RAM, ROM, and auxiliary ROM in a common address space. For example, a bootstrap loader may be stored in an auxiliary ROM and a monitor in a ROM. Because the loader is needed only after a reset, INH1/ and INH2/ could both be activated while the loader is executing. Then, when the monitor is in control, INH2/ could be raised while INH1/ remains low. If control is passed to the user, INH1/ and INH2/ could both be deactivated, thus allowing the RAM to fill the entire memory space during normal operation.

There are 16 bidirectional data lines (DAT0/ through DATF/), only eight of which are used in an 8-bit system. Data transfers on the MULTIBUS are accomplished by handshaking signals in a manner similar to that described in the preceding sections. The memory read (MRDC/), memory write (MWTC/), I/O read (IORC/), and I/O write (IOWC/) lines are defined to be the same as they were in the discussion of the 8288 bus controller. There is an acknowledge (XACK/) signal which serves the same purpose as the READY signal in the discussion of the bus control logic, i.e., to verify the end of a transfer. In a general setting it may be received by any bus master. Because a master must wait to be notified of the completion of a transfer, the duration of a bus cycle varies depending on the speed of the bus master and the slave. This asynchronous nature enables the system to handle slow devices without penalizing fast devices.

Q9. Draw the block diagram of 8087.

The 8087 Numeric Data Processor

The 8087 numeric data processor (NDP) is specially designed to perform arithmetic operations efficiently. It can operate on data of the integer, decimal, and real types, with lengths ranging from 2 to 10 bytes. The instruction set not only includes various forms of addition, subtraction, multiplication, and division, but also provides many useful functions, such as taking the square root, exponentiation, taking the tangent, and so on. As an example of its computing power, the 8087 can multiply two 64-bit real numbers in about 27 μs and calculate a square root in about 36 μs. If performed by the 8086 through emulation, the same operations would require approximately 2 ms and 20 ms, respectively. The 8087 provides a simple and effective way to enhance the performance of an 8086-based system, particularly when an application is primarily computational in nature.

A pin diagram of the 8087 is shown in Fig. 10.3.


Fig. 10.3: 8087 pin diagram

The address/data, status, ready, reset, clock, ground, and power pins of the NDP have the same pin positions as those assigned to the 8086/8088. Among the remaining eight pins, four of them are not used. The other pins are connected as follows: the BUSY pin to the host’s TEST pin; the RQ/GT0 pin to the host’s RQ/GT0 or RQ/GT1 pin; the INT pin to the interrupt management logic (assuming the 8087 is enabled for interrupts); and the 8087’s RQ/GT1 pin could be connected to the bus request/grant pin of an independent processor such as an 8089. This simple interface allows an existing maximum mode 8086-based system to be easily upgraded by replacing the original CPU with a piggyback module, which consists of an 8086/8088 and an 8087.


Q10. Explain why the processor utilization rate can be improved in a multiprocessor system by an instruction queue.

Although the 8086 is a powerful single-chip microprocessor, its instruction set is not sufficient to effectively satisfy some complex applications. For such applications, the 8086 must be supplemented with coprocessors that extend the instruction set in directions that will allow the necessary special computations to be accomplished more efficiently. For example, the 8086 has no instructions for performing floating point arithmetic, but by using an Intel 8087 numeric data processor as a coprocessor, an application that heavily involves floating point calculations can be readily satisfied.

It will be seen that, except for the coprocessor itself, a coprocessor design does not require any extra logic other than that normally needed for a maximum mode system. Both the CPU and the coprocessor execute their instructions from the same program, which is written in a superset of the 8086 instruction set. Other than possibly fetching an operand for the coprocessor, the CPU does not need to take any further action if the instruction is executed by the coprocessor.

The interaction between the CPU and the coprocessor when an instruction is executed by the coprocessor is depicted in Fig.10.1. An instruction to be executed by the coprocessor is indicated when an escape (ESC) instruction appears in the program sequence. Only the host CPU can fetch instructions, but the coprocessor also receives all instructions and monitors the instruction sequencing of the host. An ESC instruction contains an external operation code, indicating what the coprocessor is to do and is simultaneously decoded by both the coprocessor and the host. At this point the host may simply go on to the next instruction or it may fetch the first word of a memory operand for the coprocessor and then go on to the next instruction. If the CPU fetches the first word of an operand, the coprocessor will capture the data word and its 20-bit physical address.


Fig. 10.1: Synchronization between 8086 and its coprocessor

For a source operand that is longer than one word, the coprocessor can obtain the remaining words by stealing bus cycles. If the memory operand specified in the ESC instruction is a destination, the coprocessor ignores the data word fetched by the host and later the coprocessor will store the result into the captured address. In either case, the coprocessor will send a busy (high) signal to the host’s TEST pin and, as the host continues processing the instruction stream, the coprocessor will perform the operation indicated by the code in the ESC instruction. This parallel operation continues until the host needs the coprocessor to perform another operation or must have the results from the current operation. At that time the host should execute a WAIT instruction and wait until its TEST pin is activated by the coprocessor. The WAIT instruction repeatedly checks the TEST pin until it becomes activated and then executes the next instruction in sequence.
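A hedged sketch of this synchronization from the host program’s point of view, written with 8087 mnemonics (which the assembler encodes as ESC instructions); the variables X, Y and Z are hypothetical 64-bit reals declared elsewhere:

FLD QWORD PTR X     ; ESC: the 8087 loads X
FADD QWORD PTR Y    ; ESC: the 8087 adds Y while the 8086 runs ahead
FSTP QWORD PTR Z    ; ESC: the 8087 will store the result into Z
FWAIT               ; WAIT: the 8086 waits on its TEST pin until the 8087
                    ; is no longer busy, so Z is valid before it is used
MOV AX, WORD PTR Z  ; the host may now safely read (part of) the result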

An ESC assembler language instruction has two operands. The first of the two operands indicates the external opcode, which determines the action to be taken by the coprocessor. If the second operand specifies a memory location, then, as explained above, the 8086 will fetch a word from this location for the coprocessor and may pass the coprocessor an address for storing a result. If the second operand is a register, the register address is treated as part of the external opcode and the CPU does nothing.

The interfacing of a coprocessor to a host CPU is shown in Fig. 10.2. Both the host and the coprocessor share the same clock generator and bus control logic. It is possible to have two coprocessors connected to the same host CPU. When this is done, the coprocessors must be assigned distinct subsets of the set of external opcodes and each coprocessor must be able to recognize and execute the members of its subset. For the most part, parallel lines can be used to connect the host to its coprocessors. For the request/grant connections of two coprocessors, one could use the RQ/GT0 pin on the 8086 and the other could use the RQ/GT1 pin. The two coprocessors would be connected to separate 8259A interrupt request pins.

In order for a coprocessor to determine when the host is executing an ESC instruction, it must monitor the host CPU’s status on the S2-S0 lines and the AD15-AD0 lines for fetched instructions. Because instructions are prefetched by the CPU, an ESC instruction might not be executed immediately or, if it is preceded by a branch instruction, it might not be executed at all.


Fig. 10.2: Coprocessor configuration

The coprocessor must track the instruction stream by monitoring the queue status bits QS1 and QS0 and maintaining an instruction queue identical to that of the host. If the queue’s status is 00, the coprocessor does nothing, but if it is 01, it will compare the five MSBs of the first byte in the queue to 11011. If there is a match, then an ESC instruction is ready to be executed and, assuming that the coprocessor recognizes the external opcode, it will perform the indicated operation; otherwise, this byte is ignored and is deleted from the queue. The queue status 10 indicates that the queue in the host is being flushed and, therefore, causes the queue in the coprocessor to also be emptied. The 11 status combination indicates that the first byte in the queue is not the first byte of an instruction, and this byte is looked at by the coprocessor only if it is known to be part of an ESC instruction. If not part of an ESC instruction, this byte is ignored.

The coprocessor should be designed so that when an error occurs during the decoding or execution of an ESC instruction, it will send out an interrupt request (which is normally sent to an 8259A). The coprocessor should also be designed so that it can steal bus cycles by making bus requests through one of the host’s RQ/GT pins when additional data must be read from or stored in memory. Last, the coprocessor must be able to apply a high signal to the host’s TEST pin while it is busy.