Page 1: Information Technology

By

NASIR ALI SHAH, Vice Principal

“Only as high as I reach can I grow, only as far as I seek can I go, only as deep as I look can I see, only as much as I dream can I be.”


The Zenith Institute of Computer Science

Shikarpur
Affiliated with S.B.T.E Karachi

Presentation

With reference to S.B.T.E Karachi

For

Diploma in Information Technology

Page 2: Information Technology


1. Computer ........................................ 4
2. Characteristics of Computer ..................... 4
3. Hardware ........................................ 4
4. Types of Hardware ............................... 5
   a. Input Devices ................................ 5
   b. Processing Devices ........................... 6
   c. Output Devices ............................... 6
   d. Storage Devices .............................. 7
   e. Memory Devices ............................... 7
   f. Communication Devices ........................ 8
5. Software ........................................ 8
   a. System Software .............................. 8
   b. Application Software ......................... 8
6. Information Technology .......................... 9
7. Elements of Information Technology .............. 9
   a. Data and Information ......................... 9
   b. Hardware and Software ........................ 9
   c. Liveware ..................................... 9
   d. Procedures ................................... 9
8. Communication Technology ........................ 9
   a. Simplex ...................................... 9
   b. Half-duplex .................................. 9
   c. Full-duplex .................................. 9
9. Computer Technology ............................. 9
10. Technological Convergence ...................... 10
11. Role of Computer in Society .................... 10
12. Role of Computer in Business ................... 10
13. Reasons to Use Computer ........................ 10
14. Brief History of Computer ...................... 11
    a. In the Beginning ............................ 11
    b. Charles Babbage ............................. 11
    c. Use of Punched Cards by Hollerith ........... 11
    d. Electronic Digital Computers ................ 12
    e. The Modern Stored Program EDC ............... 12
    f. Advances in the 1950's ...................... 13
    g. Advances in the 1960's ...................... 13
    h. Most Recent Advances ........................ 14
15. Types of Computers ............................. 14
    a. Analogue Computers .......................... 14
    b. Digital Computers ........................... 14
    c. Hybrid Computers ............................ 15
16. Classification of Digital Computers ............ 15
    a. Supercomputer ............................... 15
    b. Mainframe ................................... 15
    c. Minicomputer ................................ 15
    d. Workstation ................................. 15
    e. Personal Computer ........................... 15
    f. Notebook Computer ........................... 16
    g. Laptop Computer ............................. 16
    h. Hand-held Computer .......................... 16
    i. Palmtop ..................................... 16
    j. PDA ......................................... 16
17. Number System - The Definition ................. 17
    a. Decimal System .............................. 17
    b. Binary System ............................... 17
    c. Octal Number System ......................... 18
    d. Hexadecimal System .......................... 18
    e. Converting from Decimal System to Any Other . 18
    f. Converting from Any Other to Decimal ........ 19
    g. Converting Hexadecimal into Octal ........... 19
18. System Software ................................ 20


Page 3: Information Technology


    a. The Operating System ........................ 20
    b. Network Operating System .................... 20
    c. Utility Software ............................ 20
19. Features of Operating System ................... 20
    a. Memory Management ........................... 20
    b. User Interface .............................. 20
    c. Multiprocessing ............................. 21
    d. Multitasking ................................ 21
    e. Multiprogramming ............................ 21
20. Operating System Types ......................... 21
    a. Single-task Operating System ................ 21
    b. Multitasking Operating Systems .............. 21
21. Utility Programs ............................... 22
22. What is a Virus? ............................... 22
    a. Virus Types ................................. 22
23. Application Software ........................... 23
24. Different Types of Packages .................... 23
    a. Word Processing Software .................... 23
    b. Database Software ........................... 23
    c. Spreadsheet Software ........................ 23
    d. Multimedia Software ......................... 23
    e. Presentation Software ....................... 23
    f. Communication Software ...................... 23
    g. Custom Software ............................. 23
25. Shareware ...................................... 23
26. Freeware ....................................... 23
27. Public Domain .................................. 23
28. Copyright ...................................... 24
29. Programming Languages .......................... 24
    a. Low Level Programming Language .............. 24
    b. High Level Programming Language ............. 24
30. Translator ..................................... 24
    a. Assembler ................................... 24
    b. Compiler .................................... 24
    c. Interpreter ................................. 24
31. Generations of Programming Languages ........... 25
32. Information System ............................. 26
33. Types of Information System in Current Research 26
    a. Document Management System (DMS) ............ 26
    b. Content Management System (CMS) ............. 26
    c. Library Management System (LMS) ............. 26
    d. Records Management System (RMS) ............. 26
    e. Digital Imaging System (DIS) ................ 26
    f. Learning Management System (LMS) ............ 26
    g. Geographic Information System (GIS) ......... 26
34. Old Version Information Systems ................ 26
    a. Transaction Processing Systems .............. 27
    b. Management Information Systems .............. 27
    c. Decision Support Systems .................... 27
35. Organization of Data ........................... 28
36. Data Storage Hierarchy ......................... 28
37. Overview of CAD/CAM ............................ 28
    a. CAD ......................................... 28
    b. CAM ......................................... 29
38. Virtual Reality ................................ 29
39. Expert System .................................. 29
40. Robotics ....................................... 29


Page 4: Information Technology


Computer - The Definition: A computer is an electronic device that operates under the control of a set of instructions stored in its memory unit. A computer accepts data from an input device and processes it into useful information, which it displays on its output device. A computer is really a collection of hardware and software components that help you accomplish many different tasks. Hardware consists of the computer itself and any equipment connected to it; software is the set of instructions that the computer follows in performing a task.

In other words, a computer is a device that accepts input (in the form of digitized data and instructions) and manipulates it to produce a result, based on a program or sequence of instructions that specifies how the input is to be processed. Complex computers also include the means for storing data (including programs and various types of files) for some necessary duration. A program may be invariable and built into the computer (as logic circuitry), or different programs may be loaded into the computer's storage and started by an administrator or user for different purposes.

Characteristics of computer:

Speed: Computers work at very high speed and are much faster than humans. A second is a very long time for a computer; a computer can perform billions of calculations in a second. The time a computer takes to perform an operation is called its processing speed. Processor clock speed is measured in megahertz (MHz) or gigahertz (GHz).
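As a rough, hedged illustration of measuring speed, the sketch below times how many simple additions an interpreted Python loop completes in one second. The function name and the one-second window are arbitrary choices, and interpreted Python runs far more slowly than the billions of hardware operations per second mentioned above.

# Illustrative sketch: count how many additions a Python loop performs
# in roughly one second. The raw hardware executes billions of
# instructions per second; interpreted Python is far slower, so this
# only demonstrates the idea of measuring processing speed.
import time

def additions_per_second(duration=1.0):
    count = 0
    start = time.time()
    while time.time() - start < duration:
        count = count + 1      # one simple addition per loop pass
    return count

print("Approximate additions per second:", additions_per_second())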

Storage: A computer can store a large amount of data permanently, and the user can access this data at any time. Any type of data can be stored: text, graphics, pictures, audio and video files are all stored easily. The storage capacity of computers is increasing rapidly.

Processing: A computer can process the given instructions. It can perform different types of processing such as addition, subtraction, multiplication and division. It can also perform logical operations, such as comparing two numbers to decide which one is larger.

Accuracy: Accuracy means providing results without any error. Computers can process large amounts of data and generate error-free results. A modern computer performs millions of operations in one second without any error.

Communication: Most computers today can communicate with other computers. We can connect two or more computers with a communication device such as a modem. These computers can then share data, instructions, and information; a set of connected computers is called a network.

Hardware:
Your PC (Personal Computer) is a system consisting of many components. Some of those components, like Windows XP and all your other programs, are software. The stuff you can actually see and touch, and would likely break if you threw it out a fifth-story window, is hardware.

Not everybody has exactly the same hardware, but those of you who have a desktop system, like the example shown in Figure 1, probably have most of the components shown in that figure. Those of you with notebook computers probably have most of the same components; only, in your case, the components are all integrated into a single book-sized portable unit.

The system unit is the actual computer; everything else is called a peripheral device. Your computer's system unit probably has at least one floppy disk drive and one CD or DVD drive, into which you can insert floppy disks and CDs. There's another disk drive, called the hard disk, inside the system unit, as shown in Figure 2. You can't remove that disk, or even see it, but it's there, and everything that's currently "in your computer" is actually stored on that hard disk. (We know this because there is no place else inside the computer where you can store information!)

The floppy drive and CD drive are often referred to as drives with removable media, or removable drives for short, because you can remove whatever disk is currently in the drive and replace it with another. Your computer's hard disk can store as much information as tens of thousands of floppy disks, so don't worry about running out of space on your hard disk any time soon.



Page 5: Information Technology


As a rule, you want to store everything you create or download on your hard disk. Use the floppy disks and CDs to send copies of files through the mail, or to make backup copies of important items.
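To put the "tens of thousands of floppy disks" claim in perspective, here is a quick back-of-the-envelope check; the 40 GB hard-disk size is an assumed, era-typical figure, not one given in the text.

# Back-of-the-envelope check (illustrative figures, not from the text):
# a 3.5-inch floppy holds about 1.44 MB; a 40 GB hard disk therefore
# holds roughly as much as tens of thousands of floppies.
floppy_mb = 1.44
hard_disk_gb = 40                       # assumed typical size of the era
hard_disk_mb = hard_disk_gb * 1024
print(round(hard_disk_mb / floppy_mb))  # about 28,000 floppy disks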

Types of Hardware Devices
Countless hardware devices are available in the market, but they fall into six common categories.

1. Input Devices: - The word "input" means "to enter", and in computing the term refers to any data or instruction given by the user to the computer. Any peripheral device used to give input to the computer is referred to as an input device. Many input devices have been introduced; the most commonly used are described below:

a. Mouse: - A mouse (plural mouses, mice, or mouse devices) is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse is an object held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as a wheel, which allow the user to perform various system-dependent operations, and extra buttons or features can add further control or dimensional input.

A mouse is used to perform the following operations:

i. Point: To point to an item means to move the mouse pointer so that it's touching the item.

ii. Click: Point to the item, then click (press and release) the left mouse button.

iii. Double-click: Point to the item, then click the left mouse button twice in rapid succession - click-click as fast as you can.

iv. Right-click: Point to the item, then click the right mouse button.

v. Drag: Point to an item, then hold down the left mouse button as you move the mouse. To drop the item, release the left mouse button.

vi. Right-drag: Point to an item, then hold down the right mouse button as you move the mouse. To drop the item, release the right mouse button.

b. Keyboard: - A keyboard is an input device, partially modeled after the typewriter keyboard, which uses an arrangement of buttons or keys that act as mechanical levers or electronic switches. A computer keyboard looks a lot like a typewriter keyboard but has added keys that provide many different features and enhancements. The keyboards sold today are called enhanced keyboards and have 101 to 106 individual keys. The layouts are basically the same, with the odd key situated differently. The keys of a keyboard fall into five categories:

i. Function Keys: F1, F2, F3 … F12.

ii. Alpha Numeric Keys: An area containing, A to Z, 0 to 9 and symbolic keys.

iii. Numeric Keypad: An area at the right-hand side with keys 0 to 9 and arithmetic operators.

iv. Navigation Keys: Arrow Keys, Home, End, PgUp and PgDn.

v. Special Keys: All other keys, like Esc, Space bar, Ctrl, Alt, Backspace etc.

c. Scanner: - In computing, a scanner is a device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. Common examples found in offices are variations of the desktop (or flatbed) scanner where the document is placed on a glass window for scanning. Hand-held scanners, where the device is moved by hand, have evolved from text scanning "wands" to 3D scanners used for industrial design, reverse engineering, test and measurement, orthotics, gaming and other applications. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical.

d. Microphone: - A microphone (also called a mic or mike) is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal. This device is commonly used to input sound into computer.

e. Digital Camera: - A digital camera is a camera that takes video or still photographs, or both, digitally by recording images via an electronic image sensor. Many compact digital still cameras can record sound and moving video as well as still photographs.


Page 6: Information Technology


Digital cameras can do things film cameras cannot: displaying images on a screen immediately after they are recorded, storing thousands of images on a single small memory device, recording video with sound, and deleting images to free storage space.

2. Processing Devices: - The word "process" means "to act upon given input", and devices that process input are called processing devices. When we give input to the computer, it acts as instructed. The processing hardware in a computer is responsible for storing and retrieving information; this information is processed by the Central Processing Unit (CPU), which takes the information input by the user, processes it by calculating, comparing and copying it, and saves the result to the computer's memory (RAM).

a. Processor: - The CPU, also called the processor, is the main component of the computer. The CPU is the brain of the computer: it gives the computer the ability to execute any given instruction and tells the computer how to control the flow of instructions. In a computer system, all major calculations and comparisons are made inside the CPU, and the CPU is also responsible for activating and controlling the operations of the other units of the computer system. The quality of the CPU inside a computer largely determines the quality of the computer. Basic parts of a processor - a CPU has the following main parts (a short illustrative sketch follows after this subsection):

i. ALU (Arithmetic and Logic Unit): This unit of the CPU performs arithmetic and logical operations. It gets data from the computer's memory and performs arithmetic and logical operations on it.

ii. Registers: A processor has its own memory inside it in the form of small cells; each memory cell is called a "register". Registers hold data temporarily while operations are performed on it. The number and size of the registers depend on the processor's design.

iii. Control Unit: This unit of the processor controls all the activities of the processor and also controls the input and output devices of the computer. It acts just like a police inspector directing traffic on a road: it obtains instructions from the program stored in main memory, interprets them (translates them into signals the computer understands), and issues signals that cause the other units of the computer to execute them.

b. Motherboard: - Strictly speaking, the motherboard is not a processing device, but it plays a vital role in processing input. A motherboard is the central printed circuit board (PCB) in many modern computers; it holds many of the crucial components of the system while providing connectors for other peripherals. The motherboard is sometimes alternatively known as the main board or system board. It carries the CPU, the BIOS ROM chip (Basic Input/Output System), and the CMOS setup information, and it has expansion slots for installing different adapter cards such as a video card, sound card, Network Interface Card, and modem.
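Here is the illustrative sketch promised above: a hedged, minimal model in Python of the division of labour between the registers, the ALU and the control unit. The instruction names, register names, and the alu function are invented for this example and do not correspond to any real instruction set.

# Minimal illustrative sketch of a toy processor (all names invented):
# the "control unit" fetches each instruction in turn, the registers
# hold operands temporarily, and the ALU performs the arithmetic or
# logical operation.
def alu(op, a, b):
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    if op == "CMP":                 # logical comparison, as described above
        return int(a > b)
    raise ValueError("unknown operation")

registers = {"R1": 0, "R2": 0, "R3": 0}

program = [
    ("LOAD", "R1", 7),              # put the value 7 into register R1
    ("LOAD", "R2", 5),              # put the value 5 into register R2
    ("ADD",  "R3", "R1", "R2"),     # R3 = R1 + R2
    ("CMP",  "R3", "R1", "R2"),     # R3 = 1 if R1 > R2 else 0
]

# The "control unit": fetch each instruction and issue it to the ALU
# or to the registers.
for instruction in program:
    op = instruction[0]
    if op == "LOAD":
        _, dest, value = instruction
        registers[dest] = value
    else:
        _, dest, src1, src2 = instruction
        registers[dest] = alu(op, registers[src1], registers[src2])

print(registers)   # {'R1': 7, 'R2': 5, 'R3': 1}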

3. Output Devices: - The word "output" means "result": after processing the input, the computer delivers the result through output devices. An output device can be defined as any peripheral that receives and/or displays output from a computer. Below are some examples of output devices commonly found on a computer.

a. Monitor: - The computer monitor is an output device that is part of your computer's display system. A cable connects the monitor to a video adapter (video card) installed in an expansion slot on your computer's motherboard. This system converts signals into text and pictures and displays them on a TV-like screen (the monitor). The computer sends a signal to the video adapter, telling it what character, image or graphic to display; the video adapter converts that signal into a set of instructions that tell the display device (monitor) how to draw the image on the screen. Two common types of monitor are:

i. CRT: - Short for Cathode-Ray Tube. In a CRT monitor, electron beams sweep across the screen, either interlaced or non-interlaced, hitting phosphor dots on the inside of the glass tube. The CRT monitor creates a picture out of many rows or lines of tiny colored dots. These are technically not the same thing as pixels, but the terms are often used interchangeably. The more lines of dots per inch, the higher and clearer the resolution.


Page 7: Information Technology


Therefore a 1024 x 768 resolution will be sharper than an 800 x 600 resolution, because the former uses more lines and so creates a denser, more detailed picture (a short worked example follows after the output-devices list).

ii. Flat Panel Display or LCD: - Thin screen displays found on all portable computers and becoming the new standard for desktop computers. Instead of using cathode-ray tube technology, flat-panel displays use liquid-crystal display (LCD) or similar technology, making them much lighter and thinner than a traditional monitor.

b. Printer: - An external hardware device responsible for taking computer data and generating a hard copy of it. Printers are among the most used peripherals and are commonly used to print text, images, and photos. Printers come in many types, but the most commonly used are:

i. Dot matrix: - The term "dot matrix" refers to the process of placing dots to form an image, the quality of the image being determined by the dots per inch. A dot matrix printer uses a print head that strikes an ink ribbon to place hundreds to thousands of little dots that form text and/or images. Today, dot matrix printers are not commonly used because of their low print quality compared with inkjet printers and later printer technologies.

ii. Inkjet printer: - A popular type of printer for home computer users that prints by spraying streams of quick-drying ink onto paper. The ink is stored in disposable ink cartridges, and often a separate cartridge is used for each of the major colors: usually cyan, magenta, yellow, and black (CMYK).

c. Speaker: - A hardware device connected to a computer's sound card that outputs the sounds generated by the card.

d. Plotter: - A plotter is a vector graphics printing device, connected to a computer, that prints graphical plots. It is similar to a printer but uses a pen, pencil, marker or other writing tool to draw a design. Plotters are often used for schematics, CAD drawings, and other large print jobs.

e. Projector: - A hardware device that enables an image, such as a computer screen, to be projected onto a flat surface. These devices are commonly used in meetings and presentations, as they allow a large image to be shown so everyone in the room can see it.
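Here is the worked example promised under the CRT entry: a quick, hedged check of why 1024 x 768 looks sharper than 800 x 600 on the same screen, simply by counting the dots each mode draws.

# Worked example: a higher resolution packs more dots into the same
# screen area, so the picture is denser and looks sharper.
def total_pixels(width, height):
    return width * height

print(total_pixels(1024, 768))   # 786,432 dots
print(total_pixels(800, 600))    # 480,000 dots
# On the same physical screen, 1024 x 768 draws roughly 64% more dots
# than 800 x 600, which is why it appears sharper.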

4. Storage Devices: - In computer science, storage devices are defined as devices that store data permanently and can be read and written by end users. Storage plays a vital role in a computer, because all necessary instructions (software) are kept on storage devices. Many storage devices have been introduced, but the most commonly used are:

a. Hard Disk: - A hard disk is part of a unit, often called a "disk drive", "hard drive", or "hard disk drive", that stores and provides relatively quick access to large amounts of data on an electromagnetically charged surface or set of surfaces. Today's computers typically come with a hard disk that holds several billion bytes (gigabytes) of storage.

b. Compact Disk: - A Compact Disc (also known as a CD) is an optical disc used to store digital data. It was originally developed to store sound recordings exclusively, but later it also allowed the preservation of other types of data. Audio CDs have been commercially available since October 1982. In 2010, they remain the standard physical storage medium for audio.

c. DVD: - DVD, also known as Digital Versatile Disc or Digital Video Disc, is an optical disc storage format developed in 1995 by Philips, Sony, Toshiba, and Panasonic. Its main uses are video and data storage. DVDs have the same dimensions as compact discs (CDs) but store more than six times as much data.

d. Flash Memory: - Flash memory is a non-volatile computer storage that can be electrically erased and reprogrammed. It is a technology that is primarily used in memory cards and USB flash drives for general storage and transfer of data between computers and other digital products. It is a specific type of EEPROM (Electrically Erasable Programmable Read-Only Memory) that is erased and programmed in large blocks; in early flash the entire chip had to be erased at once.

5. Memory Devices: - Devices that cannot hold data permanently once power is removed, or that hold data permanently but cannot be modified by the user, are called memory devices. RAM and ROM are the most commonly used memory devices, explained below (a short sketch contrasting memory with storage follows after this list):


Page 8: Information Technology


a. RAM: - Random Access Memory (RAM) is "volatile", which means that the information stored in RAM is lost once power to it is removed.

b. ROM: - Read Only Memory (ROM) is non-volatile: its contents are not lost when power is removed. A ROM can be programmed at least once; classic ROMs have their "1"s and "0"s etched into the semiconductor at the time of manufacturing.
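As a small, hedged illustration of the memory-versus-storage distinction, the sketch below contrasts a value held only in RAM (gone when the program ends) with a value written to a file on the hard disk (which persists). The filename is made up for the example.

# Illustrative sketch: data kept only in RAM (a variable) disappears
# when the program exits, while data written to a storage device (a
# file on the hard disk) survives until it is deleted.
value_in_ram = "This text lives in volatile memory."

with open("saved_on_disk.txt", "w") as f:      # written to permanent storage
    f.write("This text survives a restart.")

with open("saved_on_disk.txt") as f:           # read back from the hard disk
    print(f.read())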

6. Communication Devices: - Communication refers to the sharing of ideas, and in computer science communication is carried out through various technologies. Devices used for communication are known as communication devices.

a. Modem: - Short for modulator-demodulator. A modem is a device or program that enables a computer to transmit data over, for example, telephone or cable lines. Computer information is stored digitally, whereas information transmitted over telephone lines is transmitted in the form of analog waves; a modem converts between these two forms (a toy sketch of this idea follows after this list).

b. N.I.C: - A network interface card, more commonly referred to as a NIC, is a device that allows computers to be joined together in a LAN, or local area network. Networked computers communicate with each other using a given protocol or agreed-upon language for transmitting data packets between the different machines, known as nodes. The network interface card acts as the liaison for the machine to both send and receive data on the LAN.
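Here is the toy sketch mentioned under the modem entry: a minimal, hedged illustration of the modulate/demodulate idea. The two tone frequencies and the bit string are arbitrary illustrative choices, not a description of any real modem standard.

# Toy sketch of modulation/demodulation (frequencies chosen for
# illustration): a real modem converts digital bits into analog tones
# for the phone line and back again; here each bit simply becomes one
# of two "tones".
TONE_FOR_BIT = {"0": 1070, "1": 1270}           # Hz, illustrative values
BIT_FOR_TONE = {v: k for k, v in TONE_FOR_BIT.items()}

def modulate(bits):
    """Digital bits -> sequence of analog tone frequencies."""
    return [TONE_FOR_BIT[b] for b in bits]

def demodulate(tones):
    """Analog tone frequencies -> digital bits."""
    return "".join(BIT_FOR_TONE[t] for t in tones)

signal = modulate("10110")
print(signal)                 # [1270, 1070, 1270, 1270, 1070]
print(demodulate(signal))     # '10110'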

Software
The set of instructions or programs that control a computer system, or that make it carry out our demands, is referred to as software. Software is the set of electronic instructions that lets the computer communicate with its users: it tells the hardware what to do and how to do it. If you were to think of a computer as a living being, the hardware would be the body that does things like seeing with eyes, lifting objects, and filling the lungs with air; the software would be the intelligence, interpreting the images that come through the eyes, telling the arms how to lift objects, and making the body fill the lungs with air. Countless software products are available in the market, but we can place them in two categories:

1. System Software: - System software helps run the computer hardware and the computer system itself. It is a collection of operating systems, device drivers, servers, windowing systems and utilities. System software lets an application programmer abstract away from the hardware, memory and other internal complexities of a computer.

a. Operating System: - An operating system is a software component of a computer system that is responsible for the management of the computer's various activities and the sharing of computer resources. It hosts the applications that run on a computer and handles the operations of the computer hardware.

b. Network Operating System (NOS): - A network operating system is an operating system that contains components and programs that allow a computer on a network to serve requests from other computers for data and to provide access to other resources such as printers and file systems.

c. Utility Software: - Utility software is a kind of system software designed to help analyze, configure, optimize and maintain the computer. A single piece of utility software is usually called a utility or tool.

2. Application Software: - Application software enables end users to accomplish specific tasks. Business software, databases and educational software are some forms of application software; word processors, dedicated to specialized tasks performed by the user, are another example. Application software falls into two broad categories.

a. Languages: - A programming language is an artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine, to express algorithms precisely, or as a mode of human communication.

b. Packages: - A software package is computer software packaged in an archive format to be installed by a package management system or a self-contained installer. Such software is also known as ready-made software, used to solve a specific task or meet a particular demand. Packages come in different types, such as word processors, spreadsheets and database management systems.


Page 9: Information Technology


Information Technology - The Definitions
Information technology (IT) is "the study, design, development, implementation, support or management of computer-based information systems, particularly software applications and computer hardware." IT deals with the use of electronic computers and computer software to convert, store, protect, process, transmit, and securely retrieve information.

Today, the term information technology has ballooned to encompass many aspects of computing and technology, and it has become very recognizable. IT professionals perform a variety of duties that range from installing applications to designing complex computer networks and information databases. A few of the duties IT professionals perform include data management, networking, engineering computer hardware, database and software design, and the management and administration of entire systems.

When computer and communications technologies are combined, the result is information technology, or "infotech". Information technology is a general term for any technology that helps to produce, manipulate, store, communicate, and/or disseminate information. When speaking of information technology (IT) as a whole, then, the use of computers and of information go hand in hand.

Elements of Information Technology
An element is a constituent part of information technology; the elements involved in the field of information technology are given below:

1. Data and Information: - The term data means groups of values that represent the qualitative or quantitative attributes of a variable or set of variables. Data (the plural of "datum", which is seldom used) are typically the results of measurements and can be the basis of graphs, images, or observations of a set of variables. Data are often viewed as the lowest level of abstraction from which information and knowledge are derived, while information is processed data that conveys complete meaning (a tiny example follows after this list).

2. Hardware and Software: - These are defined above.

3. Liveware: - The term liveware was used in the computer industry as early as 1966 to mean "computer people". It can also be defined as any user of a computer or machine; the user may be a human or another machine.

4. Procedures: - A procedure is a specified series of actions or operations which have to be executed in the same manner in order to always obtain the same result under the same circumstances (for example, emergency procedures). Less precisely, the word can indicate a sequence of activities, tasks, steps, decisions, calculations and processes that, when undertaken in the sequence laid down, produces the described result, product or outcome. A procedure usually induces a change, and procedures are central to the scientific method.
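As a small, hedged illustration of item 1 above, the sketch below turns a list of raw readings (data) into a summary statement (information); the temperature figures are invented for the example.

# Tiny example of the data/information distinction (figures invented):
# the raw readings are "data"; the processed summary is "information".
temperatures = [29, 31, 30, 33, 32]                 # raw data (measurements)

average = sum(temperatures) / len(temperatures)     # processing step
print(f"Average temperature this week: {average:.1f} degrees")  # information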

Communication Technology
Communication is referred to as the sharing of ideas between two points, and for sharing ideas there must be a medium, because communication requires a sender, a message and a receiver. The technology used to share the content is known as communication technology.

Communication can take place in three ways, known as communication modes (a brief simulation follows after the three modes):

1. Simplex: - In simplex operation, a network cable or communications channel can only send information in one direction; it's a “one-way street”. Simplex operation is also used in special types of technologies, especially ones that are asymmetric. For example, one type of satellite Internet access sends data over the satellite only for downloads, while a regular dial-up modem is used for upload to the service provider. In this case, both the satellite link and the dial-up connection are operating in a simplex mode.

2. Half-duplex: - Half-duplex data transmission means that data can be transmitted in both directions on a signal carrier, but not at the same time. Technologies that employ half-duplex operation are capable of sending information in both directions between two nodes, but only one direction or the other can be utilized at a time. This is a fairly common mode of operation when there is only a single network medium (cable, radio frequency and so forth) between devices.

3. Full-duplex: - In telecommunication, duplex communication means that both ends of the communication can send and receive signals at the same time; full-duplex is the same thing. Full-duplex data transmission means that data can be transmitted in both directions on a signal carrier at the same time.
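Here is the simulation promised above: a minimal, hedged sketch in which a channel is just a pair of one-way message queues. The Channel class and its rules are invented for illustration, and only the simplex restriction is actually enforced in code.

# Simple simulation of the communication modes (not a real network API):
# a channel is modelled as a pair of one-way queues between nodes A and B.
from collections import deque

class Channel:
    def __init__(self, mode):
        self.mode = mode                      # "simplex", "half" or "full"
        self.a_to_b = deque()
        self.b_to_a = deque()

    def send(self, sender, message):
        if sender == "A":
            self.a_to_b.append(message)
        elif self.mode == "simplex":
            raise RuntimeError("simplex: B may only receive, never send")
        else:
            self.b_to_a.append(message)

link = Channel("simplex")
link.send("A", "download data")      # allowed: one direction only
try:
    link.send("B", "upload data")    # rejected on a simplex channel
except RuntimeError as err:
    print(err)

# This sketch enforces only the simplex rule; a half-duplex channel would
# additionally allow just one direction at a time, and a full-duplex
# channel would allow both directions at once.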

Computer Technology
Computer technology, or computing technology, is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems.


Page 10: Information Technology


It is frequently described as the systematic study of the algorithmic processes that create, describe and transform information. Computer technology has many sub-fields; some, such as computer graphics, emphasize the computation of specific results, while others, such as computational complexity theory, study the properties of computational problems. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on the challenges of making computers and computations useful, usable, and universally accessible to people.

Technological Convergence
Technological convergence is the trend of technologies to merge into new technologies that bring together a myriad of media. While historically a technology handled one medium or accomplished one or two tasks, through technological convergence devices are now able to present and interact with a wide array of media.

In the past, for example, each entertainment medium had to be played on a specific device. Video was played on a television by using a video player of some sort, music was played on a tape deck or compact disc player, radio was played on an AM/FM tuner, and video games were played through a console of some sort. Technological convergence in the last few years has resulted in devices that not only interact with the media they are primarily designed to handle, but also with a number of other formats. For example, the XBox video game console has as its primary purpose the playing of console games, but it is also able to play back video and music and to connect to the Internet. Similarly, most modern DVD players are capable not only of playing DVDs, but also of playing music CDs, displaying photos from photo CDs, playing encoded video in formats such as DIVX or VCD, and playing DVD music.

Role of Computer in Society
Computers have made life easier for the human race. Human beings today take for granted the great impact the computer has on their lives, making things easier, faster and more convenient. Computers have helped the world a lot and helped us take a large step into the future. Almost anything you use is run or made by computers: industry, cars, jets and many things you use in daily life. Computers are among the most important innovations in history; without computers the world would not be able to function in the manner that people are accustomed to today.

Before there were electronic computers, most of the tasks we now do with a computer were done in other ways. Computers today have replaced many of the roles that people once performed manually. This is both positive and negative: people are losing the jobs they once fulfilled and being replaced by this intelligent machine, but it is also creating new opportunities for technology in the workplace and new types of jobs involving the computer. The computer is thus a most useful machine and is used by most people.

Role of Computers in Business
Computers in business can help make the workplace better connected and more efficient, but they can also disturb business communication. The effect computer technology has on a business depends on how the technology is used in the workplace. Inside the workplace, computer technology should be used to aid, not substitute for, business communication: for example, use email to set up a meeting rather than using email to share important information in place of meetings. Computer technology gives businesses new marketing tools, such as websites and new mediums for advertising, and businesses can use these tools to expand their marketplace. The Internet can also serve as a distraction to workers; using the Internet at work for personal purposes, such as checking personal email, leads to lost time for companies. Whether computers have a positive or negative effect on a business depends on how management decides to implement them in the workplace.

Reasons to Use the Computer
There are many reasons to use a computer, seen from many different angles; simply put, different kinds of users have different reasons. For a student, the motive is educational: the computer can help in every respect, from a simple answer to a long essay, and from notes to every kind of book, provided the Internet is available; the student can also prepare notes, presentations and so on. For a businessman, the computer is more beneficial than almost any other source of knowledge, because with the help of the Internet we can reach every corner of the world for information about shares, products and raw materials.

From the end user to the professional, most people benefit from the computer. With the Internet we can learn most of the things we want, from writing an application to complicated composing, and from listening to a language to speaking and understanding it. Another major reason is that nowadays education itself is delivered through computers: take Virtual University as an example, which has campuses in different cities but teaches students virtually over the Internet. The list of reasons has no end, but what has been described is enough to understand why computers are used.


Page 11: Information Technology


Brief History of Computer

In the Beginning...
The history of computers starts about 2,000 years ago with the birth of the abacus, a wooden rack holding two horizontal wires with beads strung on them. When these beads are moved around according to programming rules memorized by the user, all regular arithmetic problems can be done. Another important invention of around the same time was the astrolabe, used for navigation. Blaise Pascal is usually credited with building the first digital computer in 1642; it added numbers entered with dials and was made to help his father, a tax collector. In 1671 Gottfried Wilhelm von Leibniz designed a computer that was built in 1694; it could add and, after changing some things around, multiply. Leibniz invented a special stepped-gear mechanism for introducing the addend digits, and this is still being used. The prototypes made by Pascal and Leibniz were not used in many places and were considered odd until a little more than a century later, when Thomas of Colmar (Charles Xavier Thomas) created the first successful mechanical calculator that could add, subtract, multiply, and divide. A lot of improved desktop calculators by many inventors followed, so that by about 1890 the range of improvements included:

Accumulation of partial results

Storage and automatic reentry of past results (A memory function)

Printing of the results

Each of these required manual installation. These improvements were mainly made for commercial users, and not for the needs of science.

Charles Babbage
While Thomas of Colmar was developing the desktop calculator, a series of very interesting developments in computers was started in Cambridge, England, by Charles Babbage (after whom the computer store "Babbage's", now GameStop, was named), a mathematics professor. In 1812 Babbage realized that many long calculations, especially those needed to make mathematical tables, were really a series of predictable actions that were constantly repeated, and he suspected that it should be possible to do them automatically.

He began to design an automatic mechanical calculating machine, which he called a difference engine, and by 1822 he had a working model to demonstrate. With financial help from the British government, Babbage started fabrication of a difference engine in 1823. It was intended to be steam powered and fully automatic, including the printing of the resulting tables, and commanded by a fixed instruction program. The difference engine, although of limited adaptability and applicability, was a great advance. Babbage continued to work on it for the next ten years, but in 1833 he lost interest because he thought he had a better idea: the construction of what would now be called a general-purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine. The design showed a lot of foresight, although this could not be appreciated until a full century later. The plans for this engine called for a decimal computer operating on numbers of 50 decimal digits (or words) and having a storage capacity (memory) of 1,000 such digits. The built-in operations were to include everything that a modern general-purpose computer would need, even the all-important conditional control transfer capability that would allow commands to be executed in any order, not just the order in which they were programmed. The Analytical Engine was to use punched cards (similar to those used in a Jacquard loom), which would be read into the machine from several different reading stations. The machine was supposed to operate automatically, by steam power, and require only one attendant. Babbage's computers were never finished. Various reasons are given for his failure, the most cited being the lack of precision machining techniques at the time. Another speculation is that Babbage was working on the solution of a problem that few people in 1840 really needed to solve. After Babbage there was a temporary loss of interest in automatic digital computers. Between 1850 and 1900 great advances were made in mathematical physics, and it came to be understood that most observable dynamic phenomena can be described by differential equations (which meant that most events occurring in nature can be measured or described in one equation or another), so that easy means for their calculation would be helpful. Moreover, from a practical view, the availability of steam power caused manufacturing (boilers), transportation (steam engines and boats), and commerce to prosper and led to a period of many engineering achievements. The designing of railroads and the making of steamships, textile mills, and bridges required differential calculus to determine such things as:

Center of gravity, center of buoyancy, moment of inertia, stress distributions.

Even the assessment of the power output of a steam engine needed mathematical integration. A strong need thus developed for a machine that could rapidly perform many repetitive calculations.

Use of Punched Cards by Hollerith
A step towards automated computing was the development of punched cards, which were first successfully used with computers in 1890 by Herman Hollerith and James Powers, who worked for the U.S. Census Bureau. They developed devices that could read the information that had been punched into the cards automatically, without human help. Because of this, reading errors were reduced dramatically, work flow increased, and, most importantly, stacks of punched cards could be used as easily accessible memory of almost unlimited size.


Page 12: Information Technology


Furthermore, different problems could be stored on different stacks of cards and accessed when needed. These advantages were seen by commercial companies and soon led to the development of improved punch-card computers from International Business Machines (IBM), Remington (yes, the same people that make shavers), Burroughs, and other corporations. These computers used electromechanical devices in which electrical power provided mechanical motion, like turning the wheels of an adding machine. Such systems included features to:

feed in a specified number of cards automatically; add, multiply, and sort; and feed out cards with punched results.

Compared with today's machines, these computers were slow, usually processing 50 to 220 cards per minute, each card holding about 80 decimal numbers (characters). At the time, however, punched cards were a huge step forward: they provided a means of I/O and memory storage on a huge scale. For more than 50 years after their first use, punched-card machines did most of the world's business computing and a considerable amount of the computing work in science.

Electronic Digital Computers
The start of World War II produced a large need for computing capacity, especially for the military. New weapons were made for which trajectory tables and other essential data were needed. In 1942, John P. Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC (Electronic Numerical Integrator And Computer). The size of ENIAC's numerical "word" was 10 decimal digits, and it could multiply two such numbers at a rate of 300 per second by finding the value of each product in a multiplication table stored in its memory. ENIAC was therefore about 1,000 times faster than the previous generation of relay computers. ENIAC used 18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched-card I/O, one multiplier, one divider/square-rooter, and 20 adders using decimal ring counters, which served as adders and also as quick-access (0.0002-second) read-write register storage. The executable instructions making up a program were embodied in the separate "units" of ENIAC, which were plugged together to form a "route" for the flow of information.

These connections had to be redone after each computation, together with presetting function tables and switches. This "wire your own" technique was inconvenient (for obvious reasons), and ENIAC could be considered programmable only with some latitude. It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is commonly accepted as the first successful high-speed electronic digital computer (EDC) and was used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being made that another physicist, John V. Atanasoff, had already used basically the same ideas in a simpler vacuum-tube device he had built in the 1930s while at Iowa State College. In 1973 the courts found in favor of the company using the Atanasoff claim.

The Modern Stored-Program EDC
Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, an abstract study of computation which showed that a computer should have a very simple, fixed physical structure and yet be able to execute any kind of computation by means of proper programmed control, without the need for any change to the unit itself. Von Neumann contributed a new awareness of how practical, fast computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential for future generations of high-speed digital computers and were universally adopted.

The stored-program technique involves many features of computer design and function besides the one it is named after. In combination, these features make very-high-speed operation attainable. A glimpse may be provided by considering what 1,000 operations per second means. If each instruction in a job program were used once in consecutive order, no human programmer could generate enough instructions to keep the computer busy. Arrangements must be made, therefore, for parts of the job program (called subroutines) to be used repeatedly in a manner that depends on how the computation goes. Also, it would clearly be helpful if instructions could be changed as needed during a computation to make them behave differently.

Von Neumann met these two needs by making a special type of machine instruction, called a conditional control transfer, which allowed the program sequence to be stopped and restarted at any point, and by storing all instruction programs together with data in the same memory unit, so that, when needed, instructions could be arithmetically changed in the same way as data. As a result of these techniques, computing and programming became much faster, more flexible, and more efficient. Regularly used subroutines did not have to be reprogrammed for each new program, but could be kept in “libraries” and read into memory only when needed. Thus, much of a given program could be assembled from the subroutine library.

The all – purpose computer memory became the assembly place in which all parts of a long computation were kept, worked on piece by piece, and put together to form the final results. The computer control survived only as an “errand runner” for the overall process. As soon as the advantage of these techniques became clear, they became a standard practice.

The first generation of modern programmed electronic computers to take advantage of these improvements were built in 1947. This group included computers using Random Access Memory (RAM), which is a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or punched-tape I/O devices and RAMs of 1,000-word capacity with access times of 0.5 microseconds (0.5 × 10^-6 seconds). Some of them could perform multiplications in 2 to 4 microseconds.

Physically, they were much smaller than ENIAC. Some were about the size of a grand piano and used only 2,500 electron tubes, far fewer than required by the earlier ENIAC. The first-generation stored-program computers needed a lot of maintenance, reached probably about 70 to 80% reliability of operation (ROO), and were used for 8 to 12 years. They were usually programmed in machine language (ML), although by the mid-1950s progress had been made in several aspects of advanced programming. This group of computers included EDVAC and UNIVAC, the first commercially available computers.

Advances in the 1950s
Early in the 1950s two important engineering discoveries changed the image of the electronic-computer field, from one of fast but unreliable hardware to an image of relatively high reliability and even more capability. These discoveries were the magnetic core memory and the transistor circuit element. These technical discoveries quickly found their way into new models of digital computers. RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960s, with access times of 2 to 3 milliseconds. These machines were very expensive to purchase or even to rent and were particularly expensive to operate because of the cost of expanding programming. Such computers were mostly found in large computer centers operated by industry, government, and private laboratories, staffed with many programmers and support personnel.

This situation led to modes of operation enabling the sharing of the high potential available. One such mode is batch processing, in which problems are prepared and then held ready for computation on a relatively cheap storage medium. Magnetic drums, magnetic-disk packs, or magnetic tapes were usually used. When the computer finishes with a problem, it “dumps” the whole problem (program and results) onto one of these peripheral storage units and starts on a new problem. Another mode for fast, powerful machines is called time-sharing. In time-sharing, the computer processes many jobs in such rapid succession that each job runs as if the other jobs did not exist, thus keeping each “customer” satisfied. Such operating modes need elaborate executive programs to attend to the administration of the various tasks.

Advances in the 1960s
In the 1960s, efforts to design and develop the fastest possible computer with the greatest capacity reached a turning point with the LARC machine, built for the Livermore Radiation Laboratories of the University of California by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a base memory of 98,000 words and multiplied in 10 microseconds. Stretch was made with several degrees of memory having slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100,000,000 words. During this period, the major computer manufacturers began to offer a range of capabilities and prices, as well as accessories such as:

consoles, card feeders, page printers, cathode-ray-tube displays, and graphing devices.

These were widely used in businesses for such things as:

Accounting, Payroll, Inventory control, Ordering Supplies, Billing

CPUs for these uses did not have to be very fast arithmetically and were usually used to access large numbers of records on file, keeping them up to date. By far the largest number of computer systems were sold for these simpler uses, such as in hospitals (keeping track of patient records, medications, and treatments given). They were also used in libraries, such as the National Medical Library retrieval system, and in the Chemical Abstracts System, where computer records on file now cover nearly all known chemical compounds.


More Recent Advances
The trend during the 1970s was, to some extent, a move away from very powerful, single-purpose computers and toward a larger range of applications for cheaper computer systems. Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, now used computers of smaller capability for controlling and regulating their operations.

In the 1960s, the problems in programming applications were an obstacle to the independence of medium-sized on-site computers, but gains in applications programming language technologies removed these obstacles. Applications languages were now available for controlling a great range of manufacturing processes, for operating machine tools with computers, and for many other things. Moreover, a new revolution in computer hardware was under way, involving shrinking of computer-logic circuitry and of components by what are called large-scale integration (LSI) techniques.

In the 1950s it was realized that “scaling down” the size of electronic digital computer circuits and parts would increase speed and efficiency and thereby improve performance, if only a way could be found to do this. About 1960, photo printing of conductive circuit boards to eliminate wiring became more developed. Then it became possible to build resistors and capacitors into the circuitry by the same process. In the 1970s, vacuum deposition of transistors became the norm, and entire assemblies, with adders, shifting registers, and counters, became available on tiny “chips.”

In the 1980’s, very large scale integration (VLSI), in which hundreds of thousands of transistors were placed on a single chip, became more and more common. Many companies, some new to the computer field, introduced in the 1970s programmable minicomputers supplied with software packages. The “shrinking” trend continued with the introduction of personal computers (PC’s), which are programmable machines small enough and inexpensive enough to be purchased and used by individuals. Many companies, such as Apple Computer and Radio Shack, introduced very successful PC’s in the 1970s, encouraged in part by a fad in computer (video) games. In the 1980s some friction occurred in the crowded PC field, with Apple and IBM keeping strong. In the manufacturing of semiconductor chips, the Intel and Motorola Corporations were very competitive into the 1980s, although Japanese firms were making strong economic advances, especially in the area of memory chips.

By the late 1980s, some personal computers were run by microprocessors that, handling 32 bits of data at a time, could process about 4,000,000 instructions per second. Microprocessors equipped with read-only memory (ROM), which stores constantly used, unchanging programs, now performed an increased number of process-control, testing, monitoring, and diagnosing functions, like automobile ignition systems, automobile-engine diagnosis, and production-line inspection duties. Cray Research and Control Data Inc. dominated the field of supercomputers, or the most powerful computer systems, through the 1970s and 1980s.

In the early 1980s, however, the Japanese government announced a gigantic plan to design and build a new generation of supercomputers. This new generation, the so-called “fifth” generation, uses new technologies in very large scale integration, along with new programming languages, and will be capable of amazing feats in the area of artificial intelligence, such as voice recognition.

Progress in the area of software has not matched the great advances in hardware. Software has become the major cost of many systems because programming productivity has not increased very quickly. New programming techniques, such as object-oriented programming, have been developed to help relieve this problem. Despite difficulties with software, however, the cost per calculation of computers is rapidly lessening, and their convenience and efficiency are expected to increase in the near future. The computer field continues to experience huge growth. Computer networking, computer mail, and electronic publishing are just a few of the applications that have grown in recent years. Advances in technology continue to produce cheaper and more powerful computers, offering the promise that in the near future computers or terminals will reside in most, if not all, homes, offices, and schools.

Types of Computer
From the first generation to the fourth generation, computers have been classified according to how they operate, that is, how they input, process, and output information. Below is a brief discussion of the various types of computers; by their electronic nature, computers can be divided into three categories. These computer types are:

Analogue Computers
Analogue computers use what are known as analogue signals, which are represented by a continuous set of varying voltages, and are used in scientific research centers, hospitals and flight centers.

In analogue computers, values are represented by physically measurable quantities, e.g. voltages. Analogue computers perform arithmetic and logical operations by measuring physical changes such as temperature or pressure.

Digital Computer
In digital computers, operations are based on electrical inputs that can attain two states: ON = 1 and OFF = 0. In a digital computer, data is represented by the digits 0 and 1, or the off state and the on state. A digital computer recognizes data by counting discrete signals (0 or 1); it is high-speed and programmable; it computes values and stores results. Having looked at the digital computer and how it functions, we move to the third computer type mentioned above.

Hybrid Computer
Hybrid computers are unique in the sense that they combine both analogue and digital features and operations. Hybrid computers operate by using digital-to-analogue and analogue-to-digital converters. By linking the two types of computer above you arrive at this new computer type called hybrid.

Classification of Digital Computer
Computers can be generally classified by size and power as follows:

Supercomputer
Supercomputer is a broad term for one of the fastest computers currently available. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculations (number crunching). For example, weather forecasting requires a supercomputer. Other uses of supercomputers include scientific simulations, (animated) graphics, fluid dynamic calculations, nuclear energy research, electronic design, and analysis of geological data (e.g. in petrochemical prospecting). Perhaps the best known supercomputer manufacturer is Cray Research.

Mainframe
Mainframe was a term originally referring to the cabinet containing the central processing unit or "main frame" of a room-filling Stone Age batch machine. After the emergence of smaller "minicomputer" designs in the early 1970s, the traditional big iron machines were described as "mainframe computers" and eventually just as mainframes. Nowadays a mainframe is a very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its machines.

Minicomputer
A minicomputer is a midsize computer. In the past decade, the distinction between large minicomputers and small mainframes has blurred, however, as has the distinction between small minicomputers and workstations. But in general, a minicomputer is a multiprocessing system capable of supporting up to 200 users simultaneously.

Workstation
A workstation is a type of computer used for engineering applications (CAD/CAM), desktop publishing, software development, and other types of applications that require a moderate amount of computing power and relatively high quality graphics capabilities. Workstations generally come with a large, high-resolution graphics screen, a large amount of RAM, built-in network support, and a graphical user interface. Most workstations also have a mass storage device such as a disk drive, but a special type of workstation, called a diskless workstation, comes without a disk drive. The most common operating systems for workstations are UNIX and Windows NT. Like personal computers, most workstations are single-user computers. However, workstations are typically linked together to form a local-area network, although they can also be used as stand-alone systems.

Personal computer
A personal computer can be defined as a small, relatively inexpensive computer designed for an individual user. In price, personal computers range anywhere from a few hundred pounds to over five thousand pounds. All are based on the microprocessor technology that enables manufacturers to put an entire CPU on one chip. Businesses use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet and database management applications. At home, the most popular uses for personal computers are playing games and, more recently, surfing the Internet.

Today, the world of personal computers is basically divided between Apple Macintoshes and PCs. The principal characteristics of personal computers are that they are single-user systems and are based on microprocessors. However, although personal computers are designed as single-user systems, it is common to link them together to form a network. In terms of power, there is great variety. At the high end, the distinction between personal computers and workstations has faded. High-end models of the Macintosh and PC offer the same computing power and graphics capability as low-end workstations by Sun Microsystems, Hewlett-Packard, and DEC.

Notebook computer
A notebook computer is an extremely lightweight personal computer. Notebook computers typically weigh less than 6 pounds and are small enough to fit easily in a briefcase. Aside from size, the principal difference between a notebook computer and a personal computer is the display screen. Notebook computers use a variety of techniques, known as flat-panel technologies, to produce a lightweight and non-bulky display screen. The quality of notebook display screens varies considerably. In terms of computing power, modern notebook computers are nearly equivalent to personal computers. They have the same CPUs, memory capacity, and disk drives. However, all this power in a small package is expensive. Notebook computers cost about twice as much as equivalent regular-sized computers. Notebook computers come with battery packs that enable you to run them without plugging them in. However, the batteries need to be recharged every few hours.

Laptop computer
A laptop is a personal computer designed for mobile use and small and light enough to sit on a person's lap while in use. A laptop integrates most of the typical components of a desktop computer, including a display, a keyboard, a pointing device (a touchpad, also known as a trackpad, and/or a pointing stick), speakers, and often a battery, into a single small and light unit. The rechargeable battery (if present) is charged from an AC adapter and typically stores enough energy to run the laptop for two to three hours in its initial state, depending on the configuration and power management of the computer.

Hand-held computer
A hand-held computer is a portable computer that is small enough to be held in one's hand. Although extremely convenient to carry, handheld computers have not replaced notebook computers because of their small keyboards and screens. The most popular hand-held computers are those that are specifically designed to provide PIM (personal information manager) functions, such as a calendar and address book. Some manufacturers are trying to solve the small keyboard problem by replacing the keyboard with an electronic pen. However, these pen-based devices rely on handwriting recognition technologies, which are still in their infancy. Hand-held computers are also called PDAs, palmtops and pocket computers.

Palmtop
A palmtop is a small computer that literally fits in your palm. Compared to full-size computers, palmtops are severely limited, but they are practical for certain functions such as phone books and calendars. Palmtops that use a pen rather than a keyboard for input are often called hand-held computers or PDAs. Because of their small size, most palmtop computers do not include disk drives. However, many contain PCMCIA slots in which you can insert disk drives, modems, memory, and other devices. Palmtops are also called PDAs, hand-held computers and pocket computers.

PDA
PDA is short for personal digital assistant, a handheld device that combines computing, telephone/fax, and networking features. A typical PDA can function as a cellular phone, fax sender, and personal organizer. Unlike portable computers, most PDAs are pen-based, using a stylus rather than a keyboard for input. This means that they also incorporate handwriting recognition features. Some PDAs can also react to voice input by using voice recognition technologies. The field of PDAs was pioneered by Apple Computer, which introduced the Newton MessagePad in 1993. Shortly thereafter, several other manufacturers offered similar products. To date, PDAs have had only modest success in the marketplace, due to their high price tags and limited applications. However, many experts believe that PDAs will eventually become common gadgets.

PDAs are also called palmtops, hand-held computers and pocket computers.


Number System – The Definition
There are many ways to represent the same numeric value. Long ago, humans used sticks to count, and later learned how to draw pictures of sticks in the ground and eventually on paper. So, the number 5 was first represented as: |    |    |    |    |    (five sticks).

Later on, the Romans began using different symbols for multiple numbers of sticks: |    |    |   still meant three sticks, but a V now meant five sticks, and an X was used to represent ten of them. Using sticks to count was a great idea for its time, and using symbols instead of real sticks was much better. A way of representing numbers is commonly known as a number system.

Decimal System
Most people today use decimal representation to count. In the decimal system there are 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. These digits can represent any value, for example: 754.

The value is formed by the sum of each digit multiplied by the base (in this case 10, because there are 10 digits in the decimal system) raised to the power of the digit's position (counting from zero at the rightmost digit):

754 = 7 × 10^2 + 5 × 10^1 + 4 × 10^0 = 700 + 50 + 4

The position of each digit is very important! For example, if you move the "7" to the end you get 547, which is a different value:

547 = 5 × 10^2 + 4 × 10^1 + 7 × 10^0 = 500 + 40 + 7

Important note: any number raised to the power of zero is 1 (by convention, even zero to the power of zero is taken as 1 here).
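The positional rule can also be automated. The following is a minimal Python sketch (an illustrative example, with an assumed helper name from_digits) that rebuilds 754 and 547 from their digits:

```python
# Rebuild a decimal value from its digits: each digit is multiplied by
# the base raised to the power of its position (rightmost position = 0).
def from_digits(digits, base=10):
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += digit * base ** position
    return value

print(from_digits([7, 5, 4]))  # 7*100 + 5*10 + 4*1 = 754
print(from_digits([5, 4, 7]))  # 5*100 + 4*10 + 7*1 = 547
```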

Binary System
Computers are not as smart as humans are (or not yet); it is, however, easy to build an electronic machine with two states: on and off, or 1 and 0.

Computers therefore use the binary system. The binary system uses 2 digits: 0, 1

And thus the base is 2.

Each digit in a binary number is called a BIT, 4 bits form a NIBBLE, 8 bits form a BYTE, two bytes form a WORD, and two words form a DOUBLE WORD (rarely used).

There is a convention to add "b" at the end of a binary number, or to enclose it in parentheses and give the base 2 as a subscript; this way we can determine that 101b is a binary number with a decimal value of 5.

The binary number 10100101b, or (10100101)2, equals the decimal value 165:
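The value can be checked with the same positional rule, using base 2. A short Python sketch:

```python
# Check 10100101b with the positional rule for base 2:
# 1*128 + 0*64 + 1*32 + 0*16 + 0*8 + 1*4 + 0*2 + 1*1 = 165
bits = "10100101"
value = sum(int(bit) * 2 ** pos for pos, bit in enumerate(reversed(bits)))
print(value)               # 165
print(int("10100101", 2))  # Python's built-in conversion agrees: 165
```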


Octal Number System

The octal, or base 8, number system is a common system used with computers. Because of its relationship with the binary system, it is useful in programming some types of computers.

It consists of eight digits: 0, 1, 2, 3, 4, 5, 6, 7

Look closely at the comparison of the binary and octal number systems in the table given below. You can see that one octal digit is the equivalent of three binary digits.
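Because of this three-bits-per-digit relationship, a binary number can be converted to octal by grouping its bits in threes from the right. A small Python sketch of the idea (the helper name binary_to_octal is just an assumed example):

```python
# Convert binary text to octal by grouping bits in threes from the right
def binary_to_octal(bits):
    bits = bits.zfill((len(bits) + 2) // 3 * 3)           # pad on the left to a multiple of 3
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(group, 2)) for group in groups)

print(binary_to_octal("110101"))  # 110 -> 6, 101 -> 5, so the result is 65 (octal)
```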

Hexadecimal System

Hexadecimal System uses 16 digits:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F

And thus the base is 16.

Hexadecimal numbers are compact and easy to read. It is very easy to convert numbers from the binary system to the hexadecimal system and vice versa: every nibble (4 bits) can be converted to a single hexadecimal digit using the table below.

There is a convention to add "h" at the end of a hexadecimal number, or to enclose it in parentheses and give the base 16 as a subscript; this way we can determine that 5Fh or (5F)16 is a hexadecimal number with a decimal value of 95.

We also add "0" (zero) at the beginning of hexadecimal numbers that begin with a letter (A..F), for example 0E120h.

The hexadecimal number 1234h is equal to the decimal value 4660:
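This can be verified with the positional rule for base 16; a brief Python check:

```python
# 1234h = 1*16**3 + 2*16**2 + 3*16**1 + 4*16**0
print(1 * 16**3 + 2 * 16**2 + 3 * 16**1 + 4 * 16**0)  # 4096 + 512 + 48 + 4 = 4660
print(int("1234", 16))                                 # built-in conversion agrees: 4660
```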

Converting from the Decimal System to Any Other
To convert from the decimal system to any other system, divide the decimal value by the base of the desired system, note the quotient, and keep the remainder; then divide the quotient again, and continue until the quotient becomes zero.

The remainders, read from last to first, then represent the value in that system.

Let's convert the value of 39 (base 10) to the hexadecimal system (base 16): 39 ÷ 16 = 2 remainder 7, and 2 ÷ 16 = 0 remainder 2, so we get the hexadecimal number (27)16.

All remainders were below 10 in this example, so we do not use any letters. Here is another, more complex example. Let's convert the decimal number 43868 to hexadecimal form:
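The repeated-division procedure can be written directly in Python. This sketch (the helper name decimal_to_base is an assumed example) reproduces both conversions, 39 to 27h and 43868 to 0AB5Ch:

```python
DIGITS = "0123456789ABCDEF"

def decimal_to_base(value, base):
    """Repeatedly divide by the base; the remainders, read in reverse, give the result."""
    if value == 0:
        return "0"
    remainders = []
    while value > 0:
        value, remainder = divmod(value, base)
        remainders.append(DIGITS[remainder])
    return "".join(reversed(remainders))

print(decimal_to_base(39, 16))     # 27
print(decimal_to_base(43868, 16))  # AB5C
print(decimal_to_base(43868, 2))   # 1010101101011100
```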


Decimal (base 10)   Binary (base 2)   Octal (base 8)
0                   000               0
1                   001               1
2                   010               2
3                   011               3
4                   100               4
5                   101               5
6                   110               6
7                   111               7

Decimal (base 10)   Binary (base 2)   Hexadecimal (base 16)
0                   0000              0
1                   0001              1
2                   0010              2
3                   0011              3
4                   0100              4
5                   0101              5
6                   0110              6
7                   0111              7
8                   1000              8
9                   1001              9
10                  1010              A
11                  1011              B
12                  1100              C
13                  1101              D
14                  1110              E
15                  1111              F


The result is 0AB5Ch; we use the table above to convert remainders over 9 to the corresponding letters. Using the same principle we can convert to binary form (using 2 as the divisor) or to octal form (using 8 as the divisor). Alternatively, we can convert to a hexadecimal number first and then convert it to a binary number using the table above:

As you see, we get this binary number: 1010101101011100b or (1010101101011100)2

Converting from Any Other System to Decimal
There is a simple formula for converting any number into decimal: multiply each digit by the base raised to the power of the digit's position (counting from zero at the rightmost digit), and add up the results:

value = sum of (digit × base^position)

Let's convert the binary number (10100101)2 into decimal: 1×128 + 0×64 + 1×32 + 0×16 + 0×8 + 1×4 + 0×2 + 1×1 = 165.

This formula can be used for hexadecimal as well as for octal numbers.

Let's take another example by converting (1234)16 into decimal using the same principle: 1×4096 + 2×256 + 3×16 + 4×1 = 4660.
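A small Python sketch of the same formula, applied to both examples (the helper name to_decimal is an assumed example):

```python
DIGITS = "0123456789ABCDEF"

def to_decimal(number, base):
    """Sum digit * base ** position over all positions (rightmost position is 0)."""
    value = 0
    for position, symbol in enumerate(reversed(number.upper())):
        value += DIGITS.index(symbol) * base ** position
    return value

print(to_decimal("10100101", 2))  # 165
print(to_decimal("1234", 16))     # 4660
```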

Converting Hexadecimal into Octal
There is no direct way to convert octal into hexadecimal or hexadecimal into octal; they are converted by using another number system as an intermediary. We use the binary number system: first convert the number into binary, and then the binary value can be converted into hexadecimal or octal. Let's understand this with an example, worked step by step below for (92AB)16.

The same procedure applies for converting octal into hexadecimal, except that after converting the octal number into binary, the binary value is distributed into groups of four digits.


First we get the binary value:

Hexadecimal value:      9    2    A    B
Binary value:        1001 0010 1010 1011  =  (1001001010101011)2

Now this binary value is distributed into groups of 3 digits. We make the groups starting from the right side; if the last (leftmost) group has fewer than three digits, the remaining places are filled by adding 0s on the left side:

(1 001 001 010 101 011)2  →  001 001 001 010 101 011

Now each group is replaced by its equivalent octal digit:

001 001 001 010 101 011
 1   1   1   2   5   3

(111253)8
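The whole hexadecimal-to-octal route (hex to binary, regroup, then octal) can be sketched in a few lines of Python; it reproduces the (92AB)16 = (111253)8 example above (the helper name hex_to_octal is an assumed example):

```python
def hex_to_octal(hex_text):
    value = int(hex_text, 16)                       # hex -> integer
    bits = bin(value)[2:]                           # integer -> binary text (without the '0b' prefix)
    bits = bits.zfill((len(bits) + 2) // 3 * 3)     # pad on the left to a multiple of 3 bits
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

print(hex_to_octal("92AB"))  # 111253
```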


System Software
System software refers to the files and programs that make up your computer's operating system. System files include libraries of functions, system services, drivers for printers and other hardware, system preferences, and other configuration files. The programs that are part of the system software include assemblers, compilers, file management tools, system utilities, and debuggers.

System software is computer software designed to operate the computer hardware and to provide and maintain a platform for running application software.

The most important types of system software are:

The operating system (prominent examples being Microsoft Windows, Mac OS X and Linux), which allows the parts of a computer to work together by performing tasks like transferring data between memory and disks or rendering output onto a display device. It also provides a platform to run high-level system software and application software.

A network operating system is an operating system that is specially designed to run on a network server. A network operating system has the ability to manage a network because it has built-in network management tools. These operating systems manage the sending and receiving of information and have full authority to restrict any user from performing any operation.

Utility software, which helps to analyze, configure, optimize and maintain the computer. In some publications, the term system software is also used to designate software development tools (like a compiler, linker or debugger).

System software is usually not what a user would buy a computer for - instead, it can be seen as the basics of a computer which come built-in or pre-installed. In contrast to system software, software that allows users to do things like create text documents, play games, listen to music, or surf the web is called application software.

Operating System

An operating system (OS) is an interface between hardware and user which is responsible for the management and coordination of activities and the sharing of the resources of a computer, that acts as a host for computing applications run on the machine. As a host, one of the purposes of an operating system is to handle the resource allocation and access protection of the hardware. This relieves the application programmers from having to manage these details.

Users may also interact with the operating system through some kind of software user interface, for example by typing commands using a command line interface (CLI) or by using a graphical user interface. For hand-held and desktop computers, the user interface is generally considered part of the operating system. On large multi-user systems like Unix and Unix-like systems, the user interface is generally implemented as an application program that runs outside the operating system. An operating system includes many features for managing the resources of the computer system and interacting with the user. Some of these are:

Features of Operating System

1. Memory Management: - Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.

2. User Interface: - The user interface (UI) is everything designed into an information device with which a human being may interact -- including the display screen, keyboard, mouse, light pen, the appearance of a desktop, etc. In early computers, there was very little user interface except for a few buttons at an operator's console. Later, a user was given the ability to interact with a computer online, and the user interface was a nearly blank display screen with a command line, a keyboard, and a set of commands and computer responses that were exchanged. And, finally, the graphical user interface (GUI) arrived, popularized by Apple's Macintosh and later by Microsoft in its Windows operating systems.

a. Command Line Interface: - The command-line interface (CLI) is a mechanism for interacting with a computer operating system or software by typing commands to perform specific tasks. This method of instructing a computer to perform a given task is referred to as "entering" a command: the system waits for the user to conclude the submitting of the text command by pressing the "Enter" key. The CLI is indicated by the presence of the > prompt, which is preceded by a string that defaults to the current drive and folder. For example:

C:\Program files>

b. Graphical User Interface: - A graphical user interface or GUI (sometimes pronounced "gooey") is a type of user interface that allows people to interact with programs in more ways than typing, on devices such as computers; hand-held devices such as MP3 players, portable media players or gaming devices; and household appliances and office equipment, using images rather than text commands. A GUI offers graphical icons and visual indicators, as opposed to text-based interfaces, typed command labels or text navigation, to fully represent the information and actions available to a user. The actions are usually performed through direct manipulation of the graphical elements.

3. Multiprocessing: - Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them. In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating-system software design considerations determines the symmetry (or lack thereof) in a given system. For example, hardware or software considerations may require that only one CPU respond to all hardware interrupts, whereas all other work in the system may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to only one processor (either a specific processor, or only one processor at a time), whereas user-mode code may be executed in any combination of processors.

4. Multitasking: - Multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves the problem by scheduling1 which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context switch2. Even on computers with more than one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than there are CPUs.

5. Multiprogramming: -Multiprogramming is a rudimentary form of parallel processing in which several programs are run at the same time on a uni-processor. Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time.

Operating system types

1. A single-task operating system allows only one program to run at a time. This type was later improved into multi-tasking operating systems, since it was not practical to have to close one application in order to open another (for example, closing a Word document to open PowerPoint), especially if you need to copy some text from Word to PowerPoint.

2. Multi-tasking operating systems enable a single user to have two or more applications open at the same time. The operating system decides how many time slices each program is allocated. The active program gets the most; the rest is divided among programs that are doing tasks in the background, and the lowest priority is given to programs and applications that are left open but are not doing anything. Multi-tasking operating systems can be divided into three general types depending on the type of computer and the type of applications that will be run. These are real-time operating systems, single-user multi-tasking operating systems and multi-user operating systems.

a. Real time operating systems (RTOS) are mainly used to control machinery, scientific instruments, industrial systems, etc. Here the user does not have much control over the functions performed by the RTOS.

b. Single user, multi tasking operating systems are the systems that allow a single user to run different applications at the same time. Windows of Microsoft and Macintosh of Apple are the most commonly used single user, multi tasking operating systems. 

c. Multi user operating systems give access at the same time to the resources on a single computer to many users. Unix is one such operating system. 

The most commonly used operating systems that fall under the above categories are Windows 95, Windows 98, Windows Me, Windows NT, Windows 2000, Windows XP (which is an upgrade of all the earlier versions and comes in two editions, Home and Professional; the Professional edition is the same as the Home edition but has additional features, such as networking and security features), Windows Vista, Windows CE, Apple Macintosh and Unix, etc.

1 Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs. This assignment is carried out by software known as a scheduler or dispatcher.
2 A context switch is the computing process of storing and restoring the state (context) of a CPU so that multiple processes can share a single CPU resource. The context switch is an essential feature of a multitasking operating system.

Utility Programs
A utility program is a type of system software that is used to perform a specific task; it is normally used to solve common problems of software and hardware.

Some examples of utility programs are as follows:

File Viewer: A file viewer is used to view and manage files in computer systems. All operating systems provide file viewers. Windows Explorer is an example of a file viewer.

File compressor: A file compressor is used to shrink the size of files. A smaller file is easier to copy, and a large volume of data can be transferred more quickly by using a file compressor. WinZip is an example of a file compressor (a small sketch of the idea follows this list).

Diagnostic Utilities: Diagnostic utility is used to detect problems of hardware and software. All operating systems provide different diagnostic utilities to manage the computer system.

Disk Scanner: Disk scanner is used to detect physical and logical problems of disk. All operating systems provide disk scanners to manage computer disks.

Antivirus: A type of software that is used to detect and remove viruses is called antivirus software. Antivirus programs contain information about different known viruses. They can detect viruses and remove them.
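As a small illustration of what a file compressor does, the sketch below uses Python's standard zipfile module to pack a file into a zip archive and compare sizes; the file name report.txt is just an assumed example that must already exist:

```python
import os
import zipfile

# Compress 'report.txt' (an assumed example file) into 'report.zip' using DEFLATE compression
with zipfile.ZipFile("report.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
    archive.write("report.txt")

print("original:", os.path.getsize("report.txt"), "bytes")
print("zipped:  ", os.path.getsize("report.zip"), "bytes")
```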

What is a Virus?
A computer virus is a computer program that can copy itself and infect a computer. The term

"virus" is also commonly but erroneously used to refer to other types of malware, adware, and spyware programs that do not have the reproductive ability. A true virus can only spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive. Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.

The term "computer virus" is sometimes used as a catch-all phrase to include all types of  malware, adware, and spyware programs that do not have the reproductive ability. Malware includes computer viruses, worms, trojans, most rootkits, spyware, dishonest adware, crimeware, and other malicious and unwanted software, including true viruses. Viruses are sometimes confused with computer worms and Trojan horses, which are technically different. A worm can exploit security vulnerabilities to spread itself to other computers without needing to be transferred as part of a host, and a Trojan horse is a program that appears harmless but has a hidden agenda. Worms and Trojans, like viruses, may cause harm to either a computer system's hosted data, functional performance, or networking throughput, when they are executed. Some viruses and other malware have symptoms noticeable to the computer user, but many are surreptitious or go unnoticed.

Virus Types
Lewinsky Virus
Sucks all the memory out of your computer, then emails everyone about what it did.

Ronald Reagan Virus  

Saves your data, but forgets where it is stored.

Oprah Winfrey Virus  

Your 300 MB hard drive suddenly shrinks to 100 MB and then slowly expands to 200 MB.

Dr. Jack Kevorkian Virus  

Deletes all old files.

Ellen Degeneres Virus  

Disks can no longer be inserted.

Titanic Virus  

(A strain of the Lewinsky Virus) Your whole computer goes down (but I think "we go on").

Disney Virus  

Everything in your computer goes Goofy.

Prozac Virus  

Screws up your RAM, but your processor doesn’t care.

Joey Buttafuoco Virus  

Only attacks minor files.


Application Software
Application software is computer software designed to help the user perform a particular task. Such programs are also called software applications, applications or apps. Typical examples are word processors, spreadsheets, media players and database applications. Application software applies the capabilities of a computer directly to a dedicated task. Application software is able to manipulate text, numbers and graphics. It can be in the form of software focused on a certain single task like word processing, spreadsheets or the playing of audio and video files. Application software is of two basic types:

Languages: - Language means programming language, which is used to develop different kinds of software. Basically there are two types of languages, which are defined later.

a. Low Level Language

b. High Level Language

Packages: - A package is ready-made software that provides special tools to perform a specific task.

Different Types of Packages

1. Word Processing Software: This software is used to create and edit documents. The most popular examples of this type of software are MS-Word, WordPad, Notepad and other text editors.

2. Database Software: Database is a structured collection of data. A computer database relies on database software to organize the data and enable the database users to achieve database operations. Database software allows the users to store and retrieve data from databases. Examples are Oracle, MS-Access, etc.

3. Spreadsheet Software: Excel, Lotus 1-2-3 and Apple Numbers are some examples of spreadsheet software. Spreadsheet software allows users to perform calculations. They simulate paper worksheets by displaying multiple cells that make up a grid.

4. Multimedia Software: They allow the users to create and play audio and video media. They are capable of playing media files. Audio converters, players, burners, video encoders and decoders are some forms of multimedia software. Examples include Real Player and Media Player.

5. Presentation Software: The software that is used to display information in the form of a slide show is known as presentation software. This type of software includes three functions, namely, editing that allows insertion and formatting of text, methods to include graphics in the text and a functionality of executing the slide shows. Microsoft PowerPoint is the best example of presentation software.

6. Communication Software: Communication software is used to provide remote access to systems and exchange files and messages in text, audio and/or video formats between different computers or user IDs. This includes terminal emulators, file transfer programs, chat and instant messaging programs, as well as similar functionality integrated within MUDs.

7. Custom Software: Custom software (also known as bespoke software) is a type of software that is developed for a specific organization or function and differs from already available off-the-shelf software. It is generally not targeted at the mass market, but is usually created for particular companies, business entities, and organizations. Companies or governments also commission custom software for purposes such as budgeting or project management.

Shareware
Many newcomers to computing think that freeware and shareware are the same thing. Unlike freeware, shareware is not free. Instead, shareware authors let you try their products for free for a limited time, typically 30 to 90 days. If you decide to keep using the programs, you are obligated to pay for them. Usually, the cost of shareware programs is far less than the price of software bought from retail outlets.

Freeware
Freeware is usually fully functional software which is available for an unlimited period of time. This software is distributed without any monetary charge. It can also be in the form of proprietary software which is given away at zero cost.

Public Domain
The public domain is an intellectual property designation for the range of content that is not owned or controlled by anyone. These materials are "public property", and available for anyone to use freely for any purpose. The public domain can be defined in contrast to several forms of intellectual property. Furthermore, the laws of various countries define the scope of the public domain differently, making it necessary to specify which jurisdiction's public domain is being discussed.


Copyright
Copyright is the set of exclusive rights granted to the author or creator of an original work, including the right to copy, distribute and adapt the work. Copyright lasts for a certain time period, after which the work is said to enter the public domain. Copyright applies to a wide range of works that are substantive and fixed in a medium.

Programming Languages
A programming language is an artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine, to express algorithms precisely, or as a mode of human communication.

Many programming languages have some form of written specification of their syntax (form) and semantics (meaning). Some languages are defined by a specification document. For example, the C programming language is specified by an ISO Standard. Other languages, such as Perl, have a dominant implementation that is used as a reference.

Programming languages are divided into two basic types:

1. Low Level Programming Language: - A low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware." A low-level language (machine language in particular) does not need a compiler or interpreter to run; the processor for which the language was written is able to run the code directly.

2. High Level Programming Language: - A high-level programming language is a programming language with strong abstraction from the details of the computer. In comparison to low-level programming languages, it may use natural language elements, be easier to use, or be more portable across platforms. Such languages hide the details of CPU operations such as memory access models and management of scope. These languages need translators to be converted into machine language.
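To illustrate the difference in abstraction, the hypothetical Python fragment below expresses in one readable statement what a low-level program would have to spell out as explicit loads, adds, and stores; the low-level side is only sketched in comments:

```python
# High-level: the language hides memory access and loop bookkeeping
prices = [120, 75, 310]
total = sum(prices)   # one readable statement
print(total)          # 505

# A rough low-level equivalent (sketched as comments) would manage every detail itself:
#   load the address of the prices array into a register
#   clear an accumulator register
#   loop: load an element, add it to the accumulator, advance the pointer,
#         decrement the counter, branch back if the counter is not zero
#   store the accumulator into total
```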

Translator – the definition:
A translator is a computer program that converts instructions written in one computer language into another computer language. There are three types of translators:

1. Assembler: - An assembler is a translator that translates the source code of assembly language into object code.

2. Compiler: - A compiler is a computer program (or set of programs) that transforms source code written in a computer language (the source language) into another computer language (the target language, often having a binary form known as object code). The most common reason for wanting to transform source code is to create an executable program.

3. Interpreter: - A program that executes instructions written in a high-level language. There are two ways to run programs written in a high-level language. The most common is to compile the program; the other method is to pass the program through an interpreter. An interpreter translates high-level instructions into an intermediate form, which it then executes. In contrast, a compiler translates high-level instructions directly into machine language. Compiled programs generally run faster than interpreted programs. The advantage of an interpreter, however, is that it does not need to go through the compilation stage during which machine instructions are generated. This process can be time-consuming if the program is long. The interpreter, on the other hand, can immediately execute high-level programs. For this reason, interpreters are sometimes used during the development of a program, when a programmer wants to add small sections at a time and test them quickly.


Generation of programming languages

A first-generation programming language is a machine-level programming language. Originally, no translator was used to compile or assemble the first-generation language. The first-generation programming instructions were entered through the front panel switches of the computer system.

The main benefit of programming in a first-generation programming language is that the code a user writes can run very fast and efficiently, since it is directly executed by the CPU. However, machine language is a lot more difficult to learn than higher generational programming languages, and it is far more difficult to edit if errors occur.

A second-generation programming language is a generational way to categorize assembly languages. The term was coined to provide a distinction from higher level third-generation programming languages (3GL) such as COBOL and earlier machine code languages. Second-generation programming languages have the following properties:

The code can be read and written by a programmer. To run on a computer it must be converted into a machine readable form, a process called assembly.

The language is specific to a particular processor family and environment.

A third-generation programming language (3GL) is a refinement of a second-generation programming language. The second generation of programming languages brought logical structure to software. The third generation brought refinements to make the languages more programmer-friendly. This includes features like good support for aggregate data types, and expressing concepts in a way that favors the programmer, not the computer (e.g. no longer needing to state the length of multi-character (string) literals in Fortran). A third generation language improves over a second generation language by having the computer take care of non-essential details, not the programmer. High level language is a synonym for third-generation programming language.

First introduced in the late 1950s, Fortran, ALGOL and COBOL are early examples of this sort of language. Most "modern" languages (BASIC, C, C++, C#, Pascal, and Java) are also third-generation languages.

A fourth-generation programming language (1970s-1990) (abbreviated 4GL) is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software. In the evolution of computing, the 4GL followed the 3GL in an upward trend toward higher abstraction and statement power. The 4GL was followed by efforts to define and use a 5GL.

The natural-language, block-structured mode of the third-generation programming languages improved the process of software development. However, 3GL development methods can be slow and error-prone. It became clear that some applications could be developed more rapidly by adding a higher-level programming language and methodology which would generate the equivalent of very complicated 3GL instructions with fewer errors. In some senses, software engineering arose to handle 3GL development. 4GL and 5GL projects are more oriented toward problem solving and systems engineering.

A fifth-generation programming language (abbreviated 5GL) is a programming language based around solving problems using constraints given to the program, rather than using an algorithm written by a programmer. Most constraint-based and logic programming languages and some declarative languages are fifth-generation languages.

While fourth-generation programming languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer. This way, the programmer only needs to worry about what problems need to be solved and what conditions need to be met, without worrying about how to implement a routine or algorithm to solve them. Fifth-generation languages are used mainly in artificial intelligence research. Prolog, OPS5, and Mercury are examples of fifth-generation languages.


Information System
In a general sense, the term Information System (IS) refers to a system of people, data records and activities that process the data and information in an organization, and it includes the organization's manual and automated processes.

OR

Data system: system consisting of the network of all communication channels used within an organization

OR A human and technical infrastructure for the storage, processing, transmission, input and output of information

OR

A system that collects and maintains information and provides it to people.

Types of information systems in current research
The management of information is facilitated by the use of information technology and information sciences. The popular information management systems can be listed as follows:

1. Document management system (DMS)

The DMS is focused primarily on the storage and retrieval of self-contained electronic data resources in the document form. Generally, The DMS is designed to help the organizations to manage the creation and flow of documents through the provision of a centralized repository. The workflow of the DMS encapsulates business rules and metadata.

2. Content management system (CMS)

The CMS assists in the creation, distribution, publishing, and management of enterprise information. These systems are generally applied to online content which is dynamically managed as a website on the internet or an intranet. The CMS is also called a 'web content management' (WCM) system.

3. Library management system (LMS)

Library management systems facilitate the library technical functions and services that include tracking of the library assets, managing CDs and books inventory and lending, supporting the daily administrative activities of the library and the record keeping.

4. Records management system (RMS)

The RMS are the recordkeeping systems which capture, maintain and provide access to the records including paper as well as electronic documents, efficiently and timely.

5. Digital imaging system (DIS)

The DIS assists in automating the creation of electronic versions of paper documents, such as PDFs or TIFFs. The electronic documents so created are used as input to the records management systems.

6. Learning management system (LMS)

Learning management systems are generally used to automate the e-learning process which includes the administrative process like registering students, managing training resources, creating courseware, recording results etc.

7. Geographic information system (GIS)

A GIS is a special-purpose, computer-based system that facilitates the capture, storage, retrieval, display and analysis of spatial data.

Old Version of Information Systems

Information systems differ according to business needs; they also differ depending on the organizational level they serve. The three major types of information systems are:

1. Transaction processing systems

2. Management information systems

3. Decision support systems

Figure 1.2 shows the relation of information systems to the levels of an organization. Information needs differ at different organizational levels; accordingly, information can be categorized as strategic information, managerial information and operational information.


Strategic information is the information needed by top management for decision making. For example, trends in the revenues earned by the organization are required by top management for setting its policies. This information is not required at lower levels of the organization. The information systems that provide this kind of information are known as Decision Support Systems.

Figure 1.2 - Relation of information systems to levels of organization

The second category of information, required by middle management, is known as managerial information. The information required at this level is used for making short-term decisions and plans for the organization. Information such as a sales analysis for the past quarter or yearly production details falls under this category. A management information system (MIS) caters to such information needs of the organization. Because of their ability to fulfil the managerial information needs of the organization, management information systems have become a necessity for all big organizations, and owing to their size, most big organizations have separate MIS departments to look after related issues and the proper functioning of the system.

The third category of information relates to the daily or short-term information needs of the organization, such as attendance records of employees. This kind of information is required at the operational level for carrying out day-to-day operational activities. Because it provides information for processing the transactions of the organization, such an information system is known as a Transaction Processing System or Data Processing System. Some examples of information provided by such systems are processing of orders, posting of entries in a bank, evaluating overdue purchase orders, etc.

Transaction Processing Systems

A TPS processes the business transactions of the organization. A transaction can be any activity of the organization, and transactions differ from organization to organization. For example, in a railway reservation system, booking and cancelling are transactions, as is any query made to the system. However, some transactions are common to almost all organizations, such as adding a new employee, maintaining leave status and maintaining employees' accounts.

A TPS provides high-speed, accurate processing and record keeping for the organization's basic operational processes, which include calculation, storage and retrieval. Transaction processing systems provide speed and accuracy, and can be programmed to follow the routine functions of the organization.
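
As a rough illustration of what a TPS does at its core, the short Python sketch below records hypothetical booking and cancellation transactions and keeps them retrievable. The names, fields and seat counts are invented for the example and do not describe any real reservation system.

    # Minimal, hypothetical sketch of transaction processing: every business
    # event (a booking or a cancellation) is validated, recorded and retrievable.
    from datetime import datetime

    transactions = []          # the transaction log (storage)
    seats_available = 100      # simple operational state

    def process_transaction(kind, passenger, seats):
        """Record one booking or cancellation transaction and update state."""
        global seats_available
        if kind == "BOOK" and seats > seats_available:
            raise ValueError("not enough seats")
        seats_available += seats if kind == "CANCEL" else -seats
        record = {"time": datetime.now(), "kind": kind,
                  "passenger": passenger, "seats": seats}
        transactions.append(record)     # record keeping
        return record

    process_transaction("BOOK", "Nasir", 2)
    process_transaction("CANCEL", "Nasir", 1)
    print(len(transactions), "transactions,", seats_available, "seats left")
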

Management Information Systems

These systems assist middle management in problem solving and decision making. They use the results of transaction processing together with other information. An MIS is a set of information-processing functions and should handle queries as quickly as they arrive. An important element of an MIS is its database: a non-redundant collection of interrelated data items that can be processed by application programs and made available to many users.

Decision Support Systems

These systems assist higher management in making long-term decisions. These types of systems handle unstructured or semi-structured decisions. A decision is considered unstructured if there are no clear procedures for making it and if not all the factors to be considered can be readily identified in advance. Such decisions are not of a recurring nature; some recur infrequently or occur only once. A decision support system must be very flexible: the user should be able to produce customized reports by supplying particular data and a format specific to the situation.

Summary of Information Systems

Each category of information system is listed below with its characteristics.

Transaction Processing System: Substitutes computer-based processing for manual procedures; deals with well-structured processes; includes record-keeping applications.

Management Information System: Provides input to be used in the managerial decision process; deals with supporting well-structured decision situations; typical information requirements can be anticipated.


Decision Support System: Provides information to managers who must make judgments about particular situations; supports decision-makers in situations that are not well structured.

Organization of Data

Raw data collected during scientific investigations needs to be transformed into a format that allows interpretation and analysis of the relationships between variables. Data can be presented in a variety of formats such as tables, graphs, maps, diagrams, illustrations, flow charts, etc.

Data Storage Hierarchy

Hierarchy means "chain of steps", and the data storage hierarchy means "the steps involved in the storage of data".

It can be defined as the process of storing data in a database; this hierarchy is based on the following steps.

1. Character: A single symbol from an alphabet or number system, or a special symbol such as @, $ or %. E.g. A, B, C, 1, 2, 3, @, #, %.

2. Field: A collection of characters carrying a piece of information about an entity, such as Name, Age or Address. E.g. the characters N, A, S, I and R together form the field value "NASIR"; when individual characters are collected, they form a field.

3. Record: A collection of inter-related fields about a particular topic or entity, for example:

Like: Name Age Qualification City

Nasir 20 B.Com, DIT Shikarpur

4. Table: A collection of inter-related records about a particular topic or event. A table is often described as a collection of rows and columns, where each column is a field and each row is a record. The table below contains the records of employees working in a company:

Name Age Qualification City Designation

Nasir 20 B.com, DIT Shikarpur Manager

Sajid 21 M.A, DIT Larkana Accountant

5. Database: A collection of information about a particular topic, stored in an efficient manner so that a user can access any piece of information easily. It can also be described as a collection of inter-related tables.
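
The hierarchy above can be pictured with ordinary data structures. The following Python sketch (illustrative only, reusing the sample names from the tables above) builds a field from characters, a record from fields, a table from records and a database from tables.

    # Sketch of the character -> field -> record -> table -> database hierarchy
    # using ordinary Python structures (illustrative only).
    characters = ["N", "A", "S", "I", "R"]
    field = "".join(characters)                       # field: "NASIR"

    record = {"Name": field, "Age": 20,               # record: related fields
              "Qualification": "B.Com, DIT", "City": "Shikarpur"}

    employees = [                                     # table: related records
        record,
        {"Name": "Sajid", "Age": 21,
         "Qualification": "M.A, DIT", "City": "Larkana"},
    ]

    database = {"employees": employees}               # database: related tables

    # Easy access to any piece of information:
    print(database["employees"][1]["City"])           # Larkana
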


Overview of CAD/CAM

CAD/CAM is an acronym for computer-aided design/computer-aided manufacturing: computer systems used to design and manufacture products. The term CAD/CAM implies that an engineer can use the system both for designing a product and for controlling the manufacturing process. For example, once a design has been produced with the CAD component, the design itself can control the machines that construct the part.

CAD

Computer-aided technologies (sometimes abbreviated CAx) is a broad term describing the use of computer technology to aid in the design, analysis and manufacture of products.

Advanced CAx tools merge many different aspects of product lifecycle management (PLM), including design, analysis using finite element analysis (FEA), manufacturing, production planning, product testing using virtual lab models and visualization, product documentation, product support, etc. CAx encompasses a broad range of tools, both those commercially available and those proprietary to the engineering firm.

The term CAD/CAM (computer-aided design and computer-aided manufacturing) is also often used in the context of a software tool covering a number of engineering functions.

CAM

Computer-aided manufacturing (CAM) is the use of computer-based software tools that assist engineers and machinists in manufacturing or prototyping product components and tooling. Its primary purpose is to create a faster production process and to produce components and tooling with more precise dimensions and greater material consistency, in some cases using only the required amount of raw material (thus minimizing waste) while also reducing energy consumption. CAM is a programming tool that makes it possible to manufacture physical models designed with computer-aided design (CAD) programs; it creates real-life versions of components designed within a software package. CAM was first used in 1971 for car body design and tooling.

Virtual Reality

Virtual reality (VR) is a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Most current virtual-reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artifact (VA) either through standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm and an omnidirectional treadmill. The simulated environment can be similar to the real world, for example in simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual-reality experience, due largely to technical limitations on processing power, image resolution and communication bandwidth. However, those limitations are expected to be overcome as processor, imaging and data-communication technologies become more powerful and cost-effective over time.

Virtual reality is often used to describe a wide variety of applications commonly associated with immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, datagloves and miniaturization have helped popularize the notion. In the book The Metaphysics of Virtual Reality, Michael Heim identifies seven different concepts of virtual reality: simulation, interaction, artificiality, immersion, telepresence, full-body immersion and network communication. The definition still has a certain futuristic romanticism attached; people often identify VR with head-mounted displays and data suits.

Expert System

An expert system is software that attempts to provide an answer to a problem, or to clarify uncertainties, where normally one or more human experts would need to be consulted. Expert systems are most common in specific problem domains and are a traditional application and/or subfield of artificial intelligence. A wide variety of methods can be used to simulate the performance of the expert; common to most or all of them are 1) the creation of a knowledge base which uses some knowledge-representation formalism to capture the subject-matter expert's (SME) knowledge, and 2) a process of gathering that knowledge from the SME and codifying it according to the formalism, which is called knowledge engineering. Expert systems may or may not have learning components, but a third common element is that, once developed, the system is proven by being placed in the same real-world problem-solving situation as the human SME, typically as an aid to human workers or as a supplement to some information system.
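
To make the two common elements above concrete, here is a small Python sketch of a rule-based expert system: a tiny knowledge base of IF-THEN rules and a forward-chaining routine that applies them to known facts. The medical-sounding rules and facts are invented purely for illustration and are not taken from any real system.

    # Toy knowledge base: each rule is (set of required facts, conclusion).
    rules = [
        ({"has_fever", "has_cough"}, "possible_flu"),
        ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
    ]

    def infer(facts, rules):
        """Forward chaining: keep applying rules until no new conclusion appears."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"has_fever", "has_cough", "short_of_breath"}, rules))
    # Derives 'possible_flu' and then 'refer_to_doctor' from the given facts.
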

Robotics


Robotics is the engineering science and technology of robots: their design, manufacture, application and structural disposition. Robotics is related to electronics, mechanics and software, and involves the design, construction and use of machines (robots) to perform tasks traditionally done by human beings. Robots are widely used in industries such as automobile manufacturing to perform simple repetitive tasks, and in industries where work must be performed in environments hazardous to humans. Many aspects of robotics involve artificial intelligence; robots may be equipped with the equivalent of human senses such as vision, touch and the ability to sense temperature.
