Page 1: Computers in management

Q) What do you understand by SDLC? Explain the steps involved in a software development cycle. What are the various types of errors encountered while developing software?

Ans) A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.

SDLC is the process of developing information systems through investigation, analysis, design, implementation and maintenance. SDLC is also known as information systems development or application development. SDLC is a systems approach to problem solving and is made up of several phases, each comprising multiple steps:

The software concept - identifies and defines a need for the new system
Requirements analysis - analyzes the information needs of the end users
Architectural design - creates a blueprint for the design with the necessary specifications for the hardware, software, people and data resources
Coding and debugging - creates and programs the final system
System testing - evaluates the system's actual functionality in relation to expected or intended functionality

Steps in the software development cycle:

System Engineering and Modelling:

In this phase we identify the project's requirements and the main features proposed for the application. The development team visits the customer and studies their system, investigating the need for possible software automation. At the end of this investigation, the team writes a document that holds the specifications for the customer's system.

Software Requirement Analysis

In software requirements analysis, the requirements for the proposed system are analysed first. To understand the nature of the program to be built, the system engineer must understand the information domain for the software, as well as the required functions, performance and interfacing. From the available information the system engineer develops a list of the actors, use cases and system-level requirements for the project. With the help of key users, the list of use cases and requirements is reviewed, refined and updated in an iterative fashion until the user is satisfied that it represents the essence of the proposed system.
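The iterative review loop described above can be sketched in a few lines. Everything here (the actor names, the use-case wording, the review() helper) is a hypothetical illustration, not part of any standard:

```python
# Hypothetical sketch: the actor/use-case list produced by requirements
# analysis, refined iteratively until every entry is approved.

use_cases = [
    {"actor": "Customer", "use_case": "Place order", "approved": False},
    {"actor": "Clerk", "use_case": "Generate invoice", "approved": False},
]

def review(use_cases, feedback):
    """One review iteration: the key user approves or rewords each entry."""
    for entry in use_cases:
        decision = feedback.get(entry["use_case"])
        if decision == "approve":
            entry["approved"] = True
        elif decision:                      # amended wording from the user
            entry["use_case"] = decision

# Iterate until the user is satisfied (all entries approved).
review(use_cases, {"Place order": "approve", "Generate invoice": "Print invoice"})
review(use_cases, {"Print invoice": "approve"})
assert all(e["approved"] for e in use_cases)
```

The point of the sketch is that the requirements document is a living artifact: each review pass either confirms an entry or replaces its wording, and the loop ends only when the key user signs off on every item.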

Systems analysis and design

Design is the process of deciding exactly how the specifications are to be implemented. It defines specifically how the software is to be written, including an object model with properties and methods for each object, the client/server technology, the number of tiers needed for the package architecture, and a detailed database design. Analysis and design are very important in the whole development cycle; any glitch in the design can be very expensive to fix in a later stage of software development.
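A design-stage object model of the kind mentioned above can be sketched as a class with named properties and methods. The Account class and its rules are a made-up example, not taken from any real design document:

```python
# Hypothetical design-stage object model: one class, its properties,
# and its methods, written out before full coding begins.

class Account:
    """An object with two properties (owner, balance) and two methods."""

    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner          # property
        self.balance = balance      # property

    def deposit(self, amount: float) -> None:     # method
        self.balance += amount

    def withdraw(self, amount: float) -> None:    # method
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

a = Account("Alice")
a.deposit(100.0)
a.withdraw(30.0)
print(a.balance)  # 70.0
```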

Page 2: Computers in management

Code generation

The design must be translated into a machine-readable form; the code generation step performs this task. The development phase involves the actual coding of the entire application. If the design has been performed in a detailed manner, code generation can be accomplished without much complication. Programming tools such as compilers and interpreters for languages like C, C++ and Java are used for coding, and the right programming language is chosen with respect to the type of application.

Testing

After coding, program testing begins. Different methods are available to detect errors in the code, and some companies have developed their own testing tools.
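One common error-detection method is automated unit testing. The sketch below uses Python's standard unittest module; the wages() function and its expected values are hypothetical examples, not a real payroll rule:

```python
import unittest

# Function under test (hypothetical): gross pay with 1.5x overtime
# for hours worked beyond 40.
def wages(hours: float, rate: float) -> float:
    base = min(hours, 40) * rate
    overtime = max(hours - 40, 0) * rate * 1.5
    return base + overtime

class TestWages(unittest.TestCase):
    def test_regular_hours(self):
        self.assertEqual(wages(40, 10), 400)

    def test_overtime(self):
        self.assertEqual(wages(45, 10), 475)  # 400 base + 5 hours at 15

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Each test states an expected result for a known input; a failing assertion pinpoints the error before the software ever reaches the customer.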

Deployment and Maintenance

Deployment and maintenance is a staged roll-out of the new application; it involves installation and initial training, and may involve hardware and network upgrades. Software will almost certainly undergo change once it is delivered to the customer, and there are many reasons for such change. Changes could happen because of unexpected input values into the system, and changes in the surrounding system can directly affect the software's operation. The software should therefore be developed to accommodate changes that may occur during the post-implementation period.

Errors encountered while developing software:

Syntax errors - violations of the grammar of the programming language, detected when the program is compiled or interpreted.
Runtime errors - errors such as division by zero or invalid memory access that occur while the program is executing.
Logic errors - the program runs to completion but produces incorrect results because the algorithm itself is wrong; these are the hardest to find and are typically uncovered through testing.
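The main categories of error met during development (syntax, runtime, and logic errors) can each be demonstrated in a few lines. The buggy average() function below is a deliberately wrong, hypothetical illustration:

```python
# 1. Syntax error: caught before the program runs at all.
try:
    compile("if True print('hi')", "<demo>", "exec")
except SyntaxError as e:
    print("syntax error:", e.msg)

# 2. Runtime error: syntactically valid code that fails while executing.
try:
    1 / 0
except ZeroDivisionError:
    print("runtime error: division by zero")

# 3. Logic error: runs without crashing but gives the wrong answer.
def average(values):
    return sum(values) / 2      # bug: should divide by len(values)

result = average([2, 4, 6])     # returns 6.0, but the true average is 4.0
print("logic error result:", result)
```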


Q) What do you understand by I-P-O cycle in data processing? Discuss the relevance of an I-P-O cycle for:

a) Batch processing system
b) On-line processing system


Ans) The Input-Process-Output Model also known as the IPO+S Model is a functional model and conceptual schema of a general system. An IPO chart identifies a program’s inputs, its outputs, and the processing steps required to transform the inputs into the outputs.

The IPOS Cycle is how a computer intakes data, processes the data, outputs information, and then saves the information.

1. Input - the computer receives data from an input device.

2. Processing - the computer's central processing unit (CPU) processes the data into information.

3. Output - meaningful information is displayed on a monitor or printed out.

4. Storage - the results are saved to the computer's hard drive or another type of secondary storage.
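The four stages above can be walked through in a tiny runnable sketch. The data and the temporary file name are hypothetical; the point is only the sequence input → process → output → storage:

```python
import json
import os
import tempfile

raw = "3 1 2"                                    # Input: data arrives from an input device
numbers = sorted(int(x) for x in raw.split())    # Processing: the CPU turns data into information
print("sorted:", numbers)                        # Output: meaningful information is displayed

# Storage: save the result to secondary storage for later reuse.
path = os.path.join(tempfile.gettempdir(), "ipos_demo.json")
with open(path, "w") as f:
    json.dump(numbers, f)

with open(path) as f:                            # reload to confirm the saved result
    assert json.load(f) == [1, 2, 3]
```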

Sub-components:

Sub-components of a system can also have their own sets of inputs and outputs that may differ from those of the larger system. Typically, the outputs of a sub-component are either inputs for another sub-component or become part of the ultimate output of the larger system. Whether a system is being considered at the macro or micro level determines what a specific item in the system is considered to be, and this variable level of detail is referred to as scope. Explicit discussions of scope are more common in technological contexts, where sub-units are considered more discretely than in the natural sciences; in ecosystems, for example, sub-units are impacts rather than objects, namespaces, methods, and scales. Exceptions to this convention arise when nominal data points must be encoded for a scientific model simulation.

Batch Processing System:

A batch processing system is one where programs and data are collected together in a batch before processing starts.

Each piece of work for a batch processing system is called a job. A job usually consists of a program and the data to be run.

Jobs are stored in job queues until the computer is ready to process them.

There is no interaction between the user and the computer while the program is being run. Computers which do batch processing often operate at night.


Example: Payroll - when a company calculates the wages for its workforce and prints pay slips.

Batch processing is execution of a series of programs ("jobs") on a computer without manual intervention.

Batch jobs are set up so they can be run to completion without manual intervention, so all input data is preselected through scripts or command-line parameters. This is in contrast to "online" or interactive programs, which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into batches of files and are processed in batches by the program.
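That files-in, files-out pattern can be sketched directly. The job files, their contents, and the "pay slip" transformation below are all hypothetical; note there is no prompt anywhere, since every input is preselected:

```python
import glob
import os
import tempfile

# Set up a batch of hypothetical input files (in a real system these
# would already exist, collected ahead of the run).
indir = tempfile.mkdtemp()
for i, text in enumerate(["alice 100", "bob 200"]):
    with open(os.path.join(indir, f"job{i}.txt"), "w") as f:
        f.write(text)

# The batch job: read every input file, process it, write an output file.
for path in sorted(glob.glob(os.path.join(indir, "*.txt"))):
    with open(path) as f:
        name, amount = f.read().split()
    with open(path + ".out", "w") as f:
        f.write(f"{name}: paid {int(amount)}\n")

outputs = sorted(glob.glob(os.path.join(indir, "*.out")))
print([os.path.basename(p) for p in outputs])
```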

Batch processing has these benefits:

It allows sharing of computer resources among many users and programs,

It shifts the time of job processing to when the computing resources are less busy,

It avoids idling the computing resources with minute-by-minute manual intervention and supervision,

By keeping high overall rate of utilization, it better amortizes the cost of a computer, especially an expensive one.

Common batch processing usage

Data processing

A typical batch processing procedure is End of day-reporting (EOD), especially on mainframes. Historically systems were designed to have a batch window where online subsystems were turned off and system capacity was used to run jobs common to all data (accounts, users or customers) on a system. In a bank, for example, EOD jobs include interest calculation, generation of reports and data sets to other systems, print (statements) and payment processing.

Printing

A popular computerized batch processing procedure is printing. This normally involves the operator selecting the documents to be printed and indicating to the batch printing software when and where they should be output, along with the priority of each print job. The jobs are then sent to the print queue, from which the printing daemon sends them to the printer.
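The queue-plus-daemon arrangement can be sketched with a priority queue. The document names and priority numbers are hypothetical; a lower number means higher priority:

```python
import heapq

# Jobs are submitted to the print queue with a priority (1 = most urgent).
queue = []
for priority, doc in [(2, "report.pdf"), (1, "invoice.pdf"), (3, "photo.png")]:
    heapq.heappush(queue, (priority, doc))

# The printing daemon's loop: take the highest-priority job first and
# "send" it to the printer.
printed = []
while queue:
    _, doc = heapq.heappop(queue)
    printed.append(doc)

print(printed)  # invoice.pdf first: it carried the top priority
```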

Databases

Batch processing is also used for efficient bulk database updates and automated transaction processing, as contrasted to interactive online transaction processing (OLTP) applications.


Images

Batch processing is often used to perform various operations with digital images. There exist computer programs that let one resize, convert, watermark, or otherwise edit image files.

Converting

Batch processing is also used for converting a number of computer files from one format to another. This makes files portable and versatile, especially for proprietary and legacy formats whose viewers are not easy to come by.

Online Processing System:

Online transaction processing, or OLTP, refers to a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. The term is somewhat ambiguous; some understand a "transaction" in the context of computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions.[1] OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automatic teller machine (ATM) for a bank is an example of a commercial transaction processing application.

The technology is used in a number of industries, including banking, airlines, mail order, supermarkets, and manufacturing. Applications include electronic banking, order processing, employee time clock systems, e-commerce, and e-trading.

Online transaction processing increasingly requires support for transactions that span a network and may include more than one company. For this reason, new OLTP software uses client/server processing and brokering software that allows transactions to run on different computer platforms in a network.

In large applications, efficient OLTP may depend on sophisticated transaction management software (such as CICS) and/or database optimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database.
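The transactional behaviour that such software manages can be illustrated with Python's built-in sqlite3 module. This is a minimal sketch, not CICS or any production transaction manager; the account names and balances are hypothetical. The key property shown is atomicity: a transfer either commits both updates or rolls both back.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 50)])
db.commit()

def transfer(db, src, dst, amount):
    """Move money between accounts; all-or-nothing."""
    try:
        db.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                   (amount, src))
        (balance,) = db.execute("SELECT balance FROM accounts WHERE name = ?",
                                (src,)).fetchone()
        if balance < 0:
            raise ValueError("overdraft")
        db.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                   (amount, dst))
        db.commit()        # both updates become visible together
    except Exception:
        db.rollback()      # neither update survives

transfer(db, "alice", "bob", 30)    # succeeds and commits
transfer(db, "alice", "bob", 500)   # would overdraw: rolled back entirely
print(db.execute("SELECT balance FROM accounts ORDER BY name").fetchall())
```

After both calls the balances reflect only the successful transfer; the failed one leaves no partial update behind, which is exactly what an OLTP database must guarantee under many concurrent updates.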

For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on a network. OLTP is often integrated into service-oriented architecture (SOA) and Web services. Because transactions must be handled as they occur, online processing is required.

Benefits

Online Transaction Processing has two key benefits: simplicity and efficiency. Reduced paper trails and the faster, more accurate forecasts for revenues and expenses are both examples of how OLTP makes things simpler for businesses.


Disadvantages

As with any information processing system, security and reliability are considerations. Online transaction systems are generally more susceptible to direct attack and abuse than their offline counterparts. When organizations choose to rely on OLTP, operations can be severely impacted if the transaction system or database is unavailable due to data corruption, systems failure, or network availability issues. Additionally, like many modern online information technology solutions, some systems require offline maintenance which further affects the cost-benefit analysis.

Short Notes:

1) Generation of Languages :

A computer language is the means by which instructions and data are transmitted to computers. Put another way, computer languages are the interface between a computer and a human being. There are various computer languages, each with differing complexities. For example, the information that is understandable to a computer is expressed as zeros and ones (i.e., binary language). However, binary language is incomprehensible to humans. Computer scientists find it far more efficient to communicate with computers in a higher-level language.

Computer Languages - First-generation Language

First-generation language is the lowest level computer language. Information is conveyed to the computer by the programmer as binary instructions. Binary instructions are the equivalent of the on/off signals used by computers to carry out operations. The language consists of zeros and ones. In the 1940s and 1950s, computers were programmed by scientists sitting before control panels equipped with …

Computer Languages - Second-generation Language

Assembly or assembler language was the second generation of computer language. By the late 1950s, this language had become popular. Assembly language consists of letters of the alphabet. This makes programming much easier than trying to program a series of zeros and ones. As an added programming assist, assembly language makes use of mnemonics, or memory aids, which are easier for the human progra…

Computer Languages - Third-generation Language

The introduction of the compiler in 1952 spurred the development of third-generation computer languages. These languages enable a programmer to create program files using commands that are similar to spoken English. Third-level computer languages have become the major means of communication between the digital computer and its user. By 1957, the International Business Machine Corporation (IBM) had…

Computer Languages - Fourth-generation Language

Fourth-generation languages attempt to make communicating with computers as much like the processes of thinking and talking to other people as possible. The problem is that the computer still only understands zeros and ones, so a compiler and interpreter must still convert the source code into the machine code that the computer can understand.
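The four generations can be contrasted on a single computation. Only the third-generation line actually runs here; the machine-code bytes, assembly mnemonics, and SQL shown in comments are illustrative examples, not exact encodings:

```python
# The same computation, adding 5 and 3, at each language generation.
#
# 1st generation (machine code, raw binary - illustrative):
#     10110000 00000101
# 2nd generation (assembly mnemonics - illustrative x86 style):
#     MOV AL, 5
#     ADD AL, 3
#
# 3rd generation (English-like, compiled or interpreted):
total = 5 + 3

# 4th generation (declarative/query-like - illustrative SQL):
#     SELECT 5 + 3;

print(total)  # 8
```

Each step up the generations trades closeness to the hardware for closeness to human thought, which is exactly why compilers and interpreters are needed to bridge back down to machine code.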

2) Computer networks :


A Computer Network or simply Network is a collection of computers and devices connected by communications channels that facilitates communications among users and allows users to share resources with other users. Networks may be classified according to a wide variety of characteristics.

Development of the network began in 1969, based on designs developed during the 1960s.

Networks serve several purposes: facilitating communications; sharing hardware; sharing files, data, and information; and sharing software.

The following list presents categories used for classifying networks.

Connection method

Computer networks can be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as optical fiber, Ethernet, Wireless LAN, HomePNA, Power line communication or G.hn.

1) Wired technologies

Twisted-pair wire
Coaxial cable

Fiber optic cable

2) Wireless technologies

Terrestrial Microwave

Communications Satellites

Cellular and PCS Systems

Wireless LANs - Bluetooth, a short-range wireless technology operating at approximately 1 Mbit/s with a range of 10 to 100 meters. Bluetooth is an open wireless protocol for data exchange over short distances.

The Wireless Web


Scale

Networks are often classified as local area network (LAN), wide area network (WAN), metropolitan area network (MAN), personal area network (PAN), virtual private network (VPN), campus area network (CAN), storage area network (SAN), and others, depending on their scale, scope and purpose. Usage, trust level, and access right often differ between these types of network. For example, LANs tend to be designed for internal use by an organization's internal systems and employees in individual physical locations (such as a building), while WANs may connect physically separate parts of an organization and may include connections to third parties.

Functional relationship (network architecture)

Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., active networking, client-server and peer-to-peer (workgroup) architecture.

Network topology

Computer networks may be classified according to the network topology upon which the network is based, such as bus network, star network, ring network, mesh network, star-bus network, tree or hierarchical topology network. Network topology is the arrangement by which devices in the network relate logically to one another, independent of physical arrangement. Even if networked computers are physically placed in a linear arrangement, if they are connected to a hub the network has a star topology rather than a bus topology. In this regard the visual and operational characteristics of a network are distinct. Networks may also be classified by the method used to convey the data; these include digital and analog networks.
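The logical-versus-physical distinction can be made concrete with adjacency lists. The four device names and the hub are hypothetical; the same four machines are wired as a bus (a chain) in one layout and as a star in the other:

```python
# Two logical topologies for the same four devices, as adjacency lists.

# Bus: each device connects only to its neighbours along the chain.
bus = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

# Star: every device has exactly one link, to the central hub.
star = {"hub": ["A", "B", "C", "D"],
        "A": ["hub"], "B": ["hub"], "C": ["hub"], "D": ["hub"]}

# In the star, each device's degree is 1 and the hub's degree is 4 -
# so even physically linear cabling through a hub is logically a star.
assert all(len(star[n]) == 1 for n in "ABCD")
assert len(star["hub"]) == 4
```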

Types of networks

Common types of computer networks may be identified by their scale.

Personal area network

A personal area network (PAN) is a computer network used for communication among computers and different information technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless connections between devices. The reach of a PAN typically extends to 10 meters.[2] A wired PAN is usually constructed with USB and FireWire, while a wireless PAN uses Bluetooth and infrared.[3]


Local area network

A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone lines and power lines)[4].

[Figure: a typical library network, in a branching tree topology with controlled access to resources. All interconnected devices must understand the network layer (layer 3) because they handle multiple subnets. The devices inside the library, which have only 10/100 Mbit/s Ethernet connections to user devices and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they have only Ethernet interfaces and must understand IP; it would be more correct to call them access routers, with the router at the top acting as a distribution router connecting to the Internet and to the academic networks' customer access routers.]

The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of need for leased telecommunication lines. Current Ethernet and other IEEE 802.3 LAN technologies operate at data transfer rates of up to 10 Gbit/s, and IEEE has projects investigating the standardization of 40 and 100 Gbit/s.[5]
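Because LAN rates are quoted in bits per second while file sizes are usually given in bytes, a quick conversion shows what such rates mean in practice. The 700 MB file and the 100 Mbit/s link are hypothetical round numbers, and the result is a theoretical best case that ignores protocol overhead:

```python
# Back-of-the-envelope transfer time on a LAN.
file_size_bytes = 700 * 10**6      # a 700 MB file
lan_rate_bits = 100 * 10**6        # a 100 Mbit/s link (ideal, no overhead)

# Convert bytes to bits (x8), then divide by the link rate.
seconds = file_size_bytes * 8 / lan_rate_bits
print(round(seconds))              # 56 seconds at the theoretical maximum
```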

Home area network

A home area network (HAN) or home network is a residential local area network which is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a CATV or Digital Subscriber Line (DSL) provider.

Campus area network

A campus area network (CAN) is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. It can be considered one form of a metropolitan area network, specific to an academic setting.

In the case of a university campus-based campus area network, the network is likely to link a variety of campus buildings, including academic departments, the university library and student residence halls. A campus area network is larger than a local area network but (in some cases) smaller than a wide area network (WAN).

The main aim of a campus area network is to facilitate students' access to the internet and to university resources. This is a network that connects two or more LANs but is limited to a specific and contiguous geographical area such as a college campus, industrial complex, office building, or a military base. A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to a smaller area than a typical MAN. This term is most often used to discuss the implementation of networks for a contiguous area. It should not be confused with a Controller Area Network. A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings.

Metropolitan area network

A metropolitan area network (MAN) is a network that connects two or more local area networks or campus area networks together but does not extend beyond the boundaries of the immediate town/city. Routers, switches and hubs are connected to create a metropolitan area network.

Wide area network

A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances, using a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

Global area network

A global area network (GAN) is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless local area networks (WLANs).[6]

Virtual private network

A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.

A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.


A VPN allows computer users to appear to be operating from an IP address location other than the one which connects the actual computer to the Internet.

Internet

The Internet is a global system of interconnected governmental, academic, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the U.S. Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW). The 'Internet' is most commonly spelled with a capital 'I' as a proper noun, for historical reasons and to distinguish it from other generic internetworks.

Participants in the Internet use a diverse array of methods of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP Addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

Intranets and extranets

Intranets and extranets are parts or extensions of a computer network, usually a local area network.

An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information.

An extranet is a network that is limited in scope to a single organization or entity and also has limited connections to the networks of one or more other usually, but not necessarily, trusted organizations or entities (e.g., a company's customers may be given access to some part of its intranet creating in this way an extranet, while at the same time the customers may not be considered 'trusted' from a security standpoint). Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.

3) Internet Based computing


Cloud computing is a way of computing, via the Internet, that broadly shares computer resources instead of using software or storage on a local computer.

Cloud computing is an outgrowth of the ease-of-access to remote computing sites provided by the Internet.[1]

In concept, it is a paradigm shift whereby details are abstracted from the users who no longer have need of, expertise in, or control over the technology infrastructure "in the cloud" that supports them.[2] Cloud computing describes a new supplement, consumption and delivery model for IT services based on the Internet, and it typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet.[3][4]

The term cloud is used as a metaphor for the Internet, based on the cloud drawing used to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.[5] Typical cloud computing providers deliver common business applications online which are accessed from a web browser, while the software and data are stored on servers.

A technical definition is "a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction."[6] This definition states that clouds have five essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.[6] Narrowly speaking, cloud computing is client-server computing that abstracts the details of the server away – one requests a service (resource), not a specific server (machine). However, cloud computing may be conflated with other terms, including client-server and utility computing, and the term has been criticized as vague and referring to "everything that we currently do".[7][8][9]

The majority of cloud computing infrastructure, as of 2009, consists of reliable services delivered through data centers and built on servers. Clouds often appear as single points of access for all consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers and typically offer SLAs.[10]

Comparisons

Cloud computing can be confused with:

1. Grid computing — "a form of distributed computing and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks"

2. Utility computing — the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity";[11]

3. Autonomic computing — "computer systems capable of self-management".[12]


4. Client-server – Client-server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients).[13] Cloud computing is a narrower form of client-server, where the details of the server are abstracted – for example, one may not connect to a specific server. However, due to the popularity of the cloud metaphor, "cloud computing" may be used to refer to any form of client-server computing.

4) Programming :

Computer programming (often shortened to programming or coding) is the process of writing, testing, debugging/troubleshooting, and maintaining the source code of computer programs. This source code is written in a programming language. The code may be a modification of an existing source or something completely new. The purpose of programming is to create a program that exhibits a certain desired behaviour (customization). The process of writing source code often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.

Overview

Within software engineering, programming (the implementation) is regarded as one phase in a software development process.

There is an ongoing debate on the extent to which the writing of programs is an art, a craft or an engineering discipline.[1] In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution (the criteria for "efficient" and "evolvable" vary considerably). The discipline differs from many other technical professions in that programmers, in general, do not need to be licensed or pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers." However, representing oneself as a "Professional Software Engineer" without a license from an accredited institution is illegal in many parts of the world.[citation needed]

Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir-Whorf hypothesis [2] in linguistics, that postulates that a particular language's nature influences the habitual thought of its speakers. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.

Said another way, programming is the craft of transforming requirements into something that a computer can execute.


5) E-Commerce : Electronic commerce, commonly known as e-commerce or eCommerce, consists of the buying and selling of products or services over electronic systems such as the Internet and other computer networks. The amount of trade conducted electronically has grown extraordinarily with widespread Internet usage. Commerce conducted in this way spurs and draws on innovations in electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. Modern electronic commerce typically uses the World Wide Web at least at some point in the transaction's lifecycle, although it can encompass a wider range of technologies such as e-mail as well.

A large percentage of electronic commerce is conducted entirely electronically for virtual items such as access to premium content on a website, but most electronic commerce involves the transportation of physical items in some way. Online retailers are sometimes known as e-tailers and online retail is sometimes known as e-tail. Almost all big retailers have electronic commerce presence on the World Wide Web.

Electronic commerce that is conducted between businesses is referred to as business-to-business or B2B. B2B can be open to all interested parties (e.g. commodity exchange) or limited to specific, pre-qualified participants (private electronic market). Electronic commerce that is conducted between businesses and consumers, on the other hand, is referred to as business-to-consumer or B2C. This is the type of electronic commerce conducted by companies such as Amazon.com.

Electronic commerce is generally considered to be the sales aspect of e-business. It also consists of the exchange of data to facilitate the financing and payment aspects of the business transactions.

Some common applications related to electronic commerce are the following:

Email
Enterprise content management
Instant messaging
Newsgroups
Online shopping and order tracking
Online banking
Online office suites
Domestic and international payment systems
Shopping cart software
Teleconferencing
Electronic tickets

6) Flow – Charting :


A flowchart is a common type of diagram that represents an algorithm or process, showing the steps as boxes of various kinds, and their order by connecting these with arrows. This diagrammatic representation can give a step-by-step solution to a given problem. Data is represented in these boxes, and arrows connecting them represent flow / direction of flow of data. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.

Examples

A simple flowchart for computing factorial N (N!)

A flowchart for computing factorial N (N!) where N! = (1 * 2 * 3 * ... * N), see image. This flowchart represents a "loop and a half" — a situation discussed in introductory programming textbooks that requires either a duplication of a component (to be both inside and outside the loop) or the component to be put inside a branch in the loop. (Note: Some textbooks recommend against this "loop and a half" since it is considered bad structure; instead a 'priming read' should be used and the loop should return back to the original question and not above it.[7])
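
The loop in this flowchart can be sketched in code. The following Python rendering is illustrative (the function name and the use of a while loop are my choices, not part of the original diagram); it mirrors the decision box by testing the counter at the top of every pass:

```python
def factorial(n):
    # Start with a running product of 1, as in the flowchart's
    # initialisation box, then multiply by each counter value.
    result = 1
    counter = 1
    while counter <= n:          # the decision diamond: counter <= N?
        result = result * counter
        counter = counter + 1    # step the counter and loop back up
    return result

print(factorial(5))  # 120
```

Placing the test at the top of the loop keeps the structure simple: no step has to appear both inside and outside the loop.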

Symbols

A typical flowchart from older Computer Science textbooks may have the following kinds of symbols:

Start and end symbols

Represented as circles, ovals or rounded rectangles, usually containing the word "Start" or "End", or another phrase signaling the start or end of a process, such as "submit enquiry" or "receive product".

Arrows

Showing what's called "flow of control" in computer science. An arrow coming from one symbol and ending at another symbol represents that control passes to the symbol the arrow points to.

Processing steps

Represented as rectangles. Examples: "Add 1 to X"; "replace identified part"; "save changes" or similar.

Input/Output

Represented as a parallelogram. Examples: Get X from the user; display X.

Conditional or decision


Represented as a diamond (rhombus). These typically contain a Yes/No question or True/False test. This symbol is unique in that it has two arrows coming out of it, usually from the bottom point and right point, one corresponding to Yes or True, and one corresponding to No or False. The arrows should always be labeled. More than two arrows can be used, but this is normally a clear indicator that a complex decision is being taken, in which case it may need to be broken-down further, or replaced with the "pre-defined process" symbol.

A number of other symbols have less universal currency, such as:

A Document, represented as a rectangle with a wavy base;

A Manual input, represented by a parallelogram with the top irregularly sloping up from left to right. An example would be to signify data-entry from a form;

A Manual operation, represented by a trapezoid with the longest parallel side at the top, to represent an operation or adjustment to a process that can only be made manually;

A Data File, represented by a cylinder.


Flow Chart for Computing Factorial of N


Short Notes:

- “Goal Seek” Analysis:

Goal Seek is used to get a particular result when you're not too sure of the starting value. For example, if the answer is 56, and the first number is 8, what is the second number? Is it 8 multiplied by 7, or 8 multiplied by 6? You can use Goal Seek to find out. We'll try that example to get you started, and then have a go at a more practical example.

Create the following Excel 2007 spreadsheet

In the spreadsheet above, we know that we want to multiply the number in B1 by the number in B2. The number in cell B2 is the one we're not too sure of. The answer is going in cell B3. Our answer is wrong at the moment, because we have a goal of 56. To use Goal Seek to get the answer, try the following:

From the Excel menu bar, click on Data. Locate the Data Tools panel and the What If Analysis item. From the What If Analysis menu, select Goal Seek. The following dialogue box appears:

The first thing Excel is looking for is "Set cell". This is not very well named. It means "Which cell contains the Formula that you want Excel to use". For us, this is cell B3. We have the following formula in B3:


= B1 * B2

So enter B3 into the "Set cell" box, if it's not already in there.

The "To value" box means "What answer are you looking for?" For us, this is 56. So just type 56 into the "To value" box.

The "By Changing Cell" is the part you're not sure of. Excel will be changing this part. For us, it was cell B2. We weren't sure which number, when multiplied by 8, gave the answer 56. So type B2 into the box.

Your Goal Seek dialogue box should look like ours below:

Click OK and Excel will tell you if it has found a solution:

Click OK again, because Excel has found the answer. Your new spreadsheet will look like this one:


As you can see, Excel has changed cell B2 and replaced the 6 with a 7 - the correct answer.
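
Under the hood, Goal Seek is a numeric search for an input that makes a formula produce the target value. The sketch below is a hypothetical Python analogue using bisection, not Excel's actual (undocumented) algorithm; the name goal_seek and the bounds lo and hi are my own:

```python
def goal_seek(formula, target, lo=-1e6, hi=1e6, tol=1e-9):
    # Repeatedly halve the [lo, hi] interval until formula(mid)
    # is within tol of the target. Assumes formula is monotonic
    # on the interval and that the target value lies inside it.
    mid = (lo + hi) / 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if abs(formula(mid) - target) < tol:
            break
        # Keep the half of the interval that still brackets the target
        if (formula(mid) < target) == (formula(hi) > target):
            lo = mid
        else:
            hi = mid
    return mid

# "Set cell" holds the formula B1 * B2 with B1 = 8; "To value" is 56;
# "By changing cell" is the second factor being searched for.
print(round(goal_seek(lambda b2: 8 * b2, 56)))  # 7
```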

Track Changes:

Track Changes is a way for Microsoft Word to keep track of the changes you make to a document. You can then choose to accept or reject those changes.

Let's say Bill creates a document and emails it to his colleague, Lee, for feedback. Lee can edit the document with Track Changes on. When Lee sends the document back to Bill, Bill can see what changes Lee has made.

Track Changes is also known as redline, or redlining. This is because some industries traditionally draw a vertical red line in the margin to show that some text has changed.

To use Track Changes, you need to know that there are three entirely separate things that might be going on at any one time:

First, at some time in the past (last week, yesterday, one millisecond ago), Word might have kept track of the changes you made. It did this because you turned on Track Changes. Word then remembered the changes you made to your document, and stored the changes in your document.

Second, if Word has stored information about changes you've made to your document, then you can choose to display those changes, or to hide them. Hiding them doesn't make them go away. It just hides them from view. (The only way to remove the tracked changes from your document is to accept or reject them.)

Third, at this very moment in time, Word may be tracking the changes you make to your document.

Just to make the point: Word may, or may not, be currently keeping track of the changes you make. At the same time, Word may, or may not, have stored changes you made to the document at some point in the past. And, at the same time, Word may, or may not, be displaying those tracked changes. Turning off (ie, hiding) the tracked changes doesn't remove them. It just hides them. To remove the tracked changes from the document, you must accept or reject them.

World Wide Web:

The World Wide Web, abbreviated as WWW and commonly known as The Web, is a system of interlinked hypertext documents contained on the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia and navigate between them by using hyperlinks. Using concepts from earlier hypertext systems, British engineer and computer scientist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web.[1] He was later joined by Belgian computer scientist Robert Cailliau while both were working at CERN in Geneva, Switzerland. In 1990, they proposed using "HyperText [...] to link and access information of various kinds as a web of nodes in which the user can browse at will",[2] and released that web in December.[3]

"The World-Wide Web (W3) was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project." [4] If two projects are created independently, the two bodies of information can merge into one cohesive piece of work without a central figure having to make the changes.

The terms Internet and World Wide Web are often used in every-day speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global system of interconnected computer networks. In contrast, the Web is one of the services that runs on the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. In short, the Web is an application running on the Internet.[17] Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser, or by following a hyperlink to that page or resource. The web browser then initiates a series of communication messages, behind the scenes, in order to fetch and display it.

First, the server-name portion of the URL is resolved into an IP address using the global, distributed Internet database known as the domain name system, or DNS. This IP address is necessary to contact the Web server. The browser then requests the resource by sending an HTTP request to the Web server at that particular address. In the case of a typical web page, the HTML text of the page is requested first and parsed immediately by the web browser, which then makes additional requests for images and any other files that form parts of the page. Statistics measuring a website's popularity are usually based either on the number of 'page views' or associated server 'hits' (file requests) that take place.
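
The sequence described above (split the URL, resolve the host via DNS, then issue the HTTP request) can be traced in Python. This sketch performs only the parsing step so it runs without a network; the function name fetch_steps is my own:

```python
from urllib.parse import urlparse

def fetch_steps(url):
    # Step 1: split the URL into its components.
    parts = urlparse(url)
    host = parts.hostname        # the server name to resolve via DNS
    port = parts.port or 80      # default HTTP port when none is given
    path = parts.path or "/"     # the resource to request from the server
    # Step 2 (not executed here): socket.gethostbyname(host) would
    # resolve the name to an IP address using DNS.
    # Step 3 (not executed here): an HTTP GET for `path` would then be
    # sent to that address on `port`, and the reply parsed for further
    # resources (images, stylesheets) to request.
    return host, port, path

print(fetch_steps("http://example.org/index.html"))
```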

While receiving these files from the web server, browsers may progressively render the page onto the screen as specified by its HTML, CSS, and other web languages. Any images and other resources are incorporated to produce the on-screen web page that the user sees. Most web pages will themselves contain hyperlinks to other related pages and perhaps to downloads, source documents, definitions and other web resources. Such a collection of useful, related resources, interconnected via hypertext links, is what was dubbed a "web" of information. Making it available on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.[2]

What does W3 define?

W3, or WWW, represents many concepts:

the idea of a boundless world of information interconnected by hypertext links for easy point-and-click access;

the Uniform Resource Identifier (URI) concept, an addressing system that the project implemented to make this world possible, despite many different protocols;

the Hypertext Transfer Protocol (HTTP), a network protocol used to transfer web pages;

the Hypertext Markup Language (HTML), a content formatting framework that every WWW client can understand and that is used for the formatting of text, menus and simple on-line help information across the net.

Embedded Systems:

An embedded system is a computer system designed to perform one or a few dedicated functions[1][2] often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. By contrast, a general-purpose computer, such as a personal computer, is designed to be flexible and to meet a wide range of end-user needs. Embedded systems control many devices in common use today.[3]

Embedded systems are controlled by one or more main processing cores, typically either a microcontroller or a digital signal processor (DSP).[4] The key characteristic, however, is being dedicated to a particular task, which may require very powerful processors. For example, air traffic control systems may usefully be viewed as embedded, even though they involve mainframe computers and dedicated regional and national networks between airports and radar sites. (Each radar probably includes one or more embedded systems of its own.)

Since the embedded system is dedicated to specific tasks, design engineers can optimize it, reducing the size and cost of the product and increasing its reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.

Physically, embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants. Complexity varies from low, with a single microcontroller chip, to very high with multiple units, peripherals and networks mounted inside a large chassis or enclosure.

In general, "embedded system" is not a strictly definable term, as most systems have some element of extensibility or programmability. For example, handheld computers share some elements with embedded systems such as the operating systems and microprocessors which power them, but they allow different applications to be loaded and peripherals to be connected. Moreover, even systems which don't expose programmability as a primary feature generally need to support software updates. On a continuum from "general purpose" to "embedded", large application systems will have subcomponents at most points even if the system as a whole is "designed to perform one or a few dedicated functions", and is thus appropriate to call "embedded".

Characteristics

Soekris net4801, an embedded system targeted at network applications.

1. Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.

2. Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music.[5] Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.

3. The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or Flash memory chips. They run with limited computer hardware resources: little memory, small or non-existent keyboard and/or screen.

URL:


In computing, a Uniform Resource Locator (URL) is a subset of the Uniform Resource Identifier (URI) that specifies where an identified resource is available and the mechanism for retrieving it. In popular usage and in many technical documents and verbal discussions it is often incorrectly used as a synonym for URI,[1] the best-known example of which is the 'address' of a web page on the World Wide Web.

Syntax

Every URL consists of some of the following: the scheme name (commonly called protocol), followed by a colon, then, depending on scheme, a hostname (alternatively, IP address), a port number, the path of the resource to be fetched or the program to be run, then, for programs such as Common Gateway Interface (CGI) scripts, a query string,[6][7] and with HTML documents, an anchor (optional) for where the page should start to be displayed.[8]

The combined syntax is:

resource_type://username:password@domain:port/path?query_string#anchor

The scheme name, or resource type, defines its namespace, purpose, and the syntax of the remaining part of the URL. Most Web-enabled programs will try to dereference a URL according to the semantics of its scheme and a context. For example, a Web browser will usually dereference the URL http://example.org:80 by performing an HTTP request to the host example.org, at the port number 80. Dereferencing the URI mailto:[email protected] will usually start an e-mail composer with the address [email protected] in the To field.

Absolute vs relative URLs

An absolute URL is one that completely specifies the desired resource starting from the root of the resource name space. It is unique, meaning that if two absolute URLs are identical, they point to the same resource.[9] An example is: http://en.wikipedia.org/wiki/File:Raster_to_Vector_Mechanical_Example.jpg

A relative URL points to the location of a resource relative to a base URL.[9][10] It may begin with two dots (../directory_path/file.txt) for the directory above the current one, with one dot (./directory_path/file.txt) for the current directory, or with no leading dots or slash (directory_path/file.txt), which also refers to the current directory. A leading slash with no dots (/directory_path/file.txt) refers to the root directory of the domain, which resolves to http://www.webreference.com/directory_path/file.txt.
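
Python's standard urljoin function implements this resolution of relative references against a base URL, which makes the four cases easy to compare (the base URL below is illustrative):

```python
from urllib.parse import urljoin

base = "http://www.webreference.com/dir/page.html"

print(urljoin(base, "file.txt"))     # no dots: current directory
print(urljoin(base, "./file.txt"))   # one dot: current directory, explicit
print(urljoin(base, "../file.txt"))  # two dots: the directory above
print(urljoin(base, "/file.txt"))    # leading slash: the root of the site
```

The first two both resolve to http://www.webreference.com/dir/file.txt; the third and fourth both resolve to http://www.webreference.com/file.txt, since /dir is only one level deep.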

URLs as locators

A URL is a URI that, "in addition to identifying a resource, provides a means of locating the resource by describing its primary access mechanism (e.g., its network location)."

Relational Databases:

A relational database matches data by using common characteristics found within the data set. The resulting groups of data are organized and are much easier for people to understand.

For example, a data set containing all the real-estate transactions in a town can be grouped by the year the transaction occurred; or it can be grouped by the sale price of the transaction; or it can be grouped by the buyer's last name; and so on.

Such a grouping uses the relational model (a technical term for this is schema). Hence, such a database is called a "relational database."

The software used to do this grouping is called a relational database management system. The term "relational database" often refers to this type of software.

Relational databases are currently the predominant choice in storing financial records, manufacturing and logistical information, personnel data and much more.

Relational term          SQL equivalent
-----------------------  ------------------------------
relation, base relvar    table
derived relvar           view, query result, result set
tuple                    row
attribute                column

Relations or Tables

A relation is defined as a set of tuples that have the same attributes. A tuple usually represents an object and information about that object. Objects are typically physical objects or concepts. A relation is usually described as a table, which is organized into rows and columns. All the data referenced by an attribute are in the same domain and conform to the same constraints.


The relational model specifies that the tuples of a relation have no specific order and that the tuples, in turn, impose no order on the attributes. Applications access data by specifying queries, which use operations such as select to identify tuples, project to identify attributes, and join to combine relations. Relations can be modified using the insert, delete, and update operators. New tuples can supply explicit values or be derived from a query. Similarly, queries identify tuples for updating or deleting. It is necessary for each tuple of a relation to be uniquely identifiable by some combination (one or more) of its attribute values. This combination is referred to as the primary key.
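
The select / project / join operations and the primary key can be demonstrated with SQLite, which ships with Python. The table and column names below are invented for illustration:

```python
import sqlite3

# An in-memory database: tables play the role of relations,
# rows of tuples, and columns of attributes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sale (id INTEGER PRIMARY KEY, buyer TEXT, year INTEGER)")
con.execute("CREATE TABLE buyer (name TEXT PRIMARY KEY, town TEXT)")
con.execute("INSERT INTO sale VALUES (1, 'Smith', 2008), (2, 'Jones', 2009)")
con.execute("INSERT INTO buyer VALUES ('Smith', 'Springfield'), ('Jones', 'Shelbyville')")

# WHERE selects tuples, the column list projects attributes,
# and JOIN combines the two relations on a common attribute.
rows = con.execute(
    "SELECT sale.id, buyer.town"
    " FROM sale JOIN buyer ON sale.buyer = buyer.name"
    " WHERE sale.year = 2009"
).fetchall()
print(rows)  # [(2, 'Shelbyville')]
```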

Mail Merge:

You use mail merge when you want to create a set of documents that are essentially the same but where each document contains unique elements. For example, in a letter that announces a new product, your company logo and the text about the product will appear in each letter, and the address and greeting line will be different in each letter.

Using mail merge, you can create:

A set of labels or envelopes: The return address is the same on all the labels or envelopes, but the destination address is unique on each one.

A set of form letters, e-mail messages, or faxes: The basic content is the same in all the letters, messages, or faxes, but each contains information that is specific to the individual recipient, such as name, address, or some other piece of personal data.

A set of numbered coupons: The coupons are identical except that each contains a unique number.

Creating each letter, message, fax, label, envelope, or coupon individually would take hours. That's where mail merge comes in. Using mail merge, all you have to do is create one document that contains the information that is the same in each version. Then you just add some placeholders for the information that is unique to each version. Word takes care of the rest.
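
The placeholder idea is the essence of mail merge, and it is easy to mimic in code. A minimal Python sketch (the template text and recipient list are invented for illustration):

```python
from string import Template

# One document containing the text common to every version,
# with $name and $address as the per-recipient placeholders.
letter = Template("Dear $name,\nYour order will ship to $address.")

recipients = [
    {"name": "Ann", "address": "12 Oak St"},
    {"name": "Bob", "address": "9 Elm Ave"},
]

# One merged letter per recipient, analogous to what Word produces.
merged = [letter.substitute(r) for r in recipients]
print(merged[0])
```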

Start the mail merge process

To start the mail merge process:

1. Start Word.

A blank document opens by default. Leave it open. If you close it, the next step won't work.

2. On the Tools menu, point to Letters and Mailings, and then click Mail Merge.

NOTE: In Word 2002, on the Tools menu, point to Letters and Mailings, and then click Mail Merge Wizard.

The Mail Merge task pane opens. By using hyperlinks in the task pane, you navigate through the mail-merge process.

Slide Transition:

What is a Slide Transition?


A slide transition is the visual motion when one slide changes to the next during a presentation. By default, one slide simply replaces the previous one on screen, much the same way that a slide show of photographs would change from one to the next. Most presentation software programs provide many different transition effects that you can use to liven up your slide show.

What are the Slide Transition Choices?

Transitions range from a simple Cover Down, where the next slide covers the current one from the top of the screen, to a Wheel Clockwise where the new slide spins in like spokes on a wheel to cover the previous one. You can also have slides dissolve into each other, push each other off the screen, or open up like horizontal or vertical blinds.

Common Mistakes When Using Slide Transitions

While all this choice may seem like a great thing, common mistakes made are to use too many transitions, or to use one that doesn’t fit well with the subject matter. In most cases, find one transition that doesn’t detract from the presentation and use it throughout the show.

Add a Different Slide Transition to Slides Needing Special Emphasis

If there is a slide that needs special emphasis, you might consider using a separate transition for it, but don’t choose a separate transition for each slide. Your slide show will look amateurish and your audience will quite likely be distracted from the presentation itself, as they wait and watch for the next transition.

Slide Transitions are Finishing Touches

Slide Transitions are one of the many finishing touches to a presentation. Wait until you have the slides edited and arranged in the preferred order before setting transitions.


7) Primary and Secondary Storage Devices :

Computer data storage, often called storage or memory, refers to computer components, devices, and recording media that retain digital data used for computing for some interval of time. Computer data storage provides one of the core functions of the modern computer, that of information retention.

Hierarchy of storage

Various forms of storage, divided according to their distance from the central processing unit. The fundamental components of a general-purpose computer are arithmetic and logic unit, control circuitry, storage space, and input/output devices. Technology and capacity as in common home computers around 2005.

Primary storage


Primary storage (or main memory or internal memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in uniform manner.

Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome. A revolution came with the invention of the transistor, which soon enabled then-unbelievable miniaturization of electronic memory via solid-state silicon chip technology.

This led to a modern random-access memory (RAM). It is small-sized, light, but quite expensive at the same time. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered).

As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM:

Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic and logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are technically among the fastest of all forms of computer data storage.


Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It's introduced solely to increase performance of the computer. Most actively used information in the main memory is just duplicated in the cache memory, which is faster, but of much lesser capacity. On the other hand it is much slower, but much larger than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower.

Main memory is directly or indirectly connected to the CPU via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU firstly sends a number through an address bus, a number called memory address, that indicates the desired location of data. Then it reads or writes the data itself using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks.

As the RAM types used for primary storage are volatile (cleared at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).

Many types of "ROM" are not literally read only, as updates are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, rather use large capacities of secondary storage, which is non-volatile as well, and not as costly.

Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.[1]

Secondary storage

[Figure: A hard disk drive with protective cover removed.]

Page 34: Computers in management

Secondary storage (or external memory) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data via an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down: it is non-volatile. Per unit, it is typically also an order of magnitude less expensive than primary storage. Consequently, modern computer systems typically have an order of magnitude more secondary storage than primary storage, and data is kept there for a longer time.

In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in billionths of a second, or nanoseconds. This illustrates the very significant access-time difference which distinguishes solid-state memory from rotating magnetic storage devices: hard disks are typically about a million times slower than memory. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. With disk drives, once the disk read/write head reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. As a result, in order to hide the initial seek time and rotational latency, data are transferred to and from disks in large contiguous blocks.

When data reside on disk, block access to hide latency offers a ray of hope in designing efficient external-memory algorithms. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory.[2]
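The effect of block access can be sketched with ordinary file I/O. This illustrative example (file name and sizes are made up) sums a file's bytes either one byte per read() call or one large block per call; the answer is identical, but the block version issues thousands of times fewer I/O requests, which is what hides seek time and rotational latency on a real disk.

```python
# Sketch: reading in large contiguous blocks vs. one byte at a time.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 64)       # 16 KiB of sample data

def sum_bytes(path, block_size):
    total = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)    # one I/O request per block
            if not block:
                break
            total += sum(block)
    return total

# Same result either way, but block_size=4096 issues ~4 read requests
# instead of ~16384 single-byte requests.
print(sum_bytes(path, 1) == sum_bytes(path, 4096))
```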

Some other examples of secondary storage technologies are: flash memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives.

The secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, providing also additional information (called metadata) describing the owner of a certain file, the access time, the access permissions, and other information.

Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As primary memory fills up, the system moves the least-used chunks (pages) to secondary storage devices (to a swap file or page file), retrieving them later when they are needed. The more of these retrievals from slower secondary storage are necessary, the more overall system performance is degraded.
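The paging policy just described can be sketched as follows. This is a simplified model under assumed parameters (three RAM frames, least-recently-used eviction); real kernels use more elaborate policies, but the shape is the same: a page fault pulls a page in, and when RAM is full the least-used page is pushed out to swap.

```python
# Minimal LRU paging sketch (illustrative): fixed RAM frames, evicted
# pages go to a "swap" dictionary standing in for the page file.
from collections import OrderedDict

class PagedMemory:
    def __init__(self, frames=3):
        self.frames = frames
        self.ram = OrderedDict()   # page -> contents, kept in recency order
        self.swap = {}             # evicted pages live here (the page file)
        self.page_faults = 0

    def touch(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)          # mark as recently used
            return
        self.page_faults += 1                   # not resident: page fault
        contents = self.swap.pop(page, f"data-{page}")
        if len(self.ram) >= self.frames:
            victim, data = self.ram.popitem(last=False)  # least recently used
            self.swap[victim] = data            # write it out to swap
        self.ram[page] = contents

mem = PagedMemory(frames=3)
for page in [1, 2, 3, 1, 4, 1, 2]:
    mem.touch(page)
print(mem.page_faults)   # 5 faults: pages 1,2,3,4 loaded, then 2 reloaded
```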

Difference:

1. Primary storage devices are temporary; secondary storage devices are permanent.
2. Primary storage devices are expensive; secondary storage devices are cheaper.
3. Primary storage devices are faster, as they are connected directly to the CPU; secondary storage devices are connected via cables and are slower, and therefore cheaper.
4. Primary storage devices have less storage capacity; secondary storage devices have high storage capacity.
5. Primary storage refers to RAM; secondary storage refers to devices such as FDD.
6. Primary storage is volatile memory and its capacity is limited; secondary storage is non-volatile memory and its capacity is practically unlimited.

Q) What do you understand by the term Operating System?

Ans) In computing, an operating system (OS) is an interface between the hardware and the user. It is responsible for the management and coordination of activities and for the sharing of the resources of the computer, and it acts as a host for the applications run on the machine. One of the purposes of an operating system is to handle resource allocation and access protection of the hardware, relieving application programmers from having to manage these details.

Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system through some kind of software user interface, either by typing commands at a command-line interface (CLI) or by using a graphical user interface (GUI). For hand-held and desktop computers, the user interface is generally considered part of the operating system.
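A concrete illustration of requesting a service through an API: Python's os module wraps the underlying file system calls, so each call below is an application passing parameters to the operating system and receiving a result back (the file path used is a temporary one created for the example).

```python
# Application -> OS service request via system-call wrappers.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # open: ask the OS for a file descriptor
written = os.write(fd, b"hello")              # write: pass data, receive bytes written
os.close(fd)                                  # close: release the resource

with open(path, "rb") as f:
    print(written, f.read())                  # 5 b'hello'
```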

Services provided by an operating system

b. Program execution. The operating system loads the contents (or sections) of a file into memory and begins its execution. A user-level program could not be trusted to properly allocate CPU time.

c. I/O operations. Disks, tapes, serial lines, and other devices must be communicated with at a very low level. The user need only specify the device and the operation to perform on it, while the system converts that request into device- or controller-specific commands. User-level programs cannot be trusted to access only devices they should have access to and to access them only when they are otherwise unused.

d. File-system manipulation. There are many details in file creation, deletion, allocation, and naming that users should not have to perform. Blocks of disk space are used by files and must be tracked. Deleting a file requires removing the name and file information and freeing the allocated blocks. Protections must also be checked to assure proper file access. User programs could neither ensure adherence to protection methods nor be trusted to allocate only free blocks and deallocate blocks on file deletion.

e. Communications. Message passing between systems requires messages to be turned into packets of information, sent to the network controller, transmitted across a communications medium, and reassembled by the destination system. Packet ordering and data correction must take place. Again, user programs might not coordinate access to the network device, or they might receive packets destined for other processes.

f. Error detection. Error detection occurs at both the hardware and software levels. At the hardware level, all data transfers must be inspected to ensure that data have not been corrupted in transit. All data on media must be checked to be sure they have not changed since they were written to the media. At the software level, media must be checked for data consistency; for instance, whether the number of allocated and unallocated blocks of storage matches the total number on the device. Such errors are frequently process-independent (for instance, the corruption of data on a disk), so there must be a global program (the operating system) that handles all types of errors. Also, by having errors processed by the operating system, processes need not contain code to catch and correct all the errors possible on a system.
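A small sketch of software-level error detection like that described above: store a checksum alongside the data when it is written, and verify it on read. This is illustrative only (a real OS or filesystem uses its own on-media formats); here CRC32 stands in for whatever integrity check the system uses.

```python
# Detecting that data changed since it was written, via a stored checksum.
import zlib

def write_block(data: bytes) -> bytes:
    # Prepend a CRC32 of the payload, as an integrity record.
    return zlib.crc32(data).to_bytes(4, "big") + data

def read_block(block: bytes) -> bytes:
    stored = int.from_bytes(block[:4], "big")
    payload = block[4:]
    if zlib.crc32(payload) != stored:
        raise IOError("data corrupted since it was written")
    return payload

block = write_block(b"payload")
print(read_block(block))                              # intact data passes the check

corrupted = block[:-1] + bytes([block[-1] ^ 0xFF])    # simulate corruption in transit
try:
    read_block(corrupted)
except IOError as err:
    print("detected:", err)
```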

Comparison between two operating systems: Windows and Linux

Comparisons between the Microsoft Windows and Linux computer operating systems are a common topic of discussion among their users. Currently, Windows is the dominant proprietary operating system for personal desktop use (in terms of desktop installations), while Linux is the most prominent free software operating system. Both operating systems not only compete for user base in the personal computer market but are also rivals in the server and embedded systems markets.

User interface

Graphical user interface

Windows: The Windows Shell. The window manager is the Desktop Window Manager on Windows Vista, and a stacking window manager built on top of GDI in older versions. The desktop environment may be modified by a variety of third-party products such as WindowBlinds, or completely replaced, for example by Blackbox for Windows or LiteStep. With Windows Server 2008 and later server releases, there is also the option of running "Server Core", which lacks the standard window manager.[22] The graphics drivers, subsystem, and core widgets are included with all installations, including those used as servers.

Linux: The KDE Plasma Desktop. A number of desktop environments are available, of which GNOME and KDE are the most widely used. By default, they use Metacity and KWin respectively as their window managers, though these can be replaced by other window managers such as Compiz Fusion. Other desktop environments and window managers include Xfce, LXDE, Enlightenment, Xmonad, Openbox, Fluxbox, etc. The X Window System runs in user space and is optional.[23] Multiple X Window System instances can run at once, and it is a fully networked protocol. See also: Comparison of X Window System desktop environments.

Command-line interface

Windows: A sample Windows PowerShell session. The Command Prompt exists to provide direct communication between the user and the operating system. A .NET-based command-line environment called Windows PowerShell has been developed. It differs from Unix/Linux shells in that, rather than using byte streams, the PowerShell pipeline is an object pipeline; that is, the data passed between cmdlets are fully typed objects. When data are piped as objects, the elements they encapsulate retain their structure and types across cmdlets, without the need for any serialization or explicit parsing of the stream. Cygwin or Microsoft's own Services for UNIX provides a bash terminal for Windows.[citation needed] A POSIX subsystem is built in but not enabled by default. The console can execute up to four kinds of environments: MS-DOS scripts under NT or via Command.com running on NTVDM, NT shell scripts, and OS/2 console scripts. Windows Script Host is included in Windows 98 and newer versions.

Linux: A sample Bash session. Linux is strongly integrated with the system console. The command line can be used to recover the system if the graphics subsystem fails.[24][25] A large number of Unix shells exist, the majority being "Bourne shell compatible" shells, of which the most widely used is GNU Bash. Alternatives include the feature-rich Z shell, as well as shells based on the syntax of other programming languages, such as the C shell and the Perl Shell. Many applications can be scripted through the system console;[26] there are many small and specialized utilities meant to work together and to integrate with other programs. This is called the toolbox principle.

Notes: A command-line interface, typically displayed in a system console or terminal emulator window, allows users to tell the computer to perform tasks ranging from the simple (for example, copying a file) to the complex (compiling and installing new software). Shells are powerful but can be confusing to new users. Some complex tasks, such as piping or scripting, are more easily accomplished through shells than through a GUI. See also: Comparison of computer shells.
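The toolbox principle mentioned above is easiest to see in a pipeline. The passwd-style sample data below is made up for illustration; each small utility does one job, and pipes compose them into a larger task (here: tallying which shells are in use).

```shell
# Small single-purpose utilities composed with pipes (toolbox principle).
printf 'root:/bin/bash\ndaemon:/usr/sbin/nologin\nalice:/bin/bash\nbob:/bin/zsh\n' \
  | cut -d: -f2 \
  | sort \
  | uniq -c \
  | sort -rn
```

The same counting task in a GUI would require purpose-built features; in the shell it falls out of composing four generic tools.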

Installation

Ease of install

Windows: On Windows Server 2003 and prior, the installation is divided into two stages: the first text-mode, the second graphical.[27] On Windows Vista and newer, the installation is single-stage and graphical. Some older versions require third-party drivers (for example, supplied on driver floppy disks, or slipstreamed into a new installation CD) when using a large number of SATA or SATA2 drives or RAID arrays.[28]

Linux: Varies greatly by distribution. Most distributions intended for new or intermediate users provide simple graphical installers. General-purpose distributions offer a live CD or GUI installer (SuSE, Debian, Pardus, PCLinuxOS, Mandriva, Ubuntu, Fedora, etc.); others offer a menu-driven installer (Vector Linux, Slackware, Debian); while others, targeting more specialized groups, require source to be copied and compiled (Gentoo). The system can also be built completely from scratch, directly from source code (Linux From Scratch).[29][30][31]

Drivers

Windows: The Windows installation media usually contain enough drivers to make the OS functional. To this end, "generic" drivers may be used to provide basic functionality. Drivers can later be upgraded from the manufacturer, and Windows Update contains many updated drivers that can be installed after the base OS is in place. Drivers are almost always closed-source, maintained and published by the manufacturer of their respective devices. Recent versions of 64-bit Windows force all drivers to be signed, giving Microsoft the sole ability to authorize drivers; this feature cannot be easily overridden by system administrators.[32][33]

Linux: Linux kernels in most distributions include the majority of available drivers as modules; hardware is detected and drivers are loaded at boot, with usually little or no user interaction required. These drivers are generally written by someone working for the hardware manufacturer or by someone in the user community skilled in doing so; usually the drivers are included in the kernel (open source), and therefore do not require additional media or any user interaction. A few hardware manufacturers (Broadcom, Nvidia) have proprietary drivers which require manual installation. Prior to the introduction of DKMS, third-party kernel modules had to be manually updated when the kernel was upgraded.

Installation via live environments

Windows: May be installed through the Windows Preinstallation Environment or BartPE; however, only the former is endorsed by Microsoft. By license, only Microsoft-certified system builders (OEM companies) are allowed to use the WinPE disk for installation; end users are not allowed to use the WinPE installation environment.

Linux: Almost all Linux distributions now have a live CD that may be used for testing, installation, or recovery.[34]

Pre-installed software

Windows: Some multimedia and home-use software (IE, Media Player, Notepad, WordPad, Paint, etc.) plus OEM-bundled software. Windows Vista includes IE7, Windows Mail, Windows Media Center, etc., depending on which edition is purchased. It does not include office suites or advanced multimedia software. However, Microsoft has licensed decoders for a number of patented audio and video coding methods, including the MP3 audio format, and Windows is able to play a number of patented formats by default.

Linux: All main distributions contain numerous programs: multimedia, graphics, internet, office suites, games, system utilities, and alternative desktop environments. Some distributions specialise in education, games, or security. Most distributions give users the choice of which bundled programs to install, if any.

Notes: Microsoft's methods of bundling software were deemed illegal in the case United States v. Microsoft.[35]

Not pre-installed software

Windows: A massive pool of both proprietary software (including shareware and freeware) and free software. Programs usually come with the required libraries and are normally installed easily, though most programs must be individually installed. Uninstallation can be of varying difficulty depending on which of many installer methods was used; components and registry entries may be left behind. Windows has a built-in installer program, and software to be installed has an installer "wrapper" that interfaces with the Windows Installer to accomplish installation. Not all Windows software uses the install manager.

Linux: A massive pool of free software and some proprietary software covering a wide range of uses. A Microsoft employee wrote in an internal report in 1998 that "Most of the primary apps that people require when they move to Linux are already available for free."[36] Using free Windows-compatibility layers like Wine, some Windows software can also be run, often to a lesser degree, on Linux. Third-party software is usually listed in, and integrated with, a packaging system built into the OS. Less popular programs, which are not in the distribution's repositories, are often provided in a form (such as the DEB format or the RPM (Red Hat Package Manager) format) which can be installed easily by the package manager. If no precompiled package exists, programs can be more or less automatically built from the source code. Most software is installed non-interactively to a default configuration.

Notes: Linux distributions cannot lawfully include MP3 or MPEG-4 file decoders in a majority of countries, as it would violate the Patent Cooperation Treaty. There is nothing preventing a user from installing these decoders; however, the user assumes all liability for installing said software. Media players (such as Rhythmbox) for free alternative audio/video formats are available on Linux, but these players are unable to decode patented formats, such as MP3, without installing additional plugins.[37] In particular, with the MP3 file format, many companies claim patents relevant to the format. See Patent issues with MP3 for more information.
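The dependency handling a package manager performs when installing software can be sketched as a topological ordering. The package names and dependency graph below are made up for illustration; real package managers (APT, RPM) also handle versions, conflicts, and downloads, but the core ordering problem looks like this.

```python
# Illustrative dependency resolution: install dependencies before the
# package that needs them (a depth-first topological order).
def install_order(package, deps, seen=None, order=None):
    seen = set() if seen is None else seen
    order = [] if order is None else order
    if package in seen:
        return order                            # already scheduled
    seen.add(package)
    for dep in deps.get(package, []):
        install_order(dep, deps, seen, order)   # dependencies go first
    order.append(package)
    return order

deps = {
    "media-player": ["gui-toolkit", "codec-lib"],
    "gui-toolkit": ["libc"],
    "codec-lib": ["libc"],
}
print(install_order("media-player", deps))
# ['libc', 'gui-toolkit', 'codec-lib', 'media-player']
```

Note that the shared dependency "libc" is scheduled only once, even though two packages require it.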

Partitioning

Windows: Expanding NTFS partitions is possible without problems, and on Vista it is possible to shrink partitions as well. Dynamic Disks provide dynamic partitioning. Third-party tools are available that have more features than the built-in partitioning tools.

Linux: Most file systems support resizing partitions without losing data. LVM provides dynamic partitioning. All Linux distributions bundle partitioning software such as fdisk or GParted.

File systems

Windows: Natively supported: NTFS, FAT, ISO 9660, UDF, and others; third-party drivers are available for ext2, ext3, ReiserFS, HFS, and others. Windows can read and write ext2 and ext3 file systems with third-party drivers such as FS-driver or Ext2Fsd, and ReiserFS through rfstool and related programs.

Linux: Natively supported: ext2, ext3, ext4, ReiserFS, FAT, ISO 9660, UDF, NFS, NTFS (incomplete), JFS, XFS, and others; many additional filesystems (most notably NTFS using NTFS-3G, and ZFS) are available using FUSE. Archives and FTP sites can also be mounted as filesystems.

Boot loader

Windows: May boot multiple versions of Windows through the Windows Boot Manager in Windows Vista and newer, or the earlier boot loader NTLDR in Windows Server 2003 and prior. Graphical configuration tools are available for both, such as EasyBCD for the Windows Boot Manager and MSConfig for NTLDR, which can chain-load multiple non-NT environments, including Linux, by referring to volume boot records from those environments saved on the Windows partition.[38]

Linux: May boot multiple operating systems through numerous bootloaders such as LILO and GRUB. With these, it is possible to choose among multiple installed kernel images at boot time. Graphical configuration tools for GRUB are available, including KGRUBEditor[39] (KDE) and GrubConf[40] (GNOME). GRUB can also accept arbitrary one-time configurations at boot time via the GRUB prompt. GRUB and LILO also support booting non-Unix operating systems via chain loading; for a Windows and Linux dual-boot system, it is often easiest to install Windows first and then Linux, because Linux installers such as Ubuntu's will automatically detect and set up other operating systems for dual/multiple boot with Linux.[41]

Accessibility and usability

A study released in 2003 by Relevantive AG indicates that “The usability of Linux as a desktop system was judged to be nearly equal to that of Windows XP”.[43]

User focus

Windows: Mostly consistent. Inconsistencies appear primarily through backports, that is, software ported from newer operating systems to older ones. For example, software ported from Vista to XP must follow the Vista guidelines, those of the newer system (IE7 and Windows Media Player 11 are examples of this).[citation needed] However, Microsoft continually pushes for consistency between releases with guidelines for interface design; the latest are the Windows Vista User Experience Guidelines.[44] Their focus is on consistency and usability, but with increased concern for safety in new versions. Third-party applications may or may not follow these guidelines, may have their own guidelines, or may not follow any rules for interface design.

Linux: The quality of graphical design varies between desktop environments and distributions. The two biggest desktop environments (GNOME and KDE) have clearly defined interface guidelines, which tend to be followed consistently and clearly.[45][46] These provide consistency and a high grade of customizability in order to adapt to the needs of the user. Distributions such as Ubuntu, SuSE, Fedora, or Mandriva take this one step further, combining well-functioning usability and safety. However, inconsistencies may appear, since GNOME-based programs, following different guidelines, look notably different from KDE programs. There are other environments and window managers, usually targeting professionals or minimalist users, featuring some very powerful programs with rudimentary, minimalist graphical front ends and focusing much more on performance, small size, and safety; Window Maker and the Fluxbox/Openbox/Blackbox environments are such examples. Some other environments fit between the two models, giving power, eye candy, and simplicity (Enlightenment/E17, Xfce). Some graphical environments are targeted at mouse users only (Fluxbox), others at keyboard users only (Ratpoison), others at either. Certain graphical environments are also designed to be as resource-conservative as possible, so as to run on older machines.

Consistency between versions

Windows: User interaction with software is usually consistent between versions, releases, and editions.

Linux: Consistency ranges from high to poor between distributions, versions, window managers/desktop environments, and programs. Software is generally highly user-customizable, and the user may keep the customizations between versions.

Consistency between applications

Windows: All Microsoft software follows the same GUI guidelines, although not all software developed for Windows by third parties follows them. As stated above, backports tend to follow the guidelines of the newer operating system.

Linux: Highly consistent within KDE and GNOME. However, the vast amount of additional software that comes with a distribution is sourced from elsewhere; it may not follow the same GUI guidelines, or it may cause inconsistencies (e.g. a different look and feel between programs built with different widget toolkits).

Notes: Though Windows' GDI and most widget toolkits in Linux allow applications to be created with a custom look and feel, most applications on both platforms simply use the default look and feel. However, there are exceptions, like FL Studio for Windows and LMMS for Linux.

Customization

Windows: By default, Windows only offers customization of the size and color of graphical elements, and it is typically not possible to change how the interface reacts to user input. A few third-party programs, like WindowBlinds or LiteStep, allow more extensive customization, but extreme changes are usually out of reach. It is not possible to customize applications that do not use the default look and feel beyond the options the specific application offers.

Linux: Linux offers several user interfaces to choose from. Different environments and window managers offer various levels of customizability, ranging from colors and size to user input, actions, and display.

Accessibility Both Windows and Linux offer accessibility options,[47] such as high contrast displays and larger text/icon size, text to speech and magnifiers.


Stability

General stability

Windows: Windows operating systems based on the NT kernel (including all currently supported versions of desktop Windows) are technically much more stable than some older versions (including Windows 3.1 and 95/98). Installing unsigned or beta drivers can lead to decreased system stability (see below).

Linux: A Linux window manager, a key component of the X Window-based GUI system, can be highly stable or quite buggy,[citation needed] but the more common ones are stable. Mechanisms to terminate badly behaving applications exist at multiple levels, such as KSysGuard and the kill command. Because Linux can fall back to a text-based system if the graphics system fails,[24][25] the graphics system can be easily restarted following a crash without a whole-system reboot.

Notes: Aside from intrinsic OS stability, instability can be caused by poorly written programs. Because Linux's graphics system is decoupled from the kernel, it can usually be restarted without affecting non-graphical programs and services running under other shells, and without a full system restart.[48]

Device driver stability

Windows: Device drivers are provided by Microsoft or written by the hardware manufacturer. Microsoft also runs a certification program, WHQL Testing, through which most drivers are digitally signed by Microsoft as compatible with the operating system, especially on 64-bit versions. This ensures a maximum level of stability.

Linux: Some vendors contribute free drivers (Intel, HP, etc.) or provide proprietary drivers (Nvidia, ATI, etc.). Unlike on Windows, however, kernel developers and hobbyists write many or most device drivers; in these drivers, any developer is potentially able to fix stability issues and other bugs. Kernel developers do not support the use of drivers that are not open source, since only the manufacturer can fix stability issues in closed-source drivers.[49]

Notes: Crashes can be caused by hardware problems or poorly written device drivers. Both operating systems, utilizing aspects of a monolithic kernel architecture, run drivers in the same address space as the kernel, so buggy device drivers can lead to crashes or hangs.

Downtime

Windows: Reboots are usually required after system and driver updates. Microsoft has its hotpatching[50] technology, designed to reduce downtime.

Linux: Linux itself needs to restart only for kernel updates.[51] Moreover, a special utility (kexec) can load a new kernel and execute it without a hardware reset, so a system can stay up for years without a single hardware reboot, reducing downtime. For minor updates such as security fixes, Ksplice allows the Linux kernel to be patched without a reboot. System libraries, services, and applications can mostly be upgraded without restarting running software (old instances keep using the replaced versions).

Recovery

Windows: In modern, NT-based versions of Windows, programs that crash may be forcibly ended through the Task Manager, reached by pressing CTRL+SHIFT+ESC or CTRL+ALT+DEL. Should this fail, third-party applications can also be used. However, if a badly behaving application hangs the entire GUI, it is difficult or impossible to recover without restarting the entire computer, since there is no text-based management console independent of the GUI to resort to.

Linux: All processes except for init and processes in D or Z state may be terminated from the command line. If the GUI hangs, on most distributions CTRL+ALT+F1 takes the user to a terminal, where the offending process can be killed and the GUI restored. Applications can also be closed via the GUI. The optional SysRq facility allows low-level system manipulation and crash recovery. The entire graphical subsystem can be restarted without the need for a whole-system shutdown; reboots are seldom required.[52][53] Additionally, Linux live CDs, if equipped with the correct tools, can be used to repair a broken OS if the hard drive is mountable.[54]
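Terminating a misbehaving process from the command line looks like this. A background sleep stands in for a hung application here; on a real system one would find the PID with ps or pgrep first.

```shell
# Kill a process by PID, as one would for a hung application.
sleep 300 &
PID=$!                      # PID of the "hung" process
kill "$PID"                 # send SIGTERM
wait "$PID" 2>/dev/null     # reap it; returns once the process is gone
echo "process $PID terminated"
```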

Unrecoverable errors

Windows: If the kernel or a driver running in kernel mode encounters an error under circumstances in which Windows cannot continue to operate safely, a "bug check" (colloquially known as a "stop error" or "Blue Screen of Death") is thrown. A memory dump is created and, depending on the configuration, the computer may then automatically restart. Additionally, automatic restart can be applied to services.

Linux: The Unix equivalent of the Windows blue screen is known as a kernel panic. The kernel routines that handle panics are usually designed to output an error message to the console, create a memory dump, and then either halt the system or restart automatically.

Performance

Process scheduling

Windows: NT-based versions of Windows use a CPU scheduler based on a multilevel feedback queue, with 32 priority levels defined. The kernel may change the priority level of a thread depending on its I/O and CPU usage and on whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O-bound threads and lowering that of CPU-bound threads, to increase the responsiveness of interactive applications.[55] The scheduler was modified in Windows Vista to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine.[56]

Linux: Linux kernel 2.6 once used a scheduling algorithm favoring interactive processes. Here "interactive" is defined as a process that has short bursts of CPU usage rather than long ones. It is said that a process without root privilege could take advantage of this to monopolize the CPU[57] when the CPU time accounting precision was low. However, the Completely Fair Scheduler, now the standard scheduler, addresses this problem.
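The multilevel-feedback-queue idea can be sketched as follows. This is a toy model under assumed parameters (three queues, a fixed time quantum, demote-on-full-slice), not the actual Windows scheduler: jobs that burn their whole slice sink to lower priority, so short interactive bursts finish quickly.

```python
# Toy multilevel feedback queue (illustrative only).
from collections import deque

def mlfq_run(jobs, quantum=2, levels=3):
    queues = [deque() for _ in range(levels)]
    for name, burst in jobs:
        queues[0].append([name, burst])     # everyone starts at top priority
    finish_order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        job = queues[level].popleft()
        job[1] -= quantum                   # run the job for one time slice
        if job[1] <= 0:
            finish_order.append(job[0])     # job is done
        else:
            demoted = min(level + 1, levels - 1)   # used its whole slice: demote
            queues[demoted].append(job)
    return finish_order

print(mlfq_run([("interactive", 2), ("cpu_bound", 6), ("short", 1)]))
# ['interactive', 'short', 'cpu_bound'] -- the long CPU-bound job finishes last
```

The CPU-bound job is repeatedly demoted and finishes last, while the two short jobs complete within their first slice, which is exactly the responsiveness property described above.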

Memory management / disk paging

Windows: The Windows NT family (including 2000, XP, Vista, Win7) most commonly employs a dynamically allocated pagefile for memory management. A pagefile is allocated on disk for less frequently accessed objects in memory, leaving more RAM available to actively used objects. This scheme suffers from slowdowns due to disk fragmentation (if a variable-size paging file is specified), which hampers the speed at which objects can be brought back into memory when they are needed. Windows XP and later can defragment the pagefile and, on NTFS filesystems, intelligently allocate blocks to avoid this problem. Windows can be configured to place the pagefile on a separate disk or partition.[58] However, this is not the default behavior, because if the pagefile is on a separate partition, Windows cannot create a memory dump in the event of a stop error. On the NT family, executed programs become part of the paging system (to improve performance). Programs cannot normally access each other's address space. It is possible to configure the OS to have no additional paging file.[clarification needed] The Windows 3.1/95/98/ME family does not have true virtual memory and uses a simpler swapping scheme, easily leading to needless swaps and disk fragmentation. Programs in this family can access each other's address space.[59]

Linux: Most hard-drive installations of Linux use a "swap partition", where the disk space allocated for paging is separate from general data and is used strictly for paging operations. This reduces slowdown due to disk fragmentation from general use. As with Windows, for best performance the swap partition should be placed on a hard drive separate from the primary one. Linux also allows adjusting the "swappiness", i.e. how readily data is moved out to swap (this is not equivalent to adjusting the virtual memory size). Windows does not support such a feature.

Notes: The ideal solution performance-wise is to have the pagefile on its own hard drive, which eliminates both fragmentation and I/O contention issues.

Q) What are applications? How are they different from system software?

Ans) Application software is computer software designed to help the user to perform a singular or multiple related specific tasks. Such programs are also called software applications, applications or apps. Typical examples are word processors, spreadsheets, media players and database applications.

Terminology

In computer science, an application is a computer program designed to help people perform a certain type of work. An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and a programming language (with which computer programs are created). Depending on the work for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements.

Application software is software that applies to a real-life task. For example, Microsoft Word is used to create documents similar to those you would otherwise create manually on paper, and accounting software performs bookkeeping that would similarly be done by hand in an account book.

Thus, application software is easier for ordinary users to understand, because it deals directly with familiar real-life tasks.


System software, on the other hand, deals with the system or the hardware itself: for example, disk management tools used to partition or format a disk.

System software is recommended for use only by advanced users who have a deeper knowledge of the computer system.

In simpler terms, a car has controls such as the steering wheel, accelerator and brakes. To drive the car you need to know only these controls. But the car is not directly moved by the brakes, accelerator and steering wheel; these controls are connected to the engine and other machinery, and it is this machinery that actually drives the car. How this machinery works is not known to every person who drives the car; it is known to mechanics.

We can say that the steering wheel, brakes and accelerator are the application, and the engine and machinery are the system.

Q) Discuss various phases of development of modern computing. How has this affected the way business is conducted? Ans)

1. Introduction

For the past couple of years, the Chief Minister of Andhra Pradesh has used Information Technology (IT) as a "mantra" to get his state to march forward. He has drawn considerable attention in India as well as the rest of the world and is today often referred to as the CEO of Andhra Pradesh. The new BJP-led government at Delhi, recognising the role that IT is playing in the world today in enabling countries to strengthen their technological leadership, and the considerable contributions that Indian professionals are making in this arena both in India and abroad, set up a high-powered IT task force, to break India’s shackles and make her "a Global IT Superpower and a front-runner in the age of Information Revolution".

The task force has moved fast and in a remarkably short time has come up with two reports. The first report, IT Action Plan, consisting of 108 recommendations, suggests "revisions and additions to the existing policy and procedures for removing bottlenecks and achieving a pre-eminent status for India". The second report, referred to as a Basic Background Report on IT Hardware Development, Production and Export, calls for a paradigm shift in the IT hardware industry so that it "can survive the future shock of fast changing prices, technological obsolescence and an ever-expanding horizon of highly innovative industry", for the creation of a proper investment climate, and for the streamlining of procedures to minimise uncertainty, the grey market, avoidable licensing and purposeless inspection. The aim is to make India the "number one provider of IT products" in the world.


Do these IT reports constitute a first step in making India a global IT superpower? Can IT be used in India to transform the country by providing a new vision to the youth of the country, enabling them to stand up, no longer feeling inferior to others in the world? This article addresses such questions in the context of the IT task force reports. First, some salient features of the report are highlighted. This is followed by a brief analysis of the IT infrastructure today along with the efforts made by different groups and agencies over the last 10–15 years in overcoming the bottlenecks. Analysing our strengths, the article concludes by pointing to the directions that need to be taken to achieve our dream of transforming India.

2.0 The IT Task Force

2.1 The IT Action Plan Report

In spite of efforts by the Department of Telecommunications and others over the last fifteen years, the info-infrastructure in India is considered by most to be grossly inadequate. The hold that DoT, MTNL, and VSNL have over this sector is considered by many, especially in the Indian corporate sector, to be a serious bottleneck. There is a growing demand to open up, let in finance and technology, and where necessary, management too, in order to alleviate the present situation. The task force recommendation on info-infrastructure aims to address this feeling and calls for liberalisation, especially in the area of data communications. It calls for removal of restrictions, declaring that Internet Service Providers (ISPs) be allowed to operate with zero license fee, the monopoly of VSNL on International Gateway for Internet be withdrawn, long-distance backbone monopoly of DoT be removed allowing Railways, Defence, State Electricity Boards and others to host fibre-optic backbones, last-mile access be freely permitted and radio frequency band in the range of 2.4 GHz to 2.483 GHz be opened for public wireless. The report also calls for removal of restrictions on electronic commerce.

The task force report calls for a yearly export target in IT software and services of $50 billion (Rupees two hundred thousand crore) by 2008. In the 39 recommendations (out of the report's total of 108) under the title IT for all by 2008, it calls for rationalisation of the duty structure, the Companies Act, and financial regulations to enable this to happen.

The report has a section on operation knowledge, consisting of 29 recommendations. Recognising that IT is a "frontier area of knowledge and a critical tool for assimilating, processing and productivising all other spheres of knowledge", it calls for a national campaign for universal computer literacy. It talks about schemes that will help students, teachers and schools purchase computers and promises to have computers and Internet in every school by 2003. The report talks about strengthening IT programs in various universities and about starting SMART schools in each State. According to the report, the Government intends to promote IT in rural India, use of Indian languages for computers, and indigenous technologies like corDECT.

The last section of the IT report calls for bringing IT into Government, spending 1–3% of the budget of each ministry and department for IT applications and has called for National Informatics Centres and others to establish framework contracts, which could be used by Government agencies to obtain IT consultancy, products and services at lower cost. The report


ends with a call for amendment of the Indian Telegraph Act and drafting of cyber laws at the earliest.

2.2 Report on IT Hardware Development Production and Export

This report starts with a question "given the same degree of incentives and simplification of procedures bestowed on the software industry, is there a feasible policy regime which can give similar buoyancy to the Indian IT hardware industry, in spite of the capital intensiveness of the industry as a whole without conflicting with the growth of the software and IT service industry?" The report states that the protection to hardware industry in the past (in the form of high import duties) resulted in high cost of PCs and other IT products, which adversely affected the growth of IT in India. This resulted in smaller volume of production which made the industry unviable. The IT hardware industry thereby got transformed primarily into direct or indirect dealerships of foreign brands. The report points out that with government advancing zero import duty date to Jan. 2002 from Jan. 2005 (proposed by WTO), only those hardware companies which carry out higher value addition are likely to survive.

The report notes that making "IT hardware manufacture viable is a major challenge": it must reconcile the import needs of software development, the hardware industry's need for rational duties (such that the duty on inputs is always less than that on finished goods), and the government's need to increase revenue.

It introduces the concept of the soft-banding IT unit, which would give the IT hardware industry, serving both the Indian and the export market, the same facilities as are given today to export-oriented units. Such a facility is likely to put Indian units on a level playing field with competing foreign manufacturers. According to the report, linking duty for products in these units to the value addition carried out makes more economic sense than the alternative of sacrificing India's entire hardware industry to imports.

The report also suggests various fiscal measures, such as giving deemed export status to all telecom products manufactured in the country and allowing 90% depreciation over three years in view of the fast obsolescence of IT products. It calls for income tax exemption for a period of 5 years for IT product ventures in soft-banding IT units, and a 20% investment subsidy for new investments in excess of Rs.30 crores.

The report also calls for considerable procedural simplifications in customs and excise regulations, addresses a number of banking issues which would help hardware industry, and calls for high-tech habitats. The report recognises that India virtually imports all its components including Integrated Circuits, and calls for special action by the Government in changing this situation.

The task force report recognises that "Design is the name of the game in the IT world" and calls for India to become the "number one IT design centre" in the world, on the way to becoming the "number one provider of IT products" to the world. Noting that India's core competence is technically trained manpower, the report asserts that the above goals are achievable and calls for financial incentives for Research and Development.


Q) Critically evaluate the Indian IT Industry and its growth. What do you think is responsible for such a growth of this industry? Specifically identify the major components of Indian IT industry and respective key players.

3.0 The Current Scenario

Export target for IT products and services of $50 billion a year, making India the number one IT design centre, and the number one provider of IT products, are indeed goals to cherish. Before commenting on what needs to be done to move towards such goals and to discuss whether the IT report points in the right direction, let us take a brief look at where we are today.

3.1 Growth of the Software Industry in India

The software industry in India gained recognition in the early eighties, as companies took up export of trained software manpower, especially to USA. Very soon, instead of just exporting persons, several companies started taking up software projects at customer sites, and sent their professionals to carry out this task. Starting with routine jobs, most companies graduated to more and more sophisticated tasks and India started getting recognised as having special talent for software development and management of software projects.

It was only in the early nineties, after the Indian software industry got sufficient recognition, that Indian companies were able to win contracts in a large way to carry out software projects off-shore (in India). From then on, projects have gotten more sophisticated and bigger. Today, even though the software tasks carried out by India for the West may amount to a small portion of the worldwide IT industry, Indian companies and professionals are regarded as amongst the best in the world.

However, having achieved considerable success, most front-running software companies are dissatisfied with their performance. They recognise that they have come up with very few


products that they own. Although they may have made significant contributions to many products on the shelves, hardly any carry their brand names. They are eager to make and own products, but they have little experience in marketing products worldwide. The home market is still too small to give these products a trial ground, as well as a little protection, before they must face fierce competition.

Product ownership is imperative if the Indian software industry is to take a major leap forward. Certain parts of the IT task force report aim to address this need by proposing liberalisation. However, the report does not address how to enable Indian companies to market their products worldwide. Maybe this is best left to the innovativeness of the industry itself.

Another problem that Indian software houses face is the large-scale migration of software manpower. With the dollar continually appreciating vis-a-vis the rupee, the continued large-scale shortage of software professionals in the West, and the large income difference for these professionals between India and the West, a large section of Indian software personnel stay in India only until they get trained and find an opportunity to move abroad. Most software companies have unsuccessfully tried restrictions such as bonds to stem this outflow, but have slowly come to live with the phenomenon. What companies now offer instead is more challenging work, better remuneration and, more recently, a part of the company's stock, to give employees a sense of ownership. The IT task force moves in the right direction by calling for liberalisation of stock-ownership rules for employees. The task force is, however, silent on larger issues, such as the impact that the increasing earnings of professionals in the software sector, and the growing difference between incomes in this sector and other sectors of the Indian economy, will have on Indian polity and society.

3.2 IT in India

What is normally regarded as India’s greatest weakness — the large population — can also be a strength. While the growing population has created a lot of problems in the country, it also represents a large potential market. This potential can be converted into reality only if the products are affordable to a large section of its people, and this is indeed a difficult task since most people of the country can afford very little. This is one of the reasons why India is yet to be converted into a large internal market, in which Indian companies can learn and consolidate. Without this, it is difficult to compete in the world market.

After a new product is introduced in the West, it is continuously innovated upon to bring down the price till it is widely affordable. Beyond this, there is little motivation to further bring down the price. All innovations thereafter are geared to improve features while the price is kept constant. Unfortunately, this affordable price level in the West is affordable to only the top few percent of the population in a country like India. To make it affordable to a larger cross-section, innovations different from those pursued in the West are required. The price of the product has to be brought down to a third or a fourth (in the process changing the shape of the product itself) of its price in the West to make it affordable to even 20% of the Indian population. However, 20% of the Indian population is a large market, equal in size to the West, and can fuel unprecedented growth. This approach is daunting, since it requires us to take a few steps ahead of what is


normally done in the West. This approach alone can make IT (or for that matter most other products or services) available to wide sections of people in the country. Without such steps, "IT for all" will remain a slogan used as a cover for policies to enrich the lives of a few.

It is not that IT has made no difference so far in India. The introduction of IT has made some noticeable differences: railway ticket booking is probably the most visible example. Even small shops and offices are now installing computers with some home-grown software. There are many small software companies located in garages. They have served the Indian market and have gradually grown. However, these companies rarely have anything in common with the large software export houses. The export houses, with the high salaries paid to their employees, have largely priced themselves out of the Indian market. Expenditure in dollar terms cannot be matched by income in rupees. This is one of the greatest dilemmas facing the Indian IT industry. Unfortunately, the IT task force has not addressed this issue.

3.3 Internet for All

It is Internet access which has transformed computers from mere computing machines to drivers of the information age. The IT task force, therefore, rightly calls for Internet access for all, recognising that access to Internet or lack of it will create tomorrow’s divide between haves and have-nots.

The problem is that widespread Internet access presupposes a widespread telecom network and access to telephones. It is not generally known that a telephone in India costs upwards of Rs.30,000 to install. Taking a mere 15% as yearly finance charges on the investment, and 15% as yearly operation, maintenance and obsolescence charges, an operator requires a minimum revenue of 30% of Rs.30,000, or Rs.9,000, a year from each telephone to break even. This implies that a subscriber's telephone bill needs to exceed Rs.9,000 per year. Now, who in India can afford this? Not more than 1–2% of the population. Even with cross-subsidy (a smaller number of people generating much higher revenues), not more than 3–4% of the people can afford telephones.

How do we talk about providing Internet for all without facing this basic issue? Any alternate access network for Internet alone is unlikely to bring down the cost, compared to a network which provides Internet connectivity along with voice telephony. As it is the access cost which dominates the cost of the telecom network today, a country like India can ill afford two access networks reaching individual homes. Emerging technologies like wireless and cable modems are indeed welcome as they reduce the cost of access, but looking to them to provide an alternate access network, separate from that providing voice telephony, is only going to increase overall costs. Current regulations are the only reason why an access network other than a licensed telephone network cannot be used for voice in addition to data access.

Why is the cost of installing a basic telephone in India as high as Rs.30,000? Because this cost, in the West, amounts to an easily affordable $800, the West has little motivation to bring it down significantly further. The emphasis, instead, is on adding features while keeping the cost constant.


It is here that scientists from countries like India have to wrest the initiative and aim to reduce the per-line cost of telephone and Internet access to a much lower value, say Rs.10,000. At such levels it would be immediately affordable to over 15% of the population, and with cross-subsidy, to a much larger percentage. Further, in such a situation, the market in India alone would be large enough to propel India forward as a world leader, a goal put forward in both IT reports. The task of reducing the cost to Rs.10,000 per line will not be easy, but then when have such changes been easy to accomplish!
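The break-even arithmetic used here and in the previous section can be checked directly; a small sketch using the article's own assumed rates of 15% yearly finance charges and 15% yearly running charges:

```python
# Yearly revenue one telephone line must generate to cover finance
# and operating charges, using the article's assumed rates.
def break_even(install_cost, finance_rate=0.15, opex_rate=0.15):
    return install_cost * (finance_rate + opex_rate)

# At Rs.30,000 per line (the current cost quoted in the text):
print(break_even(30_000))   # prints 9000.0 -- Rs.9,000 per year

# At the Rs.10,000 per line the article sets as a target:
print(break_even(10_000))   # prints 3000.0 -- Rs.3,000 per year
```

At Rs.3,000 a year, the service moves within reach of a far larger share of subscribers, which is the article's central argument.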

Before proceeding to look at some recent efforts in this direction, we present a bird’s-eye view of the relevant events of the last ten to fifteen years.

3.4 Telecom Expansion over the Last Decade

The Department of Telecommunications (DoT) in India has made significant efforts over the last fifteen years towards expanding the telecom network, even though these efforts have fallen well short of rising expectations. One of the most significant initiatives of DoT has been bringing the STD-PCO to every corner of the country, and, to a lesser extent, its Group PBX policy. As individually owned telephones are not widely available and affordable, PCOs become a means to reach a much wider section of people. This is correctly recognised by the IT task force as it plans to encourage expansion of STD PCOs into Internet kiosks. The step is in the right direction, both in making the Internet more affordable and in creating a stake for small businesses to run it well. The Group PBX scheme of DoT had a similar aim, but the poor revenue-sharing arrangement, where the Group PBX operator gets less than 20% of revenue while putting up the full access network (which amounts to a large percentage of the cost of installing a telephone line), has not made it viable. It is hoped that DoT and the IT task force will review this revenue-sharing percentage soon, especially as the Group PBX operator could also double as the local access provider for the Internet.

The second major initiative of DoT has been the privatisation of telecom equipment manufacturing, whereby DoT no longer makes purchases solely from state-owned units. This has resulted in a large number of telecom manufacturing units coming up. While many of these private units have not done well, others have used in-house and out-sourced R&D to reduce costs significantly. Low-cost PCO equipment, analog and digital pair-gain systems, PDH multiplexers, point-to-point microwave links, and multi-access rural radio equipment are some examples. One reason the local manufacturers are struggling today is that liberalisation has sometimes gone to the other extreme, protecting equipment importers over local manufacturers. Another is that they are ill-prepared to cope with fast-changing telecom technology, where software and hardware combine to provide unique telecom and networking products. These companies are fast learning their lessons. It is likely that some of these manufacturers could become the nucleus for making India a leading provider of IT products in the world, as the IT task force report envisages.

The third major initiative of DoT has been to create MTNL as an independent organisation, and to move towards corporatising the DoT itself in the coming years. The creation of MTNL has made a difference to the telecom service in Bombay and Delhi, even though there is still much more ground to cover. Along the same lines, DoT took the initiative, though with extreme


reluctance, to break its own monopoly and invite Basic Services Operators (BSO) to operate a parallel telecom service in each state. Though slow to take off, and plagued by several disadvantages, especially in terms of high connectivity charges and denial of rights to connect to each other, the BSOs are likely to play a very important role in wiring up the country. Surprisingly, the IT task force takes no note of the emerging BSOs, and does not suggest means to channel their energies towards rapidly expanding IT in India.

3.5 R&D initiatives in Telecom and IT

The major Indian initiative in R&D in the telecom sector has been, of course, C-DoT. This initiative delivered not just a product (C-DoT exchanges serve about a third of the 16 million lines in use in India today, which is no mean achievement), but created a belief amongst Indian engineers that they can design an exchange for India which would serve them better than available products.

This was followed by a number of smaller and dispersed efforts (not so well known or documented) to design all kinds of electronic, telecom and IT products. While products like the digital pair-gain systems and multiplexers mentioned earlier are part of such efforts, there have also been numerous cases where companies have designed products for international firms. Today, one finds a company designing a GPS system for the Japanese car industry, another designing a DSP-based motor controller for a leading multinational refrigerator company, and yet others designing PC motherboards, laptop computers and computer networking products. The number of such companies is growing, and India is indeed emerging as a design house, something wished for by the IT hardware report.

Yet another area that deserves mention is the growth of VLSI design houses in the country. Even though the efforts are largely sponsored by multinationals, VLSI design expertise is increasingly available in the country and adds immensely to the vision of India as a design house.

An energetic R&D group that has emerged in recent years is the one led by the Telecommunications & Computer Networks (TeNeT) Group at IIT Madras. Consisting of the TeNeT faculty and project staff at IIT Madras and several R&D companies formed by alumni of IIT Madras, the group's mission is to make possible a hundred million telephone connections and twenty-five million Internet connections in India in less than ten years. It recognises that to make this possible, telecom infrastructure costs have to be brought down drastically, far beyond where the West has stopped. It has worked out a unique institute-industry relationship to develop a range of telecom and computer networking products which compete today with the very best in the world. It has entered into strategic relationships with IC manufacturers abroad to come up with wireless-access, fibre-access and Internet-access systems specific to the needs of developing countries.

Such R&D efforts have resulted in the germination of a seed which could enable India to grow into one of the leading IT design houses of the world. But much is needed to nurture this plant. In a fiercely competitive industry where each design has to stand up successfully in the marketplace, it is not governmental monetary support that is most important. Venture capital will fill this space, and the IT task force does talk about the creation of such venture capital. What is needed is


encouragement, and enabling of the locally designed product to have a good trial in domestic market. It is not tariff protection that is needed, but assistance that neutralises finance muscle, brand-name muscle and the "foreign-products-are-better" mindset ranged against most indigenously developed products. The IT task force has not sufficiently expressed itself on this matter.

3.6 IT Education

The goals that the IT task force has set for itself would require a very large pool of trained personnel. The task force seems to be seized of the matter, as it aims to bring computers and the Internet to every school and to set up SMART schools and institutes in the areas of Telecommunications and Computer Science.

The Indian Institutes of Technology and deemed institutes and universities have indeed provided good training in these areas. The persons graduating from these institutes are recognized as amongst the best in the world. The numbers are not too small (though they can be increased to an extent) and these people could provide the required technological leadership to the country if only they decided to stay in the country. Merely increasing the number trained in such institutions, would not necessarily increase the availability of such manpower in India.

Similarly a large number of private engineering institutions and computer training centres have come up which train a very large number of personnel in the country. The process is continuing, and little needs to be done today to enhance this pace. The problem is that such training is often limited in scope and these institutions churn out what can be described as technician level people. Even though such training is useful, what is needed is a large body of middle-level people with good knowledge and skills. There is little effort in this direction and it is left for the persons to train themselves on the job to reach the next level. This is largely inadequate.

If the targets set by the IT task force are to become a reality, one needs to concentrate on improving the middle-level colleges throughout the country. Good knowledge and skills, rather than a degree or certificate, have to be the hallmark. Hopefully, the IT institutions being set up in different states will do exactly this. Continuing education courses, open universities and Internet-based education will also, hopefully, focus on this.

The Outsourcing History of India

The idea of outsourcing has its roots in the 'competitive advantage' theory propagated by Adam Smith in his book 'The Wealth of Nations', published in 1776. Over the years, the meaning of the term 'outsourcing' has undergone a sea change. What started off as the shifting of manufacturing to countries providing cheap labor during the Industrial Revolution has taken on a new connotation in today's scenario. In a world where information technology has become the backbone of businesses worldwide, 'outsourcing' is the process through which one company hands over part of its work to another company, making it responsible for the design and implementation of certain business processes under the requirements and specifications of the outsourcing company.

This outsourcing process is beneficial to both the outsourcing company and the outsourcing service provider. In an outsourcing relationship, the service provider enables the outsourcer to reduce operating costs, increase quality in non-core areas of business, save on effort and increase productivity.

Although the IT industry in India has existed since the early 1980s, it was the early and mid 1990s that saw the emergence of outsourcing. One of the first outsourced services was medical transcription, but outsourcing of business processes like data processing, medical billing and customer support began towards the end of the 1990s, when MNCs established wholly owned subsidiaries which catered to the offshoring requirements of their parent companies. Some of the earliest players in the Indian outsourcing market were American Express, GE Capital and British Airways.


4.0 Where Does One Go from Here?

There is no doubt that the IT task force has started with the right intentions. It wants India to attain the leading position in the world in designing and supplying IT products and services. It wants IT to be available to all in the country, with the hope that this can become the vehicle to bring back a sense of national pride. There are enough indications that this is possible, though it is by no means an easy task.

However, the intention is only the first important step. The software export target set by the task force requires, in addition to the liberalisation and simplification of procedures detailed in the report, that Indian companies own and sell software products. Means have to be found to retain some of our best-trained software personnel, and large-scale training, focusing on good knowledge and skills, is a must. India becoming a design house for IT products is a new and powerful concept. There is sufficient initial evidence that India is capable of pursuing this direction, but from concept to reality is a long way, and it requires immense effort. The third target, "India becoming a leading IT product manufacturer", however, seems at present to be a mere noble desire. Over the last ten years, India has moved in the opposite direction, to where it imports all its IT products. This is not to say that we cannot change course now; China has achieved a great deal in this area in a mere five to seven years. But there has to be a will, backed by effort.

Finally, the task is not merely to free the corporate sector and those who can benefit from the removal of various regulations currently slowing them down (a welcome task in itself, but not sufficient to realise the goals set by the IT task force). We need to analyse what needs to be done to put IT in the hands of hundreds of millions of people. A concrete program has to be made so that in some time frame (say 10 years), IT is looked upon by a large section of people as liberating, rather than as yet another technology that pushes them into the category of have-nots. It will not do if the country has to depend forever on imported high-cost telecom infrastructure: the $50 billion software export goal should not be based on a $50 billion import. Unless various sections of our people from all walks of life, from the towns as well as the


villages, participate in this effort, the IT industry will remain a small part of our relatively small economy.