8/6/2019 Imppppp Data
1/24
WEB APPLICATION TECHNOLOGIES
Most modern business software is web-enabled. This article describes how such
applications are implemented using the five main technology stacks.
Types of Web Applications
There are three main types of web applications:
Customer-facing applications are known as ecommerce or B2C sites and use the internet.
These typically present a customer with choices of products or services to buy using a
shopping cart and payment method. Examples are travel reservations,
http://www.amazon.com and http://www.ebay.com.
Employee-facing applications use the intranet in a company. One example is a company's
accounting application. Another might be employee expense reporting. A third might be the
ERP (enterprise resource planning) system. These applications previously operated on
an internal client-server network. They are now web-enabled to make them easier to use
and deploy. Disparate applications, such as ERP and CRM (Customer Relationship
Management) systems, are now being integrated using XML and web services.
Customer-supplier facing applications are known as B2B (Business to Business) sites and
use the extranet (an extension of an intranet that allows outside companies to work in a
password-protected space). B2B sites provide a secure means for sharing selected
information. One example is supply chain software that allows all suppliers to see demand
and inventory in the supply chain. Another example is procurement software that allows a
customer to send RFQs and receive quotes over the web. A third example is collaboration
software that allows companies to share product development and project management
information.
Not all applications fit the above categories. For example, Yahoo! email is not in any of the
above. However, the above categories are representative of the main types of applications,
and the same technologies are used for these other applications.
The 3-tier architecture
Web applications are built using a 3-tier architecture in which the client, server and
database constitute the main elements. This is sometimes called an n-tier architecture
because there can be multiple levels of servers. This architecture is distinct from prior
mainframe (1-tier) and client-server (2-tier) architectures.
Technologies Used to Build Web Applications
Originally, the internet was designed to serve static pages. A rudimentary technology
based on CGI was developed to allow information to be passed back to a web server. During
the last ten years, four main technologies have emerged to replace CGI, and the basic CGI
technology has been further refined, using Perl as the primary programming language. This
has led to five competing technology stacks that differ in the following attributes:
Programming languages (Lang)
Operating system (OS). This can be Linux (L), Unix (U) or Windows (W).
Web server (Server)
Database support (DB)
Sponsoring companies (Sponsors)
The following table summarizes these technology stacks.
Stack       Sponsor      OS     Server      DB          Lang
CGI         Open source  L/U    Apache      Varies      Perl
ColdFusion  Macromedia   W/L/U  ColdFusion  Varies      CFML
LAMP        Open source  L/W/U  Apache      MySQL       PHP
Java/J2EE   Sun, IBM     L/U    J2EE        Varies      Java
.NET        Microsoft    W      ASP.NET     SQL Server  VB.NET/C#
Note that variations have been left off to show the main combinations. These technologies
are quite different, which means that someone who is familiar with one approach faces a
steep learning curve to use a different one. Once an application is developed using one
technology, it is difficult and expensive to convert it to a different one. As a result, many
web application developers have a strong interest in promoting the technology they are
familiar with.
When choosing a technology for a web application, one typically chooses a whole stack.
Trying to use one piece from one stack (e.g. Java/J2EE) and another from a second
(e.g. .NET) is possible, though uncommon.
The following is a more detailed explanation of each of these technology stacks:
CGI/Perl. CGI is the granddaddy of interfaces for passing data from a submitted web
page to a web server. Perl is an open-source language optimized for writing server-side
applications. Together, CGI and Perl make it easy to connect to a variety of databases.
Apache tends to be the web server used because it runs on all major operating systems and
is highly reliable. Other open-source languages such as C and Python can also be used. For
high-end applications, especially e-commerce sites like amazon.com, this technology is used
because it is so powerful. However, other technology stacks can be implemented more easily
and quickly.
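To make the CGI data flow concrete, here is a minimal sketch of a CGI-style response,
written as a shell script for brevity rather than Perl; the sample data name=priya is an
assumption for illustration, since a real web server would supply the query string itself:

```shell
#!/bin/sh
# A web server invokes a CGI program and passes submitted form data
# in the QUERY_STRING environment variable. The program replies with
# HTTP headers, a blank line, and then the response body.
QUERY_STRING="${QUERY_STRING:-name=priya}"   # hypothetical sample data
echo "Content-Type: text/plain"
echo ""
body="You submitted: ${QUERY_STRING}"
echo "$body"
```

Dropped into Apache's cgi-bin directory and made executable, such a script would answer a
form submission; a Perl CGI program reads the same environment variable.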
Macromedia sells a collection of products that make it easy to build small and medium-
sized web applications. The primary tools provided by Macromedia are ColdFusion, an
engine that lets one program in CFML (ColdFusion Markup Language), and Dreamweaver, a
development tool for making web applications. Because Macromedia is a smaller player,
they have focused on trying to make their products compatible with components from other
technology stacks. Macromedia also sells Flash and has tools for using it in web
applications.
Java/J2EE is a robust, well-developed method for creating medium to large web
applications. It has support from a number of large industry players. Sun Microsystems
provides the Java language itself. IBM (WebSphere) and BEA Systems (WebLogic) are two major suppliers
of web application servers and associated software to make it easy to create and manage
these applications. There is a large body of Java programmers available to write the code.
This technology stack works with a variety of databases and is particularly well-tuned to
mainstream commercial databases like Oracle and DB2. IBM has developed a programming
environment called Eclipse that makes it easier to write applications, but in general, Java
is associated with powerful applications built by capable programmers.
LAMP (Linux, Apache, MySQL, PHP) is a relatively new technology stack for building web
applications that has been adopted for many small and medium-size web tasks because: (a)
the entire
technology stack is available through open-source; (b) it works well; (c) it is easy to learn;
(d) it allows one to build a web application quickly; and (e) there are many open source
code samples that can be bolted together to make a full solution. LAMP relies on CGI for
data exchange between the server and browser, but the CGI commands are hidden from
the developer. LAMP doesn't have all of the capabilities of J2EE, but it gains ground every
year. Sites that use PHP can be recognized by the ".php" extension in the page name in the
URL. LAMP has become especially popular with ISVs (independent software vendors)
because they can create an application and sell it without having to pay for the underlying
(open source) software.
Microsoft .NET. Microsoft is using its .NET strategy to take over the server market the
way Windows, Office, and Internet Explorer have taken over the desktop. The stack
comprises
a web server (ASP.NET) and two programming languages (VisualBasic.NET and C#.NET)
that compete against PHP and Java respectively. They also have a database (SQL Server).
Microsoft has
done an excellent job making their products easy to use so a business analyst can create a
web application without needing a programmer.
MS-DOS vs. Linux / Unix
The chart below lists common MS-DOS commands with their Linux / Unix counterparts.
MS-DOS            Linux / Unix
1. attrib         1. chmod
2. backup         2. tar
3. dir            3. ls
4. cls            4. clear
5. copy           5. cp
6. del            6. rm
7. deltree        7. rm -R, rmdir
8. edit           8. vi, pico
9. format         9. fdformat, mount, umount
10. move / rename 10. mv
11. type          11. less
12. cd            12. cd, chdir
13. md            13. mkdir
14. win           14. startx
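A few of the counterparts above can be tried together from a Linux prompt; the directory
name demo is just a scratch name for illustration:

```shell
# md -> mkdir, copy -> cp, move/rename -> mv, attrib -> chmod,
# dir -> ls, deltree -> rm -r
mkdir -p demo                 # DOS: md demo
echo "hello" > demo/a.txt
cp demo/a.txt demo/b.txt      # DOS: copy
mv demo/b.txt demo/c.txt      # DOS: move / rename
chmod 644 demo/a.txt          # DOS: attrib
count=$(ls demo | wc -l)      # DOS: dir (a.txt and c.txt remain)
rm -r demo                    # DOS: deltree
echo "demo held $count files"
```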
/////////////////////////////////
Posted: Thu May 06, 2010 7:12 am Post subject: Introduction to PERL Programming
Subject description: Computer Science
INTRODUCTION
Perl and Networking
Perl (Practical Extraction and Report Language) was designed by Larry Wall. Why would you
want to write networking applications in Perl? The Internet is based on Transmission
Control Protocol/Internet Protocol (TCP/IP), and most networking applications are based on
a straightforward application programming interface (API) to the protocol known as
Berkeley sockets. The success of TCP/IP is due partly to the ubiquity of the sockets API,
which is available for all major languages including C, C++, Java, BASIC, Python, COBOL,
Pascal, FORTRAN, and, of course, Perl. The sockets API is similar in all these languages.
There may be a lot of work involved in porting a networking application from one computer
language to another, but porting the part that does the socket communications is usually
the least of your problems. Because it has the features below, Perl is a good fit for
networking.
A Language Built for Interprocess Communication
Perl was built from the ground up to make it easy to do interprocess communication (the
thing that happens when one program talks to another). As we shall see later in this
chapter, in Perl there is very little difference between opening up a local file for reading and
opening up a communications channel to read data from another local program. With only a
little more work, you can open up a socket to read data from a program running remotely
on another machine somewhere on the Internet. Once the communications channel is open,
it matters little whether the thing at the other end is a file, a program running on the same
machine, or a program running on a remote machine. Perl's input/output functions work in
the same way for all three types of connections.

A Language Built for Text Processing
Another Perl feature that makes it good for network applications is its powerful integrated
regular expression-matching and text-processing facilities. Much of the data on the Internet
is text based (the Web, for instance), and a good portion of that is unpredictable,
line-oriented data. Perl excels at manipulating this type of data, and is not vulnerable to
the type of buffer overflow and memory overrun errors that make networking applications
difficult to write (and possibly insecure) in languages like C and C++.
An Open Source Project
Perl is an Open Source project, one of the earliest. Examining other people's source code is
the best way to figure out how to do something. Not only is the source code for all of Perl's
networking modules available, but the whole source tree for the interpreter itself is
available for your perusal. Another benefit of Perl's openness is that the project is open to
any developer who wishes to contribute to the library modules or to the interpreter source
code. This means that Perl adds features very rapidly, yet is stable and relatively bug free.
The universe of third-party Perl modules is available via a distributed Web-based archive
called CPAN, the Comprehensive Perl Archive Network. You can search CPAN for modules of
interest, download and install them, and contribute your own modules to the archive.
Object-Oriented Networking Extensions
Perl5 has object-oriented extensions, and although OO purists may express dismay over the
fast and loose way in which Perl has implemented these features, it is inarguable that the
OO syntax can dramatically increase the readability and maintainability of certain
applications. Nowhere is this more evident than in the library modules that provide a
high-level interface to networking protocols. Among many others, the IO::Socket modules
provide a clean and elegant interface to Berkeley sockets; Mail::Internet provides
cross-platform access to Internet mail; LWP gives you everything you need to write Web
clients; and the Net::FTP and Net::Telnet modules let you write interfaces to these
important protocols.
Security
Security is an important aspect of network application development, because by definition a
network application allows a process running on a remote machine to affect its execution.
Perl has some features that increase the security of network applications relative to other
languages. Because of its dynamic memory management, Perl avoids the buffer overflows
that lead to most of the security holes in C and other compiled languages. Of equal
importance, Perl implements a powerful "taint" check system that prevents tainted data
obtained from the network from being used in operations such as opening files for writing
and executing system commands, which could be dangerous.
Performance
A last issue is performance. As an interpreted language, Perl applications run several times
more slowly than C and other compiled languages, and about on par with Java and Python.
In most networking applications, however, raw performance is not the issue; the I/O
bottleneck is. On I/O-bound applications Perl runs just as fast (or as slowly) as a compiled
program. In fact, it is possible for the performance of a Perl script to exceed that of a
compiled program. If execution speed does become an issue, Perl provides a facility for
rewriting time-critical portions of your application in C, using the XS extension system. Or
you can treat Perl as a prototyping language, and implement the real application in C or
C++ after you've worked out the architectural and protocol details.
Running Perl Programs

To run a Perl program from the Unix command line:

perl progname.pl

Alternatively, put this as the first line of your script:

#!/usr/bin/perl
A variable is a container that holds one or more values that can change throughout a
program.

There are four types of variables in Perl:
1) Default variable
2) Scalar variable
3) Array
4) Associative array (hash/lookup table)
Default variable: Perl's default variable $_ holds the current input line or value when no
variable is named explicitly.
Example:
while (<>) {
    print $_;
}
Scalar
A scalar variable holds a single value, which can be a number or a character string.
Scalar variables have a dollar sign ($) prefix.
Examples of scalar variables:
$name = 'priya';
$len = 5;
Strings
Strings are sequences of characters (like "hello").
Single-Quoted Strings
Text placed between a pair of single quotes is interpreted literally.
To get a single quote into a single-quoted string, precede it by a backslash (\).
To get a backslash into a single-quoted string, precede the backslash by another backslash.
Examples of strings:
'hello'         # hello
'can\'t'        # can't
'http:\\\\www'  # http:\\www
Double-Quoted Strings
The double quote interpolates variables between the pair of quotes, which means that the
variable names within the string are replaced by their current values.
Examples
$x = 1;
print '$x';  # will print out $x
print "$x";  # will print out 1
There are several different escape characters that can be printed out:
\n   Newline
\t   Tab
\\   Backslash
\"   Double quote
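The same single-quote versus double-quote rule carries over to the Unix shells used in the
script examples later in these notes; a quick sketch:

```shell
x=1
lit=$(echo '$x')      # single quotes: printed literally
interp=$(echo "$x")   # double quotes: variable interpolated
echo "$lit vs $interp"
```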
Auto increment and Auto decrement Operators (++, --)
$a++; # Equivalent to $a = $a + 1;
$a--; # Equivalent to $a = $a - 1;
Numeric and String Comparison Operators
Comparison                Numeric  String
Equal                     ==       eq
Not Equal                 !=       ne
Less Than                 <        lt
Greater Than              >        gt
Less than or equal to     <=       le
Greater than or equal to  >=       ge
Example :- Write a PERL script to find factorial of a number.
[student@localhost~]$vi factorial.pl
# Factorial of a number
print "Enter number\n";
$a = <STDIN>;
for ($i = $a - 1, $f = $a; $i > 1; $i--) {
    $f = $f * $i;
}
print "Factorial is $f\n";
Output:
Enter number
5
Factorial is 120
Smart Dust
What is smart dust?
Smart dust is a hypothetical wireless network of tiny microelectromechanical sensors
(MEMS) that can detect light, temperature and vibrations.
The smart dust concept was introduced by Kristofer S.J. Pister in 2001.
Smart dust devices are based on sub-voltage and deep sub-voltage nanoelectronics and
include micro power sources with all-solid-state impulse supercapacitors.
Components of smart dust
A single smart dust mote has:
1. A semiconductor laser diode and MEMS beam-steering mirror for active optical
transmission.
2. A MEMS corner-cube retroreflector (CCR) for passive optical transmission.
3. An optical receiver.
4. Signal processing and control circuitry.
5. A power source based on thick-film batteries and solar cells.
6. A photodetector and receiver.
Construction
A significant trend in electronics technology is the increasing ability to provide adaptive
features in smaller and smaller electronic devices. An example of this trend is electronic
motes. Electronic motes are devices that can:
Support the collection and integration of data from a variety of miniature sensors.
Analyze the sensor data as specified by system-level controls.
Wirelessly communicate the results of their analyses to other motes, system base stations
and the internet as specified by system automation.
Motes are also sometimes referred to as smart dust. One mote is composed of a small,
low-powered and cheap computer connected to several sensors and a radio transmitter
capable of forming ad hoc networks. The computer monitors the different sensors in a mote.
These sensors can measure light, acceleration, position, stress, pressure, humidity, sound
and vibration. Data gathered are passed on to the radio link for transmission from mote to
mote until the data reaches the transmission node.
Working
Smart dust motes are run by microcontrollers. The microcontroller contains tiny sensors for
recording various types of data. The sensors are run by timers, which power up the sensors
for specific periods to collect data. Data obtained are stored in memory for further
interpretation or are sent to the base controlling stations.

The CCR comprises three mutually perpendicular mirrors of gold-coated polysilicon. It has
the property that any incident ray of light is reflected back to the source, provided it is
incident within a certain range of angles centered about the cube's body diagonal. The
microfabricated CCR includes an electrostatic actuator that can deflect one of the mirrors
at kilohertz rates. Thus the external light source can be transmitted back in the form of a
modulated signal at kilobits per second. CCR-based optical links require an uninterrupted
line-of-sight path. The CCR can transmit to the base station only when the CCR body
diagonal happens to point directly towards it, within a few tens of degrees. A passive
transmitter can be made more omnidirectional by employing several CCRs oriented in
different directions, at the expense of increased dust mote size.
Applications
Environmental protection (identification and monitoring of pollution).
Habitat monitoring (observing the behavior of animals in their natural habitat).
Military applications (monitoring activities in inaccessible areas, accompanying soldiers
and alerting them to any poisonous or dangerous biological substances in the air).
Indoor/Outdoor Environmental Monitoring.
Security and Tracking
Health and Wellness Monitoring (enter human bodies and check for physiological
problems).
Factory and Process Automation.
Seismic and Structural Monitoring.
Monitoring traffic and redirecting it.
A typical application scenario is scattering a hundred of these sensors around a building or
around a hospital to monitor temperature or humidity, track patient movements, or inform
of disasters, such as earthquakes.
In the military, they can perform as a remote sensor chip to track enemy movements,
detect poisonous gas or radioactivity.
Advantages of smart dust
Main benefits for organizations:
Dramatically reduce systems and infrastructure cost
Increase plant/factory/office productivity
Improve safety, efficiency and compliance
Smart Dust has the most useful applications, such as:
detecting corrosion in aging pipes before they leak and cost huge amounts of money to
repair
automating many manual error-prone tasks which involve calibration and monitoring
providing accurate data of motor health in order to perform more timely maintenance
when needed
having most instruments become wireless, and even being able to recalibrate, reconfigure
and upgrade them wirelessly
Monitoring power consumption to better understand where most energy is being used
which would help plants save money in the long run.
In an office environment, Smart Dust could also prove invaluable, and have the potential to
be used in such applications as:
eliminating wired routers entirely, and replacing them with a single Smart Dust chip which
would handle all hardware and software functions for distributed networks, using five times
less power than conventional networks
tracking the movements of visitors as they roam around the office to see if they are going
into any restricted locations, or to let the CEO know what they are up to
Tracking important packages leaving the offices (Smart Dust nodes can even be equipped
with GPS receivers!)
The uses of Smart Dust are so numerous and wide-ranging that it would take pages and
pages to mention them all. It has much potential in the fields of medicine, agriculture,
dentistry, etc., and we've only begun to tap into that potential.
///////////////////////////////////////////////////////////
GRID COMPUTING
Grid Computing enables virtual organizations to share geographically distributed resources
as they pursue common goals, assuming the absence of central location, central control,
omniscience, and an existing trust relationship.
HISTORY
The term originated in the early 1990s as a metaphor for making computer power as easy
to access as an electric power grid.
IAN FOSTER, CARL KESSELMAN & STEVE TUECKE are regarded as the fathers of the grid.
OVERVIEW
Form of distributed computing.
Composed of many networked loosely coupled computers acting together to perform large
tasks.
Uses middleware to divide & apportion pieces of program among several computers.
A special type of parallel computing that relies on complete computers connected to a
network by a conventional network interface such as Ethernet.
HOW A GRID WORKS
Each computer's resources are shared with every other computer in the network.
Processing power, memory and data storage are all community resources that authorized
users can tap into and leverage for specific tasks.
This sharing turns a computer network into a powerful supercomputer.
APPLICATIONS
In drug discovery
Economic forecasting
Seismic analysis
Back office data processing in support of e-commerce & web services
Biomedical applications
ADVANTAGES
No need to buy large six-figure SMP servers for applications that can be split up and farmed
out to smaller commodity-type servers.
Grid environments are much more modular and don't have single points of failure. If one of
the servers/desktops within the grid fails, there are plenty of other resources able to pick
up the load.
Jobs can be executed in parallel, speeding performance.
Grid environments are extremely well suited to run jobs that can be split into smaller
chunks and run concurrently on many nodes.
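The "split into smaller chunks and run concurrently" idea can be sketched on a single
machine with background jobs; summing 1..100 in two halves stands in for a grid job, and
the helper name sum_range and the temp files part1/part2 are illustrative only:

```shell
# Sum the integers from $1 to $2 inclusive.
sum_range() {
  s=0
  i=$1
  while [ "$i" -le "$2" ]; do
    s=$((s + i))
    i=$((i + 1))
  done
  echo "$s"
}

# Farm the two halves out concurrently, as grid middleware would to two nodes.
sum_range 1 50 > part1 &
sum_range 51 100 > part2 &
wait                                  # gather results from all "nodes"
total=$(( $(cat part1) + $(cat part2) ))
rm -f part1 part2
echo "Total: $total"
```

Real grid middleware adds scheduling, authentication and fault tolerance on top of this
basic scatter/gather pattern.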
DISADVANTAGES
You may need to have a fast interconnect between compute resources (gigabit ethernet at a
minimum).
Grid environments include many smaller servers across various administrative domains.
Managing change and keeping configurations in sync with each other can be challenging in
large environments, so good tools are needed.
GRID EXAMPLES
Network for Earthquake Engineering and Simulation (NEESGrid)
Biomedical Informatics Research Network (BIRN)
++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction to Shell Programming
We use shells because a shell is a simple way to string together a bunch of UNIX commands
for execution at any time without the need for prior compilation. It is also generally fast to
get a script going, and other scripters can easily read the code and understand what is
happening. Lastly, shell scripts are generally completely portable across the whole UNIX
world, as long as they have been written to a common standard.
The Shell History: The basic shells come in three main language forms. These are (in
order of creation) sh, csh and ksh. Be aware that there are several dialects of these script
languages which tend to make them all slightly platform-specific. The different dialects are
due, in the main, to the different UNIX flavors in use on some platforms. All script
languages though have at their heart a common core which, if used correctly, will guarantee
portability.
Bourne Shell: Historically the sh language was the first to be created and goes under the
name of The Bourne Shell. It has a very compact syntax which makes it obtuse for novice
users but very efficient when used by experts. It also contains some powerful constructs
built in. On UNIX systems, most of the scripts used to start and configure the operating
system are written in the Bourne shell. It has been around for so long that it is virtually
bug-free.
C Shell: Next up was The C Shell (csh), so called because of the similar syntactical
structures to the C language. The UNIX man pages contain almost twice as much
information for the C Shell as the pages for the Bourne shell, leading most users to believe
that it is twice as good. This is a shame because there are several compromises within the C
Shell which makes using the language for serious work difficult (check the list of bugs at the
end of the man pages!). True, there are so many functions available within the C Shell that
if one should fail another could be found. The point is, do you really want to spend your
time finding all the alternative ways of doing the same thing just to keep yourself out of
trouble?
The real reason why the C Shell is so popular is that it is usually selected as the default
login shell for most users. The features that guarantee its continued use in this arena are
aliases, and history lists.
Korn Shell: Lastly we come to the Korn Shell (ksh), made famous by IBM's AIX flavor of
UNIX. The Korn shell can be thought of as a superset of the Bourne shell, as it contains the
whole of the Bourne shell world within its own syntax rules. The extensions over and above
the Bourne shell exceed even the level of functionality available within the C Shell (but
without any of the compromises!), making it the obvious language of choice for real
scripters. However, because not all platforms yet support the Korn shell, it is not fully
portable as a scripting language at the time of writing. This may change, however, by the
time this book is published. The Korn Shell does contain aliases and history lists aplenty,
but C Shell users are often put off by its dissimilar syntax. Persevere, it will pay off
eventually. Any sh syntax element will work in ksh without change.
eg1 Write a shell script to add two numbers.
echo Enter First value :
read a
echo Enter Second value :
read b
echo Addition : `expr $a + $b`
2 Write a shell script to perform the arithmetic operations.
echo Enter First value :
read a
echo Enter Second value :
read b
echo Addition : `expr $a + $b`
echo Subtraction : `expr $a - $b`
echo Multiplication : `expr $a \* $b`
echo Division : `expr $a / $b`
echo Modulus : `expr $a % $b`
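The `expr` calls above are the traditional Bourne style; POSIX shells also provide built-in
$(( )) arithmetic, which avoids spawning a process per operation (the sample values 7 and 3
are arbitrary):

```shell
a=7
b=3
echo "Addition : $((a + b))"
echo "Subtraction : $((a - b))"
echo "Multiplication : $((a * b))"   # no backslash-escaping needed, unlike expr \*
echo "Division : $((a / b))"
echo "Modulus : $((a % b))"
```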
3 Write a shell script to check whether the given number is odd or even.
echo Enter the number to be checked :
read n
if [ `expr $n % 2` -eq 0 ]
then
echo Number is EVEN
else
echo Number is ODD
fi
4 Write a shell script for grade calculation.
echo Enter Marks of Subject1 :
read s1
echo Enter Marks of Subject2 :
read s2
echo Enter Marks of Subject3 :
read s3
total=`expr $s1 + $s2 + $s3`
per=`expr $total / 3`
if [ $per -gt 75 ]
then
echo Result : Honours
elif [ $per -gt 60 ]
then
echo Result : First Division
elif [ $per -gt 50 ]
then
echo Result : Second Division
elif [ $per -gt 36 ]
then
echo Result : PASS only
else
echo Result : FAIL
fi
5 Write script to print number as 5, 4, 3, 2, 1 using while loop.
echo Enter the number :
read n
while [ $n -gt 0 ]
do
echo $n
n=`expr $n - 1`
done
Posted: Fri Jul 23, 2010 3:14 pm Post subject: Difference between Linux and Windows
Subject description: Computer Science
Linux Vs Windows
Linux is an open-source operating system. People can change code and add programs to
the Linux OS, which helps you use your computer better. Linux evolved as a reaction to the
monopoly position of Windows. You can't change any code of the Windows OS; you can't
even see which processes do what or build your own extensions. Linux encourages
programmers to extend and redesign its OS; Linux users can edit the OS and design new
versions.
All flavors of Windows come from Microsoft. Linux comes from different companies like
Lindows, Lycoris, Red Hat, SuSE, Mandrake, Knoppix, Slackware.
Linux is customizable but Windows is not. For example, NASLite is a version of Linux that
runs off a single floppy disk and converts an old computer into a file server. This ultra-small
edition of Linux is capable of networking, file sharing and being a web server.
Linux is freely available for desktop or home use but Windows is expensive. For server use,
Linux is cheap compared to Windows. Microsoft allows a single copy of Windows to be used
on only one computer, while you can run Linux on any number of computers.
Linux has high security. You have to log on to Linux with a userid and password. You can
log in as root or as a normal user; the root has full privileges.
Linux has a reputation for fewer bugs than Windows.
Windows must boot from a primary partition. Linux can boot from either a primary partition
or a logical partition inside an extended partition. Windows must boot from the first hard
disk. Linux can boot from any hard disk in the computer.

Windows uses a hidden file for its swap file. Typically this file resides in the same partition
as the OS (advanced users can opt to put the file in another partition). Linux uses a
dedicated partition for its swap file.
Windows separates directories with a back slash while Linux uses a normal forward slash.
Windows file names are not case sensitive. Linux file names are. For example "abc" and
"aBC" are different files in Linux, whereas in Windows it would refer to the same file.
Windows and Linux have different concepts for their file hierarchy. Windows uses a
volume-based file hierarchy while Linux uses a unified scheme. Windows uses letters of the
alphabet to represent different devices and different hard disk partitions, e.g. c:, d:, e:,
while in Linux "/" is the root of the main directory tree.
Linux and Windows both support the concept of hidden files. In Linux, hidden files begin
with ".", e.g. .filename
In Linux each user has a home directory and all his files are saved under it, while in
Windows the user saves his files anywhere in the drive. This makes it difficult to back up
his content; in Linux it is easy to take backups.
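The case-sensitivity and hidden-file rules above are easy to check from a Linux prompt;
demo2 is just a scratch directory name for illustration:

```shell
mkdir -p demo2
echo 1 > demo2/abc
echo 2 > demo2/aBC      # a second, distinct file on Linux; Windows treats it as the same name
echo 3 > demo2/.hidden  # names beginning with "." are hidden
visible=$(ls demo2 | wc -l)   # plain ls skips dotfiles
all=$(ls -A demo2 | wc -l)    # -A shows hidden entries too
rm -r demo2
echo "$visible visible of $all total"
```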
+++++++++++++++++++++++++++++++++++++++++++++++++++++
1 Write a script to print a given number in reverse order; e.g. if the number is 123 it must
print 321.
echo Enter the number :
read n
while [ $n -gt 0 ]
do
temp=`expr $n % 10`
echo -n $temp
n=`expr $n / 10`
done
2 Write a shell script to check whether a particular user is logged in or not. Continue
checking every 60 seconds, until success.
c=1
while [ $c -eq 1 ]
do
who | grep jietsetg
if [ $? -eq 0 ]
then
echo User jietsetg is logged in
break
else
echo waiting...
sleep 60
fi
done
3 Write a shell script to find the factorial of a given number.
echo Enter the number :
read n
f=1
i=$n
while [ $i -gt 1 ]
do
f=`expr $f \* $i`
i=`expr $i - 1`
done
echo Factorial of $n = $f
4 Write a shell script to print the following pattern.
1
1 2
1 2 3
echo Enter the number :
read n
i=1
while [ $i -le $n ]
do
j=1
while [ $j -le $i ]
do
echo -n "$j "
j=`expr $j + 1`
done
echo
i=`expr $i + 1`
done
5 Write a shell script to compare two files.
echo Enter first filename :
read file1
echo Enter second filename :
read file2
if cmp -s $file1 $file2
then
echo "Two Files are same"
else
echo "Two Files differ each other"
fi
++++++++++++++++++++++++++++
Posted: Tue Jul 13, 2010 3:45 pm Post subject: Cloud Computing
Subject description: New Technology

INTRODUCTION
Imagine yourself in a world where the users of today's internet don't have to run, install or
store their applications or data on their own computers; imagine a world where every piece
of your information or data resides on the Cloud (Internet).
As a metaphor for the Internet, "the cloud" is a familiar cliché, but when combined with "computing", the meaning gets bigger and fuzzier. Some analysts and vendors define cloud computing narrowly as an updated version of utility computing: basically virtual servers available over the Internet. Others go very broad, arguing anything you consume outside the firewall is "in the cloud", including conventional outsourcing.
Cloud computing comes into focus only when you think about what we always need: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends ICT's existing capabilities.
Cloud computing is Internet ("cloud") based development and use of computer technology ("computing"). It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them.
Comparison
Cloud computing is often confused with grid computing ("a form of distributed computing whereby a 'super and virtual computer' is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks"), utility computing (the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity") and autonomic computing ("computer systems capable of self-management").

Implementation
The majority of cloud computing infrastructure as of 2009 consists of reliable services delivered through data centers and built on servers with different levels of virtualization technologies. The services are accessible from anywhere that has access to networking infrastructure. The Cloud appears as a single point of access for all the computing needs of consumers. Commercial offerings need to meet the quality of service requirements of customers and typically offer service level agreements. Open standards are critical to the growth of cloud computing, and open source software has provided the foundation for many cloud computing implementations.
Characteristics
As customers generally do not own the infrastructure, they merely access or rent it, so they can avoid capital expenditure and consume resources as a service, paying instead for what they use. Many cloud-computing offerings have adopted the utility computing model, which is analogous to how traditional utilities like electricity are consumed, while others are billed on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not left idle, which can reduce costs significantly while increasing the speed of application development. A side effect of this approach is that "computer capacity rises dramatically" as customers do not have to engineer for peak loads. Adoption has been enabled by "increased high-speed bandwidth", which makes it possible to receive the same response times from centralized infrastructure at other sites.
Companies
Providers including Amazon, Microsoft, Google, Sun and Yahoo exemplify the use of cloud computing. It is being adopted by users ranging from individuals through large enterprises including General Electric, L'Oréal, and Procter & Gamble.
KEY CHARACTERISTICS
1. Cost is greatly reduced and capital expenditure is converted to operational expenditure. This lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options, and minimal or no IT skills are required for implementation.
2. Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using, e.g., PC, mobile. As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.
3. Multi-tenancy enables sharing of resources and costs among a large pool of users, allowing for:
Centralization of infrastructure in areas with lower costs (such as real estate, electricity, etc.)
Peak-load capacity increases (users need not engineer for highest possible load levels)
Utilization and efficiency improvements for systems that are often only 10-20% utilized.
4. Reliability improves through the use of multiple redundant sites, which makes it suitable for business continuity and disaster recovery. Nonetheless, most major cloud computing services have suffered outages, and IT and business managers are able to do little when they are affected.
5. Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads. Performance is monitored and consistent, and loosely-coupled architectures are constructed using web services as the system interface.
6. Security typically improves due to centralization of data, increased security-focused resources, etc., but raises concerns about loss of control over certain sensitive data. Security is often as good as or better than in traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible.
Uses IT resources efficiently via sharing and higher system utilization
Reduces energy consumption
Handles new and emerging workloads
Scales to extreme workloads quickly and easily
Simplifies IT management
Platform for collaboration and innovation
Cultivates skills for next generation workforce
Posted: Sat May 29, 2010 12:01 pm Post subject: Chromium Operating System
Subject description: Computer Science (Basics)
INTRODUCTION

People at Google spend much of their time working inside a browser. They search, chat, email and collaborate in a browser. And in their spare time, they shop, bank, read news and keep in touch with friends, all using a browser. Because they spend so much time online, they began seriously thinking about what kind of browser could exist if they started from scratch and built on the best elements out there. They realized that the web had evolved from mainly simple text pages to rich, interactive applications, and that the browser needed to be completely rethought. What was really needed was not just a browser, but a modern platform for web pages and applications, and that is what they set out to build: Chrome OS.

Google Chrome OS is an upcoming open source operating system designed by Google to work exclusively with web applications. Announced on July 7, 2009, Chrome OS is set to have a publicly available stable release during the second half of 2010. The operating system is based on Linux and will run only on specifically designed hardware.

What is Chromium Operating System?
Chromium OS is an open-source project that aims to build an operating system that provides a fast, simple, and more secure computing experience for people who spend most of their time on the web. Here you can review the project's design docs, obtain the source code, and contribute.
HARDWARE SPECIFICATION
Google Chrome OS is initially intended for secondary devices like netbooks, not a user's primary PC, and will run on hardware incorporating an x86 or ARM-based processor. While Chrome OS will support hard disk drives, Google has requested that its hardware partners use solid-state drives due to their higher performance and reliability, as well as the lower capacity requirements inherent in an operating system that accesses applications and most user data on remote servers. Google Chrome OS consumes one-sixtieth as much drive space as Windows 7.

Companies developing hardware for the operating system include Hewlett-Packard, Acer, Adobe, Asus, Lenovo, Texas Instruments, Freescale, Intel, Samsung Australia and Qualcomm.
Solid State Drive (SSD)

Google Chrome operating system will be installed only on systems having solid-state drives. SSDs allow a faster boot time as well as a faster write time. The cost difference between SSDs and HDDs is pretty evident: SSDs cost up to $3 per GB, while HDDs cost only 20-30 cents per GB.
ARCHITECTURE

In preliminary design documents for the Chromium OS open source project, Google describes a three-tier architecture: firmware, browser and window manager, and system-level software and userland services.
The firmware contributes to fast boot time by not probing for hardware, such as floppy disk drives, that are no longer common on computers, especially netbooks. The firmware also contributes to security by verifying each step in the boot process and incorporating system recovery.
System-level software includes the Linux kernel, which has been patched to improve boot performance. Userland software has been trimmed to essentials, with management by Upstart, which can launch services in parallel, re-spawn crashed jobs, and defer services in the interest of faster booting.
The window manager handles user interaction with multiple client windows, much like other X window managers.
Firmware
The firmware plays a key part in making booting the OS faster and more secure. To achieve this goal we are removing unnecessary components and adding support for verifying each step in the boot process. We are also adding support for system recovery into the firmware itself. We can avoid the complexity that is in most PC firmware because we do not have to be backwards compatible with a large amount of legacy hardware. For example, we do not have to probe for floppy drives.
Our firmware will implement the following functionality:
System recovery: The recovery firmware can re-install Chromium OS in the event that the system has become corrupt or compromised.
Verified boot: Each time the system boots, Chromium OS verifies that the firmware, kernel, and system image have not been tampered with or become corrupt. This process starts in the firmware.
Fast boot: We have improved boot performance by removing a lot of complexity that is normally found in PC firmware.
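The verify-then-boot idea can be illustrated with an ordinary checksum. The sketch below is only an analogy: the file path and contents are made up for the demo, and real verified boot stores signed hashes in read-only firmware rather than comparing them in a shell script.

```shell
# A stand-in "kernel image" (path and contents are assumptions for the demo)
image=/tmp/kernel_image_demo
echo "pretend kernel" > "$image"

# Record the known-good hash; in real verified boot this lives in firmware
good=$(sha256sum "$image" | cut -d' ' -f1)

# Later, before "booting", recompute the hash and compare
now=$(sha256sum "$image" | cut -d' ' -f1)
if [ "$now" = "$good" ]
then
echo "image verified, continuing boot"
else
echo "image corrupt or tampered, entering recovery"
fi
```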
System level and Userland software

From here we bring in the Linux kernel, drivers, and userland daemons. Our kernel is mostly stock except for a handful of patches that we pull in to improve boot performance. On the userland side of things we have streamlined the init process so that we are only running services that are critical. All of the userland services are managed by Upstart. By using Upstart we are able to start services in parallel, re-spawn jobs that crash, and defer services to make boot faster.
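An Upstart job is declared in a small configuration file; the hypothetical example below sketches the parallel-start and re-spawn behaviour described above (the job name, dependency, and binary path are all invented for illustration, not taken from Chromium OS):

```
# /etc/init/netservice.conf -- a hypothetical Upstart job
description "example network daemon"
start on started dbus           # starts as soon as its dependency is up,
                                # letting unrelated jobs start in parallel
respawn                         # Upstart restarts the job if it crashes
exec /usr/sbin/netservice-demo  # placeholder binary name
```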
Here is a quick list of things that we depend on:
D-Bus: The browser uses D-Bus to interact with the rest of the system. Examples of this include the battery meter and network picker.
Connection Manager: Provides a common API for interacting with the network devices, provides a DNS proxy, and manages network services for 3G, wireless, and Ethernet.
WPA Supplicant: Used to connect to wireless networks.
Auto-update: Our auto-update daemon silently installs new system images.
Power Management: (ACPI on Intel) Handles power management events like closing the lid or pushing the power button.
Xscreensaver: Handles screen locking when the machine is idle.
Standard Linux services: NTP, syslog, and cron.
Window Manager

The window manager is responsible for handling the user's interaction with multiple client windows. It does this in a manner similar to that of other X window managers, by controlling window placement, assigning the input focus, and exposing hotkeys that exist outside the scope of a single browser window. Parts of the ICCCM (Inter-Client Communication Conventions Manual) and EWMH (Extended Window Manager Hints) specifications are used for communication between clients and the window manager where possible.
The window manager also uses the XComposite extension to redirect client windows to off-screen pixmaps so that it can draw a final, composited image incorporating their contents itself. This lets windows be transformed and blended together. The Clutter library is currently used to animate these windows and to render them via OpenGL or OpenGL|ES.
SECURITY
Chromium OS has been designed from the ground up with security in mind. Security is not a one-time effort, but rather an iterative process that must be focused on for the life of the operating system. The goal is that, should either the operating system or the user detect that the system has been compromised, an update can be initiated, and, after a reboot, the system will have been returned to a known good state. Chromium OS security strives to protect against an opportunistic adversary through a combination of system hardening, process isolation, continued web security improvements in Chromium, secure auto-update, verified boot, encryption, and intuitive account management.

In computer security, a sandbox is a security mechanism for separating running programs. It is often used to execute untested code, or untrusted programs from unverified third parties, suppliers and untrusted users.
The sandbox typically provides a tightly controlled set of resources for guest programs to run in, such as scratch space on disk and memory. Network access, the ability to inspect the host system, or reading from input devices are usually disallowed or heavily restricted. In this sense, sandboxes are a specific example of virtualization.
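One slice of that "tightly controlled set of resources" can be sketched with ordinary shell resource limits: `ulimit` applied inside a subshell caps only the guest command, not the host shell. The limits and the stand-in command below are illustrative; a real sandbox such as Chromium's adds process and filesystem isolation, which `ulimit` alone does not provide.

```shell
# Run an untrusted command inside a subshell whose resources are capped
(
  ulimit -t 5       # at most 5 seconds of CPU time
  ulimit -f 1024    # any file it writes is capped at 1024 blocks
  echo "guest ran under limits"   # stand-in for the untrusted program
)
# The parentheses scope the limits to the subshell only
echo "host shell limits are unaffected"
```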
In March 2010, Google software security engineer Will Drewry discussed Chrome OS security. Drewry described Chrome OS as a "hardened" operating system featuring auto-updating and sandbox features that will reduce malware exposure. He said that Chrome OS netbooks will ship with a Trusted Platform Module, and include both a "trusted boot path" and a physical switch under the battery compartment that activates a developer mode. That mode drops some specialized security functions but increases developer flexibility. Drewry also emphasized that the open source nature of the operating system will contribute greatly to its security by allowing constant developer feedback.