1. SYSTEM ANALYSIS

1.1 Project Description

‘Clients’ are generally the machines or programs that request services from another machine or server on the network. The server is linked to databases or the Web, processes each request, and delivers the response. The Web Proxy Server is multithreaded, so many clients can access the web through it. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server.

Problem statement:

A proxy server is a server (a computer system or an application program) that acts as an intermediary for requests seeking resources from other servers. The client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. Web Proxy is a very simple HTTP proxy server written as a console application and as a Windows service. JSON is used as the format for the downloaded data. Internet Information Server (IIS) is a World Wide Web server; with IIS you can publish WWW pages.

We rigorously prove that both spatial heterogeneity (the same machine acting as both a client and a server) and temporal correlations in service capacity increase the download time in networks, and then analyze a simple, distributed algorithm that effectively removes these negative factors, thus minimizing the download time.


MODULES:

1. Creates a connection to the remote server (remote websites).

2. Creates a request with some data (HTTP handler).

3. Sends the request to the remote server and returns the response.

4. Compares the regular and stream proxy servers.

Algorithms Specification

JSON (the serialization of data in JavaScript object notation) is an increasingly

popular data format, largely because it is easy to parse (or, in the case of JavaScript, simply

evaluate) into a data structure of the consumer's programming language of choice. This is a

specification for a resource-centric serialization of RDF in JSON. It aims to serialize RDF in

a structure that is easy for developers to work with.

Syntax Specification

RDF/JSON represents a set of RDF triples as a series of nested data structures. Each unique subject in the set of triples is represented as a key in a JSON object (also known as an associative array, dictionary, or hash table). The value of each key is an object whose keys are the URIs of the properties associated with that subject. The value of each property key is an array of objects representing the value of each property.
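As an illustrative sketch of that nesting (the subject URI, property URI, and literal value here are invented for the example, not taken from the project), a single RDF triple would serialize roughly as:

```json
{
  "http://example.org/about": {
    "http://purl.org/dc/terms/title": [
      { "value": "Anna's Homepage", "type": "literal" }
    ]
  }
}
```

The outer key is the subject, the inner key is the property URI, and each element of the array describes one value, with a "type" field distinguishing literals from URIs and blank nodes.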


Modules Description:

1. Creates a connection to the remote server (remote websites)

Here we make a connection to the remote website and download the whole content of the site through the web service. The number of bytes transferred from the proxy server to the browser is recorded.

2. Creates a request with some data (HTTP handler)

It allows server-side caching of content. Because we are streaming the bytes, we need to make this proxy asynchronous so that it does not hold the main ASP.NET thread for too long. Being asynchronous means it will release the ASP.NET thread as soon as it makes a call to the external server. It generates a proper response cache header so that the content can be cached on the browser. It does not decompress the downloaded content in memory; it keeps the original byte stream intact, which saves memory allocation.

3. Sends the request to the remote server and returns the response

Such a content proxy takes an external server's URL as a query parameter. It downloads the content from the URL and then writes the content as the response back to the browser. Here we download the file from that particular URL and save it in our local system.

4. Compares the regular and stream proxy servers

Here we list the number of bytes transferred and the time consumed for each download from the proxy server, and maintain a log file of those proxy details.
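A minimal synchronous version of modules 1–3 can be sketched as an ASP.NET HTTP handler. The class name and the `url` query-parameter name are illustrative assumptions; the asynchronous, streaming variant described above would replace the blocking calls shown here:

```csharp
using System.IO;
using System.Net;
using System.Web;

// Illustrative sketch: a simple content proxy written as an IHttpHandler.
public class ContentProxyHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Module 3: the external server's URL arrives as a query parameter.
        string url = context.Request.QueryString["url"];

        // Module 1: create a connection to the remote server.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

        // Modules 2-3: send the request and relay the response to the browser,
        // streaming the original bytes without decompressing them in memory.
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream remote = response.GetResponseStream())
        {
            context.Response.ContentType = response.ContentType;
            byte[] buffer = new byte[8192];
            int read;
            while ((read = remote.Read(buffer, 0, buffer.Length)) > 0)
                context.Response.OutputStream.Write(buffer, 0, read);
        }
    }
}
```

The handler would be mapped to a URL in Web.config; counting the bytes written in the copy loop gives the per-download totals that module 4 logs.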


1.2 SYSTEM STUDY

Existing System:

A proxy is the most common form of a server and is generally used to pass requests from an isolated, private network to the Internet through a firewall. This type of server is a blocking one and handles requests only synchronously. Requests may also be fulfilled from the proxy's cache rather than by passing through to the Internet. In the existing system a regular proxy is used as the server (blocking, synchronous, download all then deliver). The regular proxy gets all the data from the server and sends it back to the browser. Because the proxy is not asynchronous, it keeps the ASP.NET thread busy until the entire connect and download operation completes.

Proposed System:

A stream proxy is a proxy server that is installed within a single machine; both client and server run on the same machine. Typically, stream proxies are used in front of Web servers. All connections coming from the Internet addressed to one of the Web servers are routed through the proxy server. Because we are streaming the bytes, we need to make this proxy asynchronous so that it does not hold the main ASP.NET thread for too long. Being asynchronous means it will release the ASP.NET thread as soon as it makes a call to the external server. In this proxy, content is transmitted continuously from the external server to the browser (the streaming proxy).


2. SYSTEM SPECIFICATION

Software Requirements:

OPERATING SYSTEM : Windows XP

LANGUAGE : C#.Net

BROWSER : Internet Explorer 6.0

FRONT END : ASP.Net

BACK END : MS-SQL 2005

Hardware Requirements:

PROCESSOR : Pentium IV

RAM : 256 MB

CD DRIVE : Combo 52x

KEYBOARD : 104 keys

HARD DISK : 40 GB

MONITOR : 15’’ COLOR

MOUSE : LOGITECH


2.3 SOFTWARE DESCRIPTION

THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application

development in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK:

1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.

3. To eliminate performance problems.

There are different types of applications, such as Windows-based applications and Web-based applications. To support communication in a distributed environment, the Framework ensures that code based on the .NET Framework can integrate with any other code.

COMPONENTS OF .NET FRAMEWORK

THE COMMON LANGUAGE RUNTIME (CLR):

The common language runtime is the foundation of the .NET Framework. It manages code at execution time, providing important services such as memory management, thread management, and remoting, and also ensures greater security and robustness. The concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code.

THE .NET FRAMEWORK CLASS LIBRARY:

It is a comprehensive, object-oriented collection of reusable types used to develop

applications ranging from traditional command-line or graphical user interface (GUI)

applications to applications based on the latest innovations provided by ASP.NET, such as

Web Forms and XML Web services.


The .NET Framework can be hosted by unmanaged components that load the

common language runtime into their processes and initiate the execution of managed code,

thereby creating a software environment that can exploit both managed and unmanaged

features. The .NET Framework not only provides several runtime hosts, but also supports the

development of third-party runtime hosts.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents.

FEATURES OF THE COMMON LANGUAGE RUNTIME:

The common language runtime manages memory, thread execution, code execution, code-safety verification, compilation, and other system services. Its major features are:

Security.

Robustness.

Productivity.

Performance.

SECURITY:

The runtime enforces code access security. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich. With regard to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin, before they can perform file-access operations, registry-access operations, or other sensitive functions.

ROBUSTNESS:

The runtime also enforces code robustness by implementing a strict type-

and code-verification infrastructure called the common type system (CTS). The CTS ensures

that all managed code is self-describing. The managed environment of the runtime eliminates

many common software issues.


PRODUCTIVITY:

The runtime also accelerates developer productivity. For example,

programmers can write applications in their development language of choice, yet take full

advantage of the runtime, the class library, and components written in other languages by

other developers.

PERFORMANCE:

The runtime is designed to enhance performance. Although the common

language runtime provides many standard runtime services, managed code is never

interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in

the native machine language of the system on which it is executing.

C SHARP

The correct name of the programming language is C#; the substitution or omission of the # sign is because of technical restrictions. C# (see the section on name and pronunciation) is an object-oriented programming language developed by Microsoft as part of the .NET initiative and later approved as a standard by ECMA (ECMA-334) and ISO (ISO/IEC 23270). Anders Hejlsberg leads development of the C# language, which has a procedural, object-oriented syntax based on C++ and includes influences from aspects of several other programming languages (most notably Delphi and Java), with a particular emphasis on simplification.

FEATURES

The following description is based on the language standard and other documents listed in the External links section.

By design, C# is the programming language that most directly reflects the underlying

Common Language Infrastructure (CLI). Most of C#'s intrinsic types correspond to value-

types implemented by the CLI framework. However, the C# language specification does not

state the code generation requirements of the compiler: that is, it does not state that a C#

compiler must target a Common Language Runtime (CLR), or generate Common

Intermediate Language (CIL), or generate any other specific format. Theoretically, a C#


compiler could generate machine code like traditional compilers of C++ or FORTRAN; in

practice, all existing C# implementations target CLI.

C# differs from C and C++ in many ways, including:

There are no global variables or functions. All methods and members must be

declared within classes. It is possible, however, to use static methods/variables within public

classes instead of global variables/functions.

Local variables cannot shadow variables of the enclosing block, unlike C and C++.

Variable shadowing is often considered confusing by C++ texts.

C# supports a strict Boolean type, bool. Statements that take conditions, such as while

and if, require an expression of a boolean type. While C++ also has a boolean type, it can be

freely converted to and from integers, and expressions such as if(a) require only that a is

convertible to bool, allowing a to be an int, or a pointer. C# disallows this "integer meaning

true or false" approach on the grounds that forcing programmers to use expressions that

return exactly bool can prevent certain types of programming mistakes such as if (a = b) (use

of = instead of ==).
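The difference can be seen in a short sketch; the commented-out lines compile in C++ but are rejected by the C# compiler:

```csharp
using System;

class BoolDemo
{
    static void Main()
    {
        int a = 1, b = 2;

        // C++ accepts "if (a)" and even "if (a = b)"; C# rejects both:
        // if (a) { }      // error: cannot implicitly convert 'int' to 'bool'
        // if (a = b) { }  // error: the assignment yields an int, not a bool

        if (a == b)        // OK: the condition is exactly of type bool
            Console.WriteLine("equal");
        else
            Console.WriteLine("not equal");  // prints "not equal"
    }
}
```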

In C#, memory address pointers can only be used within blocks specifically marked as

unsafe, and programs with unsafe code need appropriate permissions to run. Most object

access is done through safe references, which cannot be made invalid. An unsafe pointer can

point to an instance of a value-type, array, string, or a block of memory allocated on a stack.

Code that is not marked as unsafe can still store and manipulate pointers through the

System.IntPtr type, but cannot dereference them.
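A minimal sketch of these rules follows; note that it must be compiled with the /unsafe compiler switch:

```csharp
using System;

class UnsafeDemo
{
    // Pointer use is confined to members or blocks marked 'unsafe'.
    static unsafe void Main()
    {
        int n = 42;           // a value type on the stack
        int* p = &n;          // taking an address is legal only in unsafe code
        *p = 99;              // write through the pointer
        Console.WriteLine(n); // prints 99

        // Outside unsafe code, System.IntPtr can hold an address,
        // but provides no way to dereference it.
        IntPtr handle = (IntPtr)p;
        Console.WriteLine(handle != IntPtr.Zero);
    }
}
```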

Managed memory cannot be explicitly freed, but is automatically garbage collected.

Garbage collection addresses memory leaks. C# also provides direct support for deterministic

finalization with the using statement (supporting the Resource Acquisition Is Initialization

idiom).
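A small sketch of the using statement (the file name is illustrative):

```csharp
using System;
using System.IO;

class UsingDemo
{
    static void Main()
    {
        // The 'using' statement guarantees Dispose() is called when the
        // block exits, even if an exception is thrown -- deterministic
        // cleanup on top of non-deterministic garbage collection.
        using (StreamWriter writer = new StreamWriter("log.txt"))
        {
            writer.WriteLine("resource released deterministically");
        } // writer.Dispose() runs here, flushing and closing the file

        Console.WriteLine(File.ReadAllText("log.txt").Trim());
    }
}
```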

Multiple inheritance is not supported, although a class can implement any number of

interfaces. This was a design decision by the language's lead architect to avoid complication,

avoid dependency hell and simplify architectural requirements throughout CLI.


C# is more type safe than C++. The only implicit conversions by default are those

which are considered safe, such as widening of integers and conversion from a derived type

to a base type. This is enforced at compile-time, during JIT, and, in some cases, at runtime.

There are no implicit conversions between Booleans and integers and between enumeration

members and integers (except 0, which can be implicitly converted to an enumerated type),

and any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++

copy constructors (which are implicit by default) and conversion operators (which are always

implicit).

Enumeration members are placed in their own namespace. Accessors called properties can be used to modify an object with syntax that resembles C++ member field access. In C++, declaring a member public enables both reading and writing to that member, and accessor methods must be used if more fine-grained control is needed. In C#, properties allow control over member access and data validation. Full type reflection and discovery is available.
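A sketch of a property that performs data validation behind field-like syntax (the class and property names are invented for the example):

```csharp
using System;

class Account
{
    private decimal balance;

    // A property gives field-like access syntax while keeping control
    // over member access and data validation, unlike a raw public field.
    public decimal Balance
    {
        get { return balance; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value");
            balance = value;
        }
    }
}

class PropertyDemo
{
    static void Main()
    {
        Account acct = new Account();
        acct.Balance = 100m;              // looks like field access, runs the setter
        Console.WriteLine(acct.Balance);  // prints 100
    }
}
```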

CATEGORIES OF DATA TYPES

The CTS separates data types into two categories:

Value Type

Reference Type

While value types are those in which the value itself is stored by allocating memory

on the stack, reference types are those in which only the address to the location where the

value is present, is stored. Value types include integers (short, long), floating-point numbers

(float, double), decimal (a base 10 number used for financial calculations), structures,

enumerators, Booleans and characters while reference types include objects, strings, classes,

interfaces and delegates.

USER-DEFINED DATA TYPES

C# also allows the programmer to create user-defined value types, using the struct

keyword. From the programmer's perspective, they can be seen as lightweight classes. Unlike

regular classes, and like the standard primitives, such value types are allocated on the stack

rather than on the heap. They can also be part of an object (either as a field or boxed), or

stored in an array, without the memory indirection that normally exists for class types. Structs


also come with a number of limitations. Because structs have no notion of a null value and

can be used in arrays without initialization, they are implicitly initialized to default values

(normally by filling the struct memory space with zeroes, but the programmer can specify

explicit default values to override this). The programmer can define additional constructors

with one or more arguments. This also means that structs lack a virtual method table, and

because of that (and the fixed memory footprint), they cannot allow inheritance (but can

implement interfaces).
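A short sketch of struct behavior (the Point type is invented for the example):

```csharp
using System;

// A user-defined value type created with the 'struct' keyword.
struct Point
{
    public int X;
    public int Y;

    // Additional constructors with arguments are allowed.
    public Point(int x, int y) { X = x; Y = y; }
}

class StructDemo
{
    static void Main()
    {
        // Array elements are the structs themselves (no per-element heap
        // allocation), implicitly initialized to default values (zeroes).
        Point[] grid = new Point[3];
        Console.WriteLine(grid[0].X);  // prints 0

        Point p = new Point(3, 4);
        Point copy = p;                // a value copy, not a shared reference
        copy.X = 99;
        Console.WriteLine(p.X);        // prints 3: the original is unchanged
    }
}
```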

C# ADVANTAGES

Allows blocks of unsafe code (like C++/CLI) via the unsafe keyword.

Partial interfaces.

Iterators and the yield keyword.

Multi-line comments (note that the Visual Studio IDE supports multi-line commenting for Visual Basic .NET).

Static classes (classes which cannot contain any non-static members, although VB's Modules are essentially sealed static classes with additional semantics).

Can use checked and unchecked contexts for fine-grained control of overflow/underflow checking.

Auto-implemented properties (as of C# 3.0; this will be available in Visual Basic .NET beginning in version 10).

Implicitly typed arrays.

By default, numeric operations are not checked. This results in slightly faster code, at the risk that numeric overflows will not be detected. However, the programmer can place arithmetic operations into a checked context to activate overflow checking. (It can be done in Visual Basic by checking an option.)

Addition and string concatenation use the same token, +. Visual Basic .NET, however, has separate tokens: + for addition and & for concatenation.

In Visual Basic .NET property methods may take parameters.

C# is case-sensitive.
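The checked and unchecked contexts mentioned above can be sketched as:

```csharp
using System;

class CheckedDemo
{
    static void Main()
    {
        int max = int.MaxValue;

        // By default, or in an explicit 'unchecked' context,
        // integer overflow wraps around silently.
        int wrapped = unchecked(max + 1);
        Console.WriteLine(wrapped);      // prints -2147483648

        // In a 'checked' context the same operation throws at runtime.
        try
        {
            int overflowed = checked(max + 1);
            Console.WriteLine(overflowed);
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }
    }
}
```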

VISUAL STUDIO .NET


Visual Studio .NET does a wonderful job of simplifying the creation and consumption of Web Services. Much of the programmer-friendly work (creating all the XML-based documents) happens automatically, without much effort on the programmer's side. Attribute-based programming is a powerful concept that enables Visual Studio .NET to automate a lot of programmer-unfriendly tasks.

Visual Studio .NET is the rapid application development tool for C#. Visual Studio .NET offers complete integration with ASP.NET and enables developers to drag and drop server controls and design Web Forms as they should appear when the user views them. Some of the other advantages of creating C# applications in Visual Studio .NET are:

Visual Studio .NET is a Rapid Application Development (RAD) tool. Instead of adding each control to the Web Form programmatically, it helps to add these controls by using the Toolbox, saving programming effort.

Visual Studio .NET supports custom and composite controls. Developers can create custom controls that encapsulate common functionality needed in a number of applications.

.NET Framework

The .NET Framework is the infrastructure for the new Microsoft .NET Platform.

The .NET Framework is a common environment for building, deploying, and

running Web applications and Web services.

The .NET Framework contains a common language runtime and common class

libraries like ADO .NET, ASP.NET and Windows Forms to provide advanced standard

services that can be integrated into a variety of computer systems.

The .NET Framework provides a feature-rich application environment, simplified development, and easy integration between a number of different development languages.

The .NET Framework is language neutral. Currently it supports C++, C#, Visual

Basic, and Jscript (The modern version of JAVASCRIPT).

Microsoft’s Visual Studio.NET is a common development environment for the

new .NET Framework.


Visual Studio .NET is a development environment, but it is built on and for the .NET

Framework. The .NET Framework provides, through a set of class libraries, the functionality

used by all of the .NET languages, including Microsoft Visual C# and Visual Basic .NET.

Also underlying these languages is a set of runtime services, called the common language

runtime, which manages the execution of code produced out of any and all .NET languages.

Visual Basic has been around, in various forms, for many years and has become one of

the most popular programming languages available. Over time the language has evolved,

with each successive version adding, removing, or modifying some aspect, but Visual

Basic .NET is by far the most significant change to occur to Visual Basic yet.

Common Language Runtime

One of the design goals of the .NET Framework was to unify the runtime engines so that all developers could work with a single set of runtime services. The .NET Framework's solution is called the Common Language Runtime (CLR). The CLR provides capabilities such as memory management, security, and robust error handling to any language that works with the .NET Framework.

The CLR enables languages to inter operate with one another. Memory can be allocated

by code written in one language and can be freed by code written in another language.

Similarly, errors can be raised in one language and processed in another language.

.NET Class Libraries

The .NET Framework provides many classes that help developers reuse code. The .NET class libraries contain code for programming topics such as threading, file I/O, database support, XML parsing, and data structures such as stacks and queues. This entire class library is available to any programming language that supports the .NET Framework.

Because all languages now support the same runtime, they can reuse any class that works

with the .NET Framework. This means that any functionality available to one language will

also be available to any other .NET language.

ASP.NET


ASP.NET is the next version of Active Server Pages (ASP); it is a unified Web

development platform that provides the services necessary for developers to build enterprise-

class Web applications. While ASP.NET is largely syntax compatible, it also provides a new

programming model and infrastructure for more secure, scalable, and stable applications.

ASP.NET is a compiled, .NET-based environment; we can author applications in any .NET compatible language, including Visual Basic .NET, C#, and JScript .NET.

Additionally, the entire .NET Framework is available to any ASP.NET application.

Developers can easily access the benefits of these technologies, which include the managed

common language runtime environment (CLR), type safety, inheritance, and so on.

ASP.NET has been designed to work seamlessly with HTML editors and other

programming tools, including Microsoft Visual Studio .NET. Not only does this make Web

development easier, but it also provides all the benefits that these tools have to offer,

including a GUI that developers can use to drop server controls onto a Web page and fully

integrated debugging support.

Developers can choose from the following two features when creating an ASP.NET application, Web Forms and Web services, or combine these in any way they see fit. Each is supported by the same infrastructure that allows you to use authentication schemes, cache frequently used data, or customize your application's configuration, to name only a few possibilities.

Web Forms allows us to build powerful forms-based Web pages. When building

these pages, we can use ASP.NET server controls to create common UI elements, and

program them for common tasks. These controls allow us to rapidly build a Web Form out of

reusable built-in or custom components, simplifying the code of a page.

An XML Web service provides the means to access server functionality

remotely. Using Web services, businesses can expose programmatic interfaces to their data or

business logic, which in turn can be obtained and manipulated by client and server

applications. XML Web services enable the exchange of data in client-server or server-server

scenarios, using standards like HTTP and XML messaging to move data across firewalls.


XML Web services are not tied to a particular component technology or object-calling

convention. As a result, programs written in any language, using any component model, and

running on any operating system can access XML Web services.

Each of these models can take full advantage of all ASP.NET features, as well as

the power of the .NET Framework and .NET Framework common language runtime.

Accessing databases from ASP.NET applications is an often-used technique for

displaying data to Web site visitors. ASP.NET makes it easier than ever to access databases

for this purpose. It also allows us to manage the database from our code.

ASP.NET provides a simple model that enables Web developers to write logic

that runs at the application level. Developers can write this code in the Global.asax text file or

in a compiled class deployed as an assembly. This logic can include application-level events,

but developers can easily extend this model to suit the needs of their Web application.

ASP.NET provides easy-to-use application and session-state facilities that are

familiar to ASP developers and are readily compatible with all other .NET Framework APIs.

ASP.NET offers the IHttpHandler and IHttpModule interfaces. Implementing the

IHttpHandler interface gives you a means of interacting with the low-level request and

response services of the IIS Web server and provides functionality much like ISAPI

extensions, but with a simpler programming model. Implementing the IHttpModule interface

allows you to include custom events that participate in every request made to your

application.
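As a hedged sketch of the IHttpModule side, the following module hooks a custom event into every request; the module name, the timing logic, and the response header are illustrative, and the module would be registered in Web.config:

```csharp
using System;
using System.Web;

// Illustrative sketch: a module that participates in every request.
public class RequestTimingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Record when each request begins.
        app.BeginRequest += delegate(object sender, EventArgs e)
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            ctx.Items["start"] = DateTime.UtcNow;
        };

        // Report the elapsed time as a custom response header.
        app.EndRequest += delegate(object sender, EventArgs e)
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            TimeSpan elapsed = DateTime.UtcNow - (DateTime)ctx.Items["start"];
            ctx.Response.AppendHeader("X-Elapsed-Ms",
                elapsed.TotalMilliseconds.ToString());
        };
    }

    public void Dispose() { }
}
```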

ASP.NET takes advantage of performance enhancements found in the .NET

Framework and common language runtime. Additionally, it has been designed to offer

significant performance improvements over ASP and other Web development platforms. All

ASP.NET code is compiled, rather than interpreted, which allows early binding, strong

typing, and just-in-time (JIT) compilation to native code, to name only a few of its benefits.

ASP.NET is also easily factorable, meaning that developers can remove modules (a session

module, for instance) that are not relevant to the application they are developing.


ASP.NET provides extensive caching services (both built-in services and caching

APIs). ASP.NET also ships with performance counters that developers and system

administrators can monitor to test new applications and gather metrics on existing

applications.

Writing custom debug statements to your Web page can help immensely in

troubleshooting your application's code. However, it can cause embarrassment if it is not

removed. The problem is that removing the debug statements from your pages when your

application is ready to be ported to a production server can require significant effort.

ASP.NET offers the TraceContext class, which allows us to write custom debug

statements to our pages as we develop them. They appear only when you have enabled

tracing for a page or entire application. Enabling tracing also appends details about a request

to the page, or, if you so specify, to a custom trace viewer that is stored in the root directory

of your application.

The .NET Framework and ASP.NET provide default authorization and authentication

schemes for Web applications. We can easily remove, add to, or replace these schemes,

depending upon the needs of our application.

ASP.NET configuration settings are stored in XML-based files, which are human

readable and writable. Each of our applications can have a distinct configuration file and we

can extend the configuration scheme to suit our requirements.

Differences between ASP.NET and Client-Side Technologies

Client-side refers to the browser and the machine running the browser. Server-side on

the other hand refers to a Web server.

CLIENT-SIDE SCRIPTING


JavaScript and VBScript are generally used for client-side scripting. Client-side scripting executes in the browser after the page is loaded. Using client-side scripting you can add some cool features to your page. Both the HTML and the script are together in the same file, and the script is downloaded as part of the page, which anyone can view. A client-side script runs only on a browser that supports scripting, and specifically the scripting language that is used. Since the script is in the same file as the HTML and executes on the client machine, the page may take a longer time to download.

SERVER-SIDE SCRIPTING

ASP.NET is purely a server-side technology. ASP.NET code executes on the server before it is sent to the browser. The code that is sent back to the browser is pure HTML, not ASP.NET code. ASP.NET code is similar to client-side scripting in that it allows you to write your code alongside HTML. Unlike client-side scripting, however, ASP.NET code is executed on the server and not in the browser. The script that you write alongside your HTML is not sent back to the browser, which prevents others from stealing the code you developed.

ASP.NET FEATURES

ASP.NET is not just a simple upgrade or the latest version of ASP. ASP.NET

combines unprecedented developer productivity with performance, reliability, and

deployment. ASP.NET redesigns the whole process. It's still easy to grasp for newcomers but

it provides many new ways of managing projects. Below are the features of ASP.NET.

Easy Programming Model

ASP.NET makes building real world Web applications dramatically easier. ASP.NET

server controls enable an HTML-like style of declarative programming that let you build

great pages with far less code than with classic ASP.  Displaying data, validating user input,

and uploading files are all amazingly easy. Best of all, ASP.NET pages work in all

browsers, including Netscape, Opera, AOL, and Internet Explorer.


Flexible Language Options

ASP.NET lets you leverage your current programming language skills.  Unlike classic

ASP, which supports only interpreted VBScript and JScript, ASP.NET now supports more

than 25 .NET languages (built-in support for VB.NET, C#, and JScript.NET), giving you

unprecedented flexibility in your choice of language.

Great Tool Support

You can harness the full power of ASP.NET using any text editor, even Notepad.  But

Visual Studio .NET adds the productivity of Visual Basic-style development to the

Web. Now you can visually design ASP.NET Web Forms using familiar drag-drop-

DoubleClick techniques, and enjoy full-fledged code support including statement completion

and color-coding. VS.NET also provides integrated support for debugging and deploying

ASP.NET Web applications. The Enterprise versions of Visual Studio .NET deliver life-cycle

features to help organizations plan, analyze, design, build, test, and coordinate teams that

develop ASP.NET Web applications.  These include UML class modeling, database

modeling (conceptual, logical, and physical models), testing tools (functional, performance

and scalability), and enterprise frameworks and templates, all available within the integrated

Visual Studio .NET environment.

Rich Class Framework

Application features that used to be hard to implement, or required a 3rd-party component,

can now be added in just a few lines of code using the .NET Framework.  The .NET

Framework offers over 4500 classes that encapsulate rich functionality like XML, data

access, file upload, regular expressions, image generation, performance monitoring and

logging, transactions, message queuing, SMTP mail, and much more.

Improved Performance and Scalability

ASP.NET lets you serve more users with the same hardware.

Compiled execution


ASP.NET is much faster than classic ASP, while preserving the "just hit save" update

model of ASP. No explicit compile step is required: ASP.NET will automatically

detect any changes, dynamically compile the files if needed, and store the compiled results to

reuse for subsequent requests. Dynamic compilation ensures that your application is always

up to date, and compiled execution makes it fast.  Most applications migrated from classic

ASP see a 3x to 5x increase in pages served.

Rich output caching

ASP.NET output caching can dramatically improve the performance and scalability of

your application. When output caching is enabled on a page, ASP.NET executes the page just

once, and saves the result in memory in addition to sending it to the user.   When another user

requests the same page, ASP.NET serves the cached result from memory without re-

executing the page. Output caching is configurable, and can be used to cache individual

regions or an entire page. Output caching can dramatically improve the performance of data-

driven pages by eliminating the need to query the database on every request.
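As an illustrative sketch (not taken from this project's code), page-level output caching is typically enabled with an @ OutputCache directive at the top of a web form; the page content and duration shown here are hypothetical:

```aspx
<%@ Page Language="C#" %>
<%-- Cache the rendered output for 60 seconds; VaryByParam="none"
     means one cached copy regardless of query string --%>
<%@ OutputCache Duration="60" VaryByParam="none" %>
<html>
  <body>
    Generated at: <%= DateTime.Now %>
  </body>
</html>
```

While the cache entry is valid, ASP.NET serves the stored result without re-executing the page, as described above.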

Web-Farm Session State

ASP.NET session state lets you share session data (user-specific state values) across all

machines in your Web farm.  Now a user can hit different servers in the Web farm over

multiple requests and still have full access to her session.  And since business components

created with the .NET Framework are free-threaded, you no longer need to worry about

thread affinity.
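A hedged sketch of the web.config fragment that moves session state out of process so every machine in the farm can share it (the state-server address is hypothetical):

```xml
<!-- Hypothetical configuration: all farm machines point at one state server -->
<system.web>
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=10.0.0.5:42424"
                cookieless="false"
                timeout="20" />
</system.web>
```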

Enhanced Reliability

ASP.NET ensures that your application is always available to your users.

Memory Leak, Deadlock and Crash Protection

ASP.NET automatically detects and recovers from


errors like deadlocks and memory leaks to ensure your application is always available to your

users.  For example, say that your application has a small memory leak, and that after a week

the leak has tied up a significant percentage of your server's virtual memory. ASP.NET will

detect this condition, automatically start up another copy of the ASP.NET worker process,

and direct all new requests to the new process. Once the old process has finished processing

its pending requests, it is gracefully disposed and the leaked memory is

released. Automatically, without administrator intervention or any interruption of service,

ASP.NET has recovered from the error.

Easy Deployment

ASP.NET takes the pain out of deploying server applications with "no touch" application

deployment: ASP.NET dramatically simplifies installation of your application. With

ASP.NET, you can deploy an entire application as easily as an HTML page; just copy it to

the server.  No need to run regsvr32 to register any components, and configuration settings

are stored in an XML file within the application.
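A minimal sketch of such an XML configuration file (web.config); the setting name is hypothetical:

```xml
<?xml version="1.0"?>
<configuration>
  <appSettings>
    <!-- Hypothetical application setting, deployed by simply copying this file -->
    <add key="ProxyCacheMinutes" value="10" />
  </appSettings>
</configuration>
```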

Dynamic update of running application

ASP.NET now lets you update compiled components without restarting the web

server. In the past with classic COM components, the developer would have to restart the

web server each time he deployed an update.  With ASP.NET, you simply copy the

component over the existing DLL; ASP.NET will automatically detect the change and start

using the new code.

Easy Migration Path

You don't have to migrate your existing applications to start using ASP.NET.

ASP.NET runs on IIS side-by-side with classic ASP on Windows 2000 and Windows XP

platforms. Your existing ASP applications continue to be processed by ASP.DLL, while new


ASP.NET pages are processed by the new ASP.NET engine. You can migrate application by

application, or single pages.  And ASP.NET even lets you continue to use your existing

classic COM business components.

XML Web Services

XML Web services allow applications to communicate and share data over the

Internet, regardless of operating system or programming language. ASP.NET makes

exposing and calling XML Web Services simple. Any class can be converted into an XML

Web Service with just a few lines of code, and can be called by any SOAP client.  Likewise,

ASP.NET makes it incredibly easy to call XML Web Services from your application. No

knowledge of networking, XML, or SOAP is required.
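A hedged sketch of exposing a class as an XML Web Service with the ASMX model (the service and method names are hypothetical):

```csharp
using System.Web.Services;

// Hypothetical service: saving this as Calculator.asmx under IIS would
// expose Add to any SOAP client, per the ASMX model described above.
public class Calculator : WebService
{
    // The [WebMethod] attribute is the "few lines of code" mentioned above
    [WebMethod]
    public int Add(int a, int b)
    {
        return a + b;
    }
}
```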

Mobile Web Device Support

ASP.NET Mobile Controls let you easily target cell phones, PDAs and over 80

mobile Web devices. You write your application just once, and the mobile controls

automatically generate WAP/WML, HTML, or iMode as required by the requesting device.

Pages

ASP.NET pages, known officially as "web forms", are the main building block for

application development. Web forms are contained in files with an ASPX extension; in

programming jargon, these files typically contain static HTML or XHTML markup, as well

as markup defining server-side Web Controls and User Controls where the developers place

all the required static and dynamic content for the web page. Additionally, dynamic code

which runs on the server can be placed in a page within a block <% -- dynamic code -- %>

which is similar to other web development technologies such as PHP, JSP, and ASP, but this

practice is generally discouraged except for the purposes of data binding since it requires

more calls when rendering the page.
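A small sketch of a web form combining a declarative server control with an inline render block (the file content and control ID are hypothetical):

```aspx
<%@ Page Language="C#" %>
<html>
  <body>
    <form runat="server">
      <%-- Declarative server control --%>
      <asp:Label ID="Greeting" runat="server" Text="Hello, world" />
      <%-- Inline code block: generally discouraged except for data binding --%>
      <%= DateTime.Now.Year %>
    </form>
  </body>
</html>
```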

SQL SERVER


Microsoft SQL Server includes a complete set of graphical tools and command line

utilities that allow users, programmers, and administrators to increase their productivity. The

step-by-step tutorials listed below, help you learn to get the most out of SQL Server tools so

you can work efficiently, right from the start. The following table describes the topics in this

section.

Getting started with the database engine

This tutorial is for users who are new to SQL Server. The tutorial reviews the basic

tools, shows you how to start the database engine, and describes how to connect to the

database engine on the same computer and also from another computer.

2.3.2 FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called

SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the

term Analysis Services. Analysis Services also includes a new data mining component. The

Repository component available in SQL Server version 7.0 is now called Microsoft SQL

Server 2000 Meta Data Services. References to the component now use the term Meta Data

Services. The term repository is used only in reference to the repository engine within Meta

Data Services.

A SQL Server database consists of six types of objects.

They are:

1. TABLE

2. QUERY

3. FORM

4. REPORT


5. MACRO

6. MODULE

TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. We can

specify what kind of data each field will hold.

Datasheet View

To add, edit, or analyze the data itself, we work in the table's datasheet view mode.

QUERY:

A query is a question that is asked of the data. Access gathers data that answers

the question from one or more tables. The data that make up the answer is either a dynaset (if

you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest

information in the dynaset. Access either displays the dynaset or snapshot for us to view, or

performs an action on it, such as deleting or updating.

FORMS:

A form is used to view and edit information in the database, record by record. A form

displays only the information we want to see, in the way we want to see it. Forms use the


familiar controls such as textboxes and checkboxes. This makes viewing and entering data

easy.

Views of Form:

We can work with forms in several views. Primarily there are two.

They are:

1. Design View

2. Form View

Design View

To build or modify the structure of a form, we work in forms design view. We can

add controls to the form that are bound to fields in a table or query, including textboxes, option

buttons, graphs, and pictures.

Form View

The form view displays the whole design of the form.

REPORT:

A report is used to view and print information from the database. The report can

group records into many levels and compute totals and averages by checking values from

many records at once. The report can also be made attractive and distinctive, because we have

control over its size and appearance.

MACRO:


A macro is a set of actions. Each action in a macro does something, such as opening a

form or printing a report. We write macros to automate common tasks, making the work easy

and saving time.

MODULE:

Modules are units of code written in the Access Basic language. We can write and use modules

to automate and customize the database in very sophisticated ways.

SQL Statements

All operations on the information in an Oracle database are performed using SQL

statements. A SQL statement is a string of SQL text that is given to Oracle to execute.

A statement must be the equivalent of a complete SQL sentence, as in:

SELECT client FROM server;

Only a complete SQL statement can be executed, whereas a sentence fragment, such

as the following, generates an error indicating that more text is required before a SQL

statement can run:

SELECT client

A SQL statement can be thought of as a very simple, but powerful, computer program

or instruction. SQL statements are divided into the following categories:

Data definition language (DDL) statements

Data manipulation language (DML) statements

Transaction control statements

Session control statements

System control statements

Embedded SQL statements

Data Definition Language (DDL) Statements


Data definition language statements define, maintain, and drop schema objects when

they are no longer needed. DDL statements also include statements that permit a user to grant

other users the privileges, or rights, to access the database and specific objects within the

database.
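As an illustration of DDL (the table, column, and user names here are hypothetical):

```sql
-- Define a schema object
CREATE TABLE download_log (
    url    VARCHAR(100),
    method VARCHAR(100)
);

-- Grant another user the privilege to read it
GRANT SELECT ON download_log TO app_user;

-- Drop the object when it is no longer needed
DROP TABLE download_log;
```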

Data Manipulation Language (DML) Statements

Data manipulation language statements manipulate the database's data. For example,

querying, inserting, updating, and deleting rows of a table are all DML operations. Locking a

table or view and examining the execution plan of an SQL statement are also DML

operations.
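Illustrative DML statements covering the operations just listed (the table name and values are hypothetical):

```sql
-- Query rows
SELECT url, method FROM download_log;

-- Insert, update, and delete rows
INSERT INTO download_log (url, method) VALUES ('http://example.com/a', 'JSON');
UPDATE download_log SET method = 'Stream' WHERE url = 'http://example.com/a';
DELETE FROM download_log WHERE url = 'http://example.com/a';
```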

Transaction Control Statements

Transaction control statements manage the changes made by DML statements. They

enable the user or application developer to group changes into logical transactions. Examples

include COMMIT, ROLLBACK, and SAVEPOINT.
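A short illustration of grouping DML changes into a logical transaction (table and savepoint names hypothetical):

```sql
INSERT INTO download_log (url, method) VALUES ('http://example.com/a', 'JSON');
SAVEPOINT after_first;
INSERT INTO download_log (url, method) VALUES ('http://example.com/b', 'Stream');
-- Undo only the second insert
ROLLBACK TO SAVEPOINT after_first;
-- Make the remaining change permanent
COMMIT;
```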

Session Control Statements

Session control statements let a user control the properties of his current session,

including enabling and disabling roles and changing language settings. The two session

control statements are ALTER SESSION and SET ROLE.

System Control Statements

System control statements change the properties of the Oracle server instance. The

only system control statement is ALTER SYSTEM. It lets you change settings such as the

minimum number of shared servers, kill a session, and perform other tasks.

Embedded SQL Statements


Embedded SQL statements incorporate DDL, DML, and transaction control

statements in a procedural language program (such as those used with the Oracle

precompilers). Examples include OPEN, CLOSE, FETCH, and EXECUTE.

3. SYSTEM DESIGN


A data flow model describes a system in terms of external entities from which data

flows to a process, which transforms the data and creates output data that goes to other

processes as input.

The main merit of a data flow diagram is that it provides an overview of

what data a system processes, what transformations of data are done, what files are used,

and where the results flow. The graphical representation of the system makes it a good

communication tool between the user and an analyst. Data flow diagrams are a network

representation of a system. They are excellent mechanisms for communicating with

customers during requirements analysis.

DFD SYMBOLS:

A square defines the source or destination of system data.

An arrow identifies a data flow.

A circle represents a process that transforms incoming data flows into outgoing

data flows.

An open rectangle is a data store: information that resides within the bounds of

the system to be modeled.

A separate symbol represents the database table.

3.1 DATA FLOW DIAGRAM:


[Data flow diagram (JSON): starting from the client, a connection is created to the remote server and the request is sent with its data, via either the Regular or the Stream path. The response is received and handed to the download manager, which stores the file on the system. The Stream path minimizes file download time, while the Regular path takes more time to download.]


3.2 Use Case Description

The client connects to the proxy server, requesting some service, such as a file, connection,

web page, or other resource, available from a different server. Web Proxy is a very simple

HTTP proxy server written as a console application and as a Windows service. We use the

JSON method to download. Internet Information Server (IIS) is a World Wide Web server;

IIS means that you can publish WWW pages.

Use Case Diagram

[Use case diagram: the user creates a connection to the remote server and sends the request with data, via either the Regular or the Stream use case; the request is forwarded to the remote server and the response is received. The download manager stores the file on the system. The Regular path takes more time to download, while the Stream path minimizes file download time.]


3.3 CLASS DIAGRAM

[Class diagram:

Class1 - Main: CreateServerCon(), CreateRemoteCon()

Class2 - Regular: RequestServer(), RemoteServer()

Class2 - Stream: RequestServer(), RemoteServer()

Class3 - Reduce: Download(), Minimize()

Class4 - Normal: Store(), Return()]


3.4 DATA DICTIONARY:

Databases are designed using the latest software available and the development

process follows the specific requirements of the Client. We provide total flexibility in terms

of database design - the development process is essentially "Client driven".

It is important to remember that a well-designed database should provide an end

product (database) that has been tailored to meet both your professional and practical

business needs and therefore serve its intended purpose.

The database design & development process normally includes:

Comprehensive and detailed analysis of the business needs, preparation of a design

specification, initial design concept, database programming, database testing/validation,

client support, client site installation, and of course extensive database developer and client

communication.

Tables:

A table is made of rows and columns. A row is considered a record: a group of

details about one specific item in the table. A column is a field representing one particular

category of information about the records in the table.

Queries:

A table can be large depending on the information it holds. To further organize the

data, you should be able to retrieve necessary information for a specific purpose. The solution

is to create a query (or queries) in order to limit part of the data in a table for a specific goal,

for better management or search. That's the role of a query.


TABLE DESCRIPTION:

Table: Report

Column Name    Data Type       Allow Nulls

Url            varchar(100)    True

Specify        varchar(200)    True

Method         varchar(100)    True

Date           varchar(50)     True
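A sketch of the SQL Server DDL that would create the Report table described above; the column names and types follow the table, and NULL corresponds to Allow Nulls = True:

```sql
CREATE TABLE Report (
    Url     VARCHAR(100) NULL,
    Specify VARCHAR(200) NULL,
    Method  VARCHAR(100) NULL,
    [Date]  VARCHAR(50)  NULL  -- bracketed because DATE is also a type name
);
```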


4. PROJECT DESIGN

System planning

A feasibility study determines if a project can and should be undertaken. Once it has been

determined that the software is feasible, the analyst can go ahead and prepare the software

specification, which finalizes the software requirements. A feasibility study is carried out to select

the best system that meets performance requirements.

In the conduct of the feasibility study, the analyst will usually consider five distinct, but

inter-related types of feasibility. They are

Technical feasibility

Operational feasibility

Economic feasibility

Management feasibility

Time feasibility

(a)Technical Feasibility

The technical feasibility depends mainly on four factors

1. Project size.

2. Project Structure

3. Development group’s expertise with the technology

4. User’s group expertise with the application

This project does not have many applicable constraints. The project size is moderate and the

level of risk is low. The project structure is modular and not overly complex. The

development group is well versed with the technology and application. The project provides a

small guide to the technology and a supplementary user manual, which is more than enough for a

novice to work on the technology.


(b) Operational Feasibility

This determines whether a proposed solution is desirable within the existing managerial

and organizational framework. The proposed system is highly user friendly, self-explanatory

and platform independent. It is also flexible to any environment.

(c) Economic Feasibility

This project is economically feasible in terms of cost-benefit analysis. It is

economically feasible when compared to the benefits which other projects provide. The

proposed system provides more benefits than the existing systems,

whereas the cost increases only by 20-30%, which makes it more reliable and economical.

(d) Behavioral Feasibility

The system is user friendly as it is based on GUI design. On the whole the system is

designed such that the system is user friendly and is flexible to windows environment.

(e) Time Feasibility

Time feasibility is a determination of whether the proposed software can be implemented

fully within a stipulated time frame. If software takes too much time, it is likely to be

rejected. This project is implemented within the stipulated time frame.

DESIGN PHASE:

After the system analysis is carried out, Project Design is done to arrive at the

specification derived during the analysis. The design of the system is concerned with the

token generation and sending it to the user. Design is the basis for implementation, testing

and maintenance.


Characteristics of a good Design:

The design should,

Be modular.

Minimize the complexity of the interfaces.

Contain distinct representation of software components.

Have modules exhibiting independent functional characteristics.

DESIGN PHASE

Design Phase requires the following information:

What are the inputs required?

How are the data organized?

What should be the screen formats?

What are the processes involved in the system?

What are the outputs produced?

DESIGN PROCESS

Design Process includes:

• Input and Output Design

• Database Design

For easy understanding, the system is broken into the following parts.

• Input Design

• Output Design

INPUT DESIGN

It aids the process of converting user-oriented inputs to computer-based

formats.


It is made to make data entry as easy, logical, and error-free as possible;

validations are done for every input.

For each invalid input, an error message is displayed. The error messages are

easy for the user to understand.

In this project, the input design is made in the Request part.

Request selection:

In this section the user can give the request to download the file based on four

options. They are,

Plain HTTP to Regular

Plain HTTP to Stream

JSON to Regular

JSON to Stream
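A hedged sketch of how the four options above might be mapped to proxy handler URLs; the handler file names RegularProxy.ashx and StreamProxy.ashx follow the appendix's RegularProxy class naming but are assumptions, not taken from the project code:

```csharp
using System;

class RequestSelector
{
    // Maps a (format, mode) choice to a hypothetical proxy handler URL.
    static string BuildProxyUrl(string format, string mode, string target)
    {
        // Stream vs Regular selects the handler; JSON adds a content-type hint
        string handler = mode == "Stream" ? "StreamProxy.ashx" : "RegularProxy.ashx";
        string typeParam = format == "JSON" ? "&type=application/json" : "";
        return handler + "?url=" + Uri.EscapeDataString(target) + typeParam;
    }

    static void Main()
    {
        Console.WriteLine(BuildProxyUrl("JSON", "Stream", "http://example.com/data"));
    }
}
```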

OUTPUT DESIGN

The output design includes:

Deciding on the information content.

A URL is identified so that, using it as a key, all other relevant data is

retrieved.

The primary output can be directed to the screen, printer or a file.

Output is the primary purpose of any system. Output should be easily understood by

the user. Output is what the client is buying when he or she pays for a development project.

An effective output method should be used to produce the output design. In this project, the

output design is obtained as statements reporting the proper response, or the minimum download

time, for a given request.


Database design

Database design is the process of producing a detailed data model of a database.

This logical data model contains all the needed logical and physical design choices and

physical storage parameters needed to generate a design in a Data Definition Language,

which can then be used to create a database. A fully attributed data model contains detailed

attributes for each entity.

Principally, and most correctly, it can be thought of as the logical design of the

base data structures used to store the data. In the relational model these are the tables and

views. However, the term database design could also be used to apply to the overall process

of designing, not just the base data structures, but also the forms and queries used as part of

the overall database application within the database management system (DBMS).

Design process

The process of doing database design generally consists of a number of steps which

will be carried out by the database designer. Not all of these steps will be necessary in all

cases. Usually, the designer must:

Determine the relationships between the different data elements

Superimpose a logical structure upon the data on the basis of these relationships.

Within the relational model, the final step can generally be broken down into two

further steps: determining the grouping of information within the system (generally,

determining what the basic objects are about which information is being stored), and then

determining the relationships between these groups of information, or objects. This step is not

necessary with an Object database.

The tree structure of data may enforce a hierarchical model organization, with a

parent-child relationship table. An Object database will simply use a one-to-many

relationship between instances of an object class. It also introduces the concept of a

hierarchical relationship between object classes, termed inheritance.


Determining data to be stored

In a majority of cases, the person who is doing the design of a database is a person

with expertise in the area of database design, rather than expertise in the domain from which

the data to be stored is drawn e.g. financial information, biological information etc. Therefore

the data to be stored in the database must be determined in cooperation with a person who

does have expertise in that domain, and who is aware of what data must be stored within the

system.

This process is one which is generally considered part of requirements analysis, and

requires skill on the part of the database designer to elicit the needed information from those

with the domain knowledge. This is because those with the necessary domain knowledge

frequently cannot express clearly what their system requirements for the database are as they

are unaccustomed to thinking in terms of the discrete data elements which must be stored.

Data to be stored can be determined by Requirement Specification.

Conceptual schema

Once a database designer is aware of the data which is to be stored within the

database, they must then determine where the dependencies are within the data. Sometimes when

data is changed you can be changing other data that is not visible. For example, in a list of

names and addresses, assuming a situation where multiple people can have the same address,

but one person cannot have two addresses; the name is dependent upon the address, because

if the address is different, then the associated name is different too. However, the other way

around is different: one attribute can change and not another.

Physical database design

The physical design of the database specifies the physical configuration of the

database on the storage media. This includes detailed specification of data elements, data

types, indexing options and other parameters residing in the DBMS data dictionary. It is the

detailed design of a system that includes modules & the database's hardware & software

specifications of the system.


5. SYSTEM IMPLEMENTATION AND TESTING

SYSTEM IMPLEMENTATION

Implementation is the most crucial stage in achieving a successful system and giving

the user’s confidence that the new system is workable and effective. One form of implementation is

a modified application replacing an existing one. This type of conversion is relatively easy to

handle, provided there are no major changes in the system.

Each program is tested individually at the time of development using test data, and it is

verified that the programs link together in the way specified in the program specifications. The

computer system and its environment are tested to the satisfaction of the user. The system

that has been developed is accepted and proved to be satisfactory for the user, and so the

system is going to be implemented very soon. A simple operating procedure is included so

that the user can understand the different functions clearly and quickly.

Initially as a first step the executable form of the application is to be created and

loaded on the common server machine, which is accessible to all users, and the server is

to be connected to a network. The final stage is to document the entire system which provides

components and the operating procedures of the system.

Implementation is the stage of the project when the theoretical design is turned out

into a working system. Thus it can be considered to be the most critical stage in achieving a

successful new system and in giving the user, confidence that the new system will work and

be effective.

The implementation stage involves careful planning, investigation of the existing

system and its constraints on implementation, designing of methods to achieve changeover

and evaluation of changeover methods.

Implementation is the process of converting a new system design into operation.

It is the phase that focuses on user training, site preparation and file conversion for installing


a candidate system. The important factor that should be considered here is that the conversion

should not disrupt the functioning of the organization.

TESTING

Testing is the process of running a system with the intention of finding errors. It enhances the

integrity of a system by detecting deviations in design and errors in the system, and by

detecting error-prone areas. Testing also adds value to the product by conforming to the user

requirements.

Testing is essential to ensure:

Software quality

Software reliability

System assurance

Performance and capacity utilization

Testing is a part of Verification and Validation.

Verification: Are we building the system right?

Validation: Are we building the right system?

1. Verification: is the checking or testing of items, including software, for conformance

and consistency by evaluating the results against pre-specified requirements. 

[Verification: Are we building the system right?]

2. Error Detection: Testing should intentionally attempt to make things go wrong to

determine if things happen when they shouldn’t or things don’t happen when they

should.


3. Validation: looks at the system correctness – i.e. is the process of checking that what

has been specified is what the user actually wanted.  [Validation: Are we building the

right system?]

TYPES OF TESTING

BLACK BOX TESTING

Black box testing treats the system or component as one whose inputs, outputs, and general

functions are known, but whose contents or implementation are unknown or irrelevant.

Black box testing techniques

Equivalence partitioning

Boundary value analysis

WHITE BOX TESTING

It is also called as “Structural Testing” or “Logic-driven Testing” or “Glass Box

Testing” or “Clear Box testing”

In white box testing, the source code is available for testing.

Structural Testing process

Program Logic-driven Testing

Design-based Testing

Examines the internal structure of program

White box testing techniques

Basis path testing

Flow graph notation

Cyclomatic complexity

Various level of Testing

1. Unit Testing

2. Functionality Testing

3. Integration Testing


4. System Testing

UNIT TESTING

Unit testing focuses verification efforts on the smallest unit of the software design, the

module. This is also known as “Module Testing”. The modules are tested separately. This

testing was carried out during the programming stage itself. In this testing, each module is

found to be working satisfactorily as regards the expected output from the module.

INTEGRATION TESTING

Data can be lost across an interface; one module can have adverse effects on another.

Integration testing is the systematic testing of the construction of the program structure, while at

the same time conducting tests to uncover errors associated with the interfaces. Here correction

is difficult because the isolation of a cause is complicated by the vast expanse of the entire

program. Thus in the integration testing step, all the errors uncovered are corrected before the

next testing steps. This evaluates the interaction and consistency of interacting components.

Integration testing techniques are

a. Top-Down Integration

b. Bottom-up Integration

VALIDATION TESTING

At the conclusion of integration testing, the software is completely assembled as a

package, interfacing errors have been uncovered and corrected, and a final series of software

tests, the validation tests, begins. Validation testing can be defined in many ways. After a

validation test has been conducted, one of two possible conditions exists:

either the function or performance characteristics conform to specification and are accepted,

or a deviation from specification is uncovered and a deficiency list is created.


OUTPUT TESTING

After validation testing, the next step is output testing of the proposed

system, since no system can be useful if it does not produce the required output in a specific

format. The outputs generated by the system under consideration are tested by asking the users

about the format they require. Here, the output format is considered in two ways: one is on the

screen and the other is the printed format. The output format on the screen is found to be

correct, as the format was designed in the system design phase according to the user's needs.

For the hard copy also, the output matches the requirements specified by the user. Hence

output testing does not result in any correction to the system.

USER ACCEPTANCE TESTING

User acceptance testing of a system is the key factor in the success of any system. The

system under study is tested for user acceptance by constantly keeping in touch with the

prospective system users at the time of development and making changes wherever required.


6. FUTURE ENHANCEMENT

Here, we have designed a new distributed algorithm, namely dynamically distributed

parallel periodic switching (D2PS), that effectively removes the negative factors of the

existing parallel downloading, chunk-based switching, and periodic switching, thus minimizing

the average download time.

There are two schemes

Parallel Permanent Connection, and

Parallel Random Periodic Switching in our dynamically distributed

parallel periodic switching (D2PS) method. In our Parallel Permanent

Connection, the downloader randomly chooses multiple source peers

and divides the file randomly into chunks and download happens in

parallel for the fixed time slot t and source selection function does not

change for that fixed time slot.
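As a rough illustration of the Parallel Permanent Connection scheme described above, the sketch below randomly splits a file among several source peers and computes how long one fixed time slot's download would take when every peer keeps its chunk for the whole slot. This is a simplified model, not the project's implementation; the class name D2PSSketch, the peer capacities, and the file size are assumptions made for illustration only.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch only (hypothetical names and numbers, not the
// project's actual code): Parallel Permanent Connection in D2PS.
public static class D2PSSketch
{
    // Randomly divide a file of 'fileSize' bytes into 'chunks' parts by
    // picking (chunks - 1) random cut points. Chunk sizes always sum to
    // fileSize; a sketch like this ignores the rare zero-size chunk that
    // duplicate cut points would produce.
    public static int[] RandomChunkSizes(int fileSize, int chunks, Random rng)
    {
        var cuts = Enumerable.Range(0, chunks - 1)
                             .Select(_ => rng.Next(1, fileSize))
                             .OrderBy(c => c)
                             .ToList();
        cuts.Insert(0, 0);
        cuts.Add(fileSize);
        return Enumerable.Range(0, chunks)
                         .Select(i => cuts[i + 1] - cuts[i])
                         .ToArray();
    }

    // With permanent connections, the slot finishes when the slowest peer
    // finishes its chunk: time = max over peers of (chunkSize / capacity).
    public static double SlotDownloadTime(int[] chunkSizes, double[] capacities)
    {
        double worst = 0;
        for (int i = 0; i < chunkSizes.Length; i++)
            worst = Math.Max(worst, chunkSizes[i] / capacities[i]);
        return worst;
    }

    public static void Main()
    {
        var rng = new Random(42);
        int fileSize = 10000000;                              // 10 MB, hypothetical
        double[] capacities = { 250000, 500000, 1000000 };    // bytes/sec, hypothetical
        int[] sizes = RandomChunkSizes(fileSize, capacities.Length, rng);
        Console.WriteLine("Chunk sizes: " + string.Join(", ", sizes));
        Console.WriteLine("Slot time: " + SlotDownloadTime(sizes, capacities) + " s");
    }
}
```

Because the chunk division is random while capacities are heterogeneous, the slot time is dominated by the unluckiest (chunk, peer) pairing, which is exactly the negative factor the periodic re-switching in the second scheme is meant to average out.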


7. APPENDIX

7.1 sample coding:

Regular-proxy:

using System;
using System.Web;
using System.Web.Caching;
using System.Net;
using ProxyHelpers;

public class RegularProxy : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string url = context.Request["url"];
        int cacheDuration = Convert.ToInt32(context.Request["cache"] ?? "0");
        string contentType = context.Request["type"];

        // We don't want to buffer because we want to save memory
        context.Response.Buffer = false;

        // Serve from cache if available
        if (cacheDuration > 0)
        {
            if (context.Cache[url] != null)
            {
                context.Response.BinaryWrite(context.Cache[url] as byte[]);
                context.Response.Flush();
                return;
            }
        }

        using (new TimedLog("RegularProxy\t" + url))
        using (WebClient client = new WebClient())
        {
            if (!string.IsNullOrEmpty(contentType))
                client.Headers["Content-Type"] = contentType;

            client.Headers["Accept-Encoding"] = "gzip";
            client.Headers["Accept"] = "*/*";
            client.Headers["Accept-Language"] = "en-US";
            client.Headers["User-Agent"] = "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6";

            byte[] data;
            using (new TimedLog("RegularProxy\tDownload Sync"))
            {
                data = client.DownloadData(url);
            }

            if (cacheDuration > 0)
                context.Cache.Insert(url, data, null, Cache.NoAbsoluteExpiration,
                    TimeSpan.FromMinutes(cacheDuration), CacheItemPriority.Normal, null);

            if (!context.Response.IsClientConnected)
                return;

            // Deliver content type, encoding and length as received from the external URL
            context.Response.ContentType = client.ResponseHeaders["Content-Type"];
            string contentEncoding = client.ResponseHeaders["Content-Encoding"];
            string contentLength = client.ResponseHeaders["Content-Length"];

            if (!string.IsNullOrEmpty(contentEncoding))
                context.Response.AppendHeader("Content-Encoding", contentEncoding);
            if (!string.IsNullOrEmpty(contentLength))
                context.Response.AppendHeader("Content-Length", contentLength);

            if (cacheDuration > 0)
                HttpHelper.CacheResponse(context, cacheDuration);
            else
                HttpHelper.DoNotCacheResponse(context);

            // Transmit the exact bytes downloaded
            using (new TimedLog("RegularProxy\tResponse Write " + data.Length))
            {
                context.Response.OutputStream.Write(data, 0, data.Length);
                context.Response.Flush();
            }
        }
    }

    public bool IsReusable { get { return false; } }
}


Stream-proxy:

using System;
using System.Diagnostics;
using System.Threading;
using System.Web;
using System.Net;
using System.IO;
using System.IO.Compression;
using System.Web.Caching;
using ProxyHelpers;

public class SteamingProxy : IHttpAsyncHandler
{
    const int BUFFER_SIZE = 8 * 1024;

    private Utility.PipeStream _PipeStream;
    private Stream _ResponseStream;

    public void ProcessRequest(HttpContext context)
    {
        string url = context.Request["url"];
        int cacheDuration = Convert.ToInt32(context.Request["cache"] ?? "0");
        string contentType = context.Request["type"];

        Log.WriteLine("--- " + url + " ----");

        // Serve from cache if available
        if (cacheDuration > 0)
        {
            if (context.Cache[url] != null)
            {
                CachedContent content = context.Cache[url] as CachedContent;

                if (!string.IsNullOrEmpty(content.ContentEncoding))
                    context.Response.AppendHeader("Content-Encoding", content.ContentEncoding);
                if (!string.IsNullOrEmpty(content.ContentLength))
                    context.Response.AppendHeader("Content-Length", content.ContentLength);

                context.Response.ContentType = content.ContentType;
                content.Content.Position = 0;
                content.Content.WriteTo(context.Response.OutputStream);
                return; // stop here; otherwise the content would be downloaded again
            }


        }

        using (new TimedLog("StreamingProxy\t" + url))
        {
            HttpWebRequest request = HttpHelper.CreateScalableHttpWebRequest(url);

            // As we will stream the response, we don't want to automatically
            // decompress the content when the source sends compressed content
            request.AutomaticDecompression = DecompressionMethods.None;

            if (!string.IsNullOrEmpty(contentType))
                request.ContentType = contentType;

            using (new TimedLog("StreamingProxy\tTotal GetResponse and transmit data"))
            using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
            {
                this.DownloadData(request, response, context, cacheDuration);
            }
        }
    }

    public bool IsReusable { get { return false; } }

    private void DownloadData(HttpWebRequest request, HttpWebResponse response,
        HttpContext context, int cacheDuration)
    {
        MemoryStream responseBuffer = new MemoryStream();
        context.Response.Buffer = false;

        try
        {
            if (response.StatusCode != HttpStatusCode.OK)
            {
                context.Response.StatusCode = (int)response.StatusCode;
                return;
            }

            using (Stream readStream = response.GetResponseStream())
            {
                if (context.Response.IsClientConnected)
                {
                    string contentLength = string.Empty;
                    string contentEncoding = string.Empty;
                    ProduceResponseHeader(response, context, cacheDuration,
                        out contentLength, out contentEncoding);


                    //int totalBytesWritten = TransmitDataInChunks(context, readStream, responseBuffer);
                    //int totalBytesWritten = TransmitDataAsync(context, readStream, responseBuffer);
                    int totalBytesWritten = TransmitDataAsyncOptimized(context, readStream, responseBuffer);

                    Log.WriteLine("Response generated: " + DateTime.Now.ToString());
                    Log.WriteLine(string.Format("Content Length vs Bytes Written: {0} vs {1} ",
                        contentLength, totalBytesWritten));

                    if (cacheDuration > 0)
                    {
                        // Cache the content on the server for the specified duration
                        CachedContent cache = new CachedContent();
                        cache.Content = responseBuffer;
                        cache.ContentEncoding = contentEncoding;
                        cache.ContentLength = contentLength;
                        cache.ContentType = response.ContentType;

                        context.Cache.Insert(request.RequestUri.ToString(), cache, null,
                            Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(cacheDuration),
                            CacheItemPriority.Normal, null);
                    }
                }

                using (new TimedLog("StreamingProxy\tResponse Flush"))
                {
                    context.Response.Flush();
                }
            }
        }
        catch (Exception x)
        {
            Log.WriteLine(x.ToString());
            request.Abort();
        }
    }

    private int TransmitDataInChunks(HttpContext context, Stream readStream, MemoryStream responseBuffer)
    {
        byte[] buffer = new byte[BUFFER_SIZE];
        int bytesRead;
        int totalBytesWritten = 0;


        using (new TimedLog("StreamingProxy\tTotal read from socket and write to response"))
            while ((bytesRead = readStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
            {
                using (new TimedLog("StreamingProxy\tWrite " + bytesRead + " to response"))
                    context.Response.OutputStream.Write(buffer, 0, bytesRead);

                responseBuffer.Write(buffer, 0, bytesRead);
                totalBytesWritten += bytesRead;
            }

        return totalBytesWritten;
    }

    private int TransmitDataAsync(HttpContext context, Stream readStream, MemoryStream responseBuffer)
    {
        this._ResponseStream = readStream;
        _PipeStream = new Utility.PipeStreamBlock(5000);
        byte[] buffer = new byte[BUFFER_SIZE];

        // Asynchronously read content from the response stream on a separate thread
        Thread readerThread = new Thread(new ThreadStart(this.ReadData));
        readerThread.Start();
        //ThreadPool.QueueUserWorkItem(new WaitCallback(this.ReadData));

        int totalBytesWritten = 0;
        int dataReceived;

        using (new TimedLog("StreamingProxy\tTotal read and write"))
        {
            while ((dataReceived = this._PipeStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
            {
                using (new TimedLog("StreamingProxy\tWrite " + dataReceived + " to response"))
                {
                    context.Response.OutputStream.Write(buffer, 0, dataReceived);
                    responseBuffer.Write(buffer, 0, dataReceived);
                    totalBytesWritten += dataReceived;
                }
            }
        }

        _PipeStream.Dispose();
        return totalBytesWritten;


    }

    private int TransmitDataAsyncOptimized(HttpContext context, Stream readStream, MemoryStream responseBuffer)
    {
        this._ResponseStream = readStream;
        _PipeStream = new Utility.PipeStreamBlock(10000);
        //_PipeStream = new Utility.PipeStream(10000);
        byte[] buffer = new byte[BUFFER_SIZE];

        // Asynchronously read content from the response stream
        Thread readerThread = new Thread(new ThreadStart(this.ReadData));
        readerThread.Start();
        //ThreadPool.QueueUserWorkItem(new WaitCallback(this.ReadData));

        // Write to response
        int totalBytesWritten = 0;
        int dataReceived;

        byte[] outputBuffer = new byte[BUFFER_SIZE];
        int responseBufferPos = 0;

        using (new TimedLog("StreamingProxy\tTotal read and write"))
        {
            while ((dataReceived = this._PipeStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
            {
                // If about to overflow, transmit the output buffer and restart
                int bufferSpaceLeft = BUFFER_SIZE - responseBufferPos;

                if (bufferSpaceLeft < dataReceived)
                {
                    Buffer.BlockCopy(buffer, 0, outputBuffer, responseBufferPos, bufferSpaceLeft);

                    using (new TimedLog("StreamingProxy\tWrite " + BUFFER_SIZE + " to response"))
                    {
                        context.Response.OutputStream.Write(outputBuffer, 0, BUFFER_SIZE);
                        responseBuffer.Write(outputBuffer, 0, BUFFER_SIZE);
                        totalBytesWritten += BUFFER_SIZE;
                    }

                    // Reset the output buffer and copy the bytes that were not sent
                    responseBufferPos = 0;
                    int bytesLeftOver = dataReceived - bufferSpaceLeft;
                    Buffer.BlockCopy(buffer, bufferSpaceLeft, outputBuffer, 0, bytesLeftOver);
                    responseBufferPos = bytesLeftOver;
                }


                else
                {
                    Buffer.BlockCopy(buffer, 0, outputBuffer, responseBufferPos, dataReceived);
                    responseBufferPos += dataReceived;
                }
            }

            // If some data is left in the output buffer, send it
            if (responseBufferPos > 0)
            {
                using (new TimedLog("StreamingProxy\tWrite " + responseBufferPos + " to response"))
                {
                    context.Response.OutputStream.Write(outputBuffer, 0, responseBufferPos);
                    responseBuffer.Write(outputBuffer, 0, responseBufferPos);
                    totalBytesWritten += responseBufferPos;
                }
            }
        }

        Log.WriteLine("StreamingProxy\tSocket read " + this._PipeStream.TotalWrite +
            " bytes and response written " + totalBytesWritten + " bytes");
        _PipeStream.Dispose();
        return totalBytesWritten;
    }

    private void ProduceResponseHeader(HttpWebResponse response, HttpContext context,
        int cacheDuration, out string contentLength, out string contentEncoding)
    {
        // Produce cache headers for response caching
        if (cacheDuration > 0)
            HttpHelper.CacheResponse(context, cacheDuration);
        else
            HttpHelper.DoNotCacheResponse(context);

        // If content length is not specified, the response will be sent as Transfer-Encoding: chunked
        contentLength = response.GetResponseHeader("Content-Length");
        if (!string.IsNullOrEmpty(contentLength))
            context.Response.AppendHeader("Content-Length", contentLength);

        // If the downloaded data is compressed, Content-Encoding will be either gzip or deflate
        contentEncoding = response.GetResponseHeader("Content-Encoding");
        if (!string.IsNullOrEmpty(contentEncoding))
            context.Response.AppendHeader("Content-Encoding", contentEncoding);


        context.Response.ContentType = response.ContentType;
    }

    private void ReadData()
    {
        byte[] buffer = new byte[BUFFER_SIZE];
        int dataReceived;
        int totalBytesFromSocket = 0;

        using (new TimedLog("StreamingProxy\tTotal Read from socket"))
        {
            try
            {
                while ((dataReceived = this._ResponseStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
                {
                    this._PipeStream.Write(buffer, 0, dataReceived);
                    totalBytesFromSocket += dataReceived;
                }
            }
            catch (Exception x)
            {
                Log.WriteLine(x.ToString());
            }
            finally
            {
                Log.WriteLine("Total bytes read from socket " + totalBytesFromSocket + " bytes");
                this._ResponseStream.Dispose();
                this._PipeStream.Flush();
            }
        }
    }

    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
    {
        string url = context.Request["url"];
        int cacheDuration = Convert.ToInt32(context.Request["cache"] ?? "0");
        string contentType = context.Request["type"];

        if (cacheDuration > 0)
        {
            if (context.Cache[url] != null)
            {
                // We have the response to this URL already cached
                SyncResult result = new SyncResult();
                result.Context = context;


                result.Content = context.Cache[url] as CachedContent;
                return result;
            }
        }

        HttpWebRequest request = HttpHelper.CreateScalableHttpWebRequest(url);

        // As we will stream the response, we don't want to automatically
        // decompress the content when the source sends compressed content
        request.AutomaticDecompression = DecompressionMethods.None;

        if (!string.IsNullOrEmpty(contentType))
            request.ContentType = contentType;

        AsyncState state = new AsyncState();
        state.Context = context;
        state.Url = url;
        state.CacheDuration = cacheDuration;
        state.Request = request;

        return request.BeginGetResponse(cb, state);
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        if (result.CompletedSynchronously)
        {
            // Content is already available in the cache and can be delivered from cache
            SyncResult syncResult = result as SyncResult;
            syncResult.Context.Response.ContentType = syncResult.Content.ContentType;
            syncResult.Context.Response.AppendHeader("Content-Encoding", syncResult.Content.ContentEncoding);
            syncResult.Context.Response.AppendHeader("Content-Length", syncResult.Content.ContentLength);

            syncResult.Content.Content.Seek(0, SeekOrigin.Begin);
            syncResult.Content.Content.WriteTo(syncResult.Context.Response.OutputStream);
        }
        else
        {
            // Content is not available in cache and needs to be downloaded from the external source
            AsyncState state = result.AsyncState as AsyncState;
            state.Context.Response.Buffer = false;
            HttpWebRequest request = state.Request;

            using (HttpWebResponse response = request.EndGetResponse(result) as HttpWebResponse)
            {
                this.DownloadData(request, response, state.Context, state.CacheDuration);
            }
        }


    }
}
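For reference, handlers such as the two proxies above are normally registered in the application's web.config. The fragment below is an illustrative sketch only; the .ashx paths are assumptions, and the type names would need to match the actual classes and assemblies of the project:

```xml
<configuration>
  <system.web>
    <httpHandlers>
      <!-- Hypothetical mappings; adjust paths and type names as needed -->
      <add verb="GET" path="RegularProxy.ashx" type="RegularProxy" />
      <add verb="GET" path="StreamingProxy.ashx" type="SteamingProxy" />
    </httpHandlers>
  </system.web>
</configuration>
```

A client would then request, for example, RegularProxy.ashx with the query-string parameters the code reads (url, cache, type) to fetch a remote resource and cache it for the given number of minutes.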

7.2 Screen Shots:

Screen: Plain HTTP to Regular


Screen: Plain HTTP to Stream


Screen: JSON to Regular


Screen: JSON to Stream


Screen: File downloading


Screen: Report


8. CONCLUSION

In this project we have focused on the average download time of each user in a P2P network. Given the heavy usage of network resources by P2P applications in the current Internet, it is highly desirable to improve network efficiency by reducing each user's download time. In contrast to the commonly held practice of focusing on the notion of average capacity, we have shown that both the spatial heterogeneity and the temporal correlations in the service capacity can significantly increase the average download time of the users in the network, even when the average capacity of the network remains the same.

