DATA SECURITY IN LOCAL NETWORK USING
DISTRIBUTED FIREWALL
A SEMINAR REPORT
Submitted by
ANAND KUMAR
in partial fulfillment of requirement of the Degree
of
Bachelor of Technology (B.Tech)
IN
COMPUTER SCIENCE AND ENGINEERING
SCHOOL OF ENGINEERING
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
KOCHI- 682022
AUGUST 2008
DIVISION OF COMPUTER SCIENCE AND ENGINEERING
SCHOOL OF ENGINEERING
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
KOCHI-682022
Certificate
Certified that this is a bonafide record of the seminar entitled
“DATA SECURITY IN LOCAL NETWORK USING DISTRIBUTED
FIREWALL” presented by the following student
ANAND KUMAR
of the VII semester, Computer Science and Engineering in the year 2008 in partial
fulfillment of the requirements for the award of the Degree of Bachelor of Technology in
Computer Science and Engineering of Cochin University of Science and Technology.
Ms. Latha R. Nair. Dr. David Peter S.
Seminar Guide Head of Division
Date :
Acknowledgement
Many people have contributed to the success of this seminar. Although a single sentence hardly
suffices, I would like to thank Almighty God for blessing us with His grace. I extend my
sincere and heartfelt thanks to Dr. David Peter, Head of Department, Computer
Science and Engineering, for providing us the right ambience for carrying out this work. I
am profoundly indebted to my seminar guide, Ms. Latha R. Nair, for innumerable acts of
timely advice and encouragement, and I sincerely express my gratitude to her.
I express my immense pleasure and thankfulness to all the teachers and staff of the
Department of Computer Science and Engineering, CUSAT for their cooperation and
support.
Last but not least, I thank all others, especially my classmates, who in one way or
another helped me in the successful completion of this work.
ANAND KUMAR
ABSTRACT
Today, computers and networking are inseparable. A number of confidential
transactions occur every second, and computers today are used mostly for the transmission
rather than the processing of data. Network security is therefore needed to prevent hacking of data
and to provide authenticated data transfer. Network security can be achieved with a firewall.
Conventional firewalls rely on the notions of restricted topology and controlled entry
points to function. Restrictions on network topology, difficulty in filtering certain
protocols, end-to-end encryption problems, and a few other issues led to the evolution
of distributed firewalls.
Distributed firewalls secure the network by protecting critical network
endpoints, exactly where hackers want to penetrate. They filter traffic from both the Internet
and the internal network, because the most destructive and costly hacking attacks still
originate from within the organization. They provide virtually unlimited scalability. In
addition, they overcome the single-point-of-failure problem presented by the perimeter
firewall.
Some problems with conventional firewalls that led to distributed
firewalls are as follows.
They depend on the topology of the network.
They do not protect the network from internal attacks.
They are unable to handle protocols like FTP and RealAudio.
They have a single entry point, whose failure leads to problems.
They are unable to stop spoofed transmissions (i.e., those using false source addresses).
Distributed firewalls solve these problems while protecting critical network endpoints,
exactly where hackers want to penetrate. They filter traffic from both the internal network and
the Internet, because the most destructive and costly hacking attacks still originate within the
organization.
TABLE OF CONTENTS
Chapter No. Title
LIST OF FIGURES
1 Introduction
2 Policy and Identifiers
3 Distributed Firewall
3.1 Standard Firewall Example
3.2 Distributed Firewall Example
4 KeyNote
5 Implementation
5.1 Kernel Extensions
5.2 Policy Device
5.3 Policy Daemon
5.4 Example Scenario
6 Work in Development
7 Related Work
8 Conclusion
9 References
LIST OF FIGURES
NO: NAME
1 Architecture of Distributed Firewall
2 Architecture of standard firewall
3 Connection to the Web Server in standard firewall
4 Connection to the Intranet in standard firewall
5 Distributed Firewall
6 Connection to the Web Server in distributed firewall
7 Connection to the Intranet in distributed firewall
8 Interactions with KeyNote
9 Graphical representation with all components
10 Filtering connect( ) and accept( ) system calls
1. Introduction
Conventional firewalls rely on the notions of restricted topology and controlled
entry points to function. More precisely, they rely on the assumption that everyone on
one side of the entry point--the firewall--is to be trusted, and that anyone on the other side
is, at least potentially, an enemy. The vastly expanded Internet connectivity in recent
years has called that assumption into question. So-called "extranets" can allow outsiders
to reach the "inside" of the firewall; on the other hand, telecommuters' machines that use
the Internet for connectivity need protection when encrypted tunnels are not in place.
While this model worked well for small to medium size networks, several trends
in networking threaten to make it obsolete:
Due to the increasing line speeds and the more computation intensive protocols
that a firewall must support (especially IPsec), firewalls tend to become
congestion points. This gap between processing and networking speeds is likely to
increase, at least for the foreseeable future; while computers (and hence firewalls)
are getting faster, the combination of more complex protocols and the tremendous
increase in the amount of data that must be passed through the firewall has been
and likely will continue to outpace Moore’s Law.
There exist protocols, and new protocols are designed, that are difficult to process
at the firewall, because the latter lacks certain knowledge that is readily available
at the endpoints. FTP and RealAudio are two such protocols. Although there
exist application-level proxies that handle such protocols, such solutions are
viewed as architecturally “unclean” and in some cases too invasive.
The assumption that all insiders are trusted has not been valid for a long time.
Specific individuals or remote networks may be allowed access to all or parts of
the protected infrastructure (extranets, telecommuting, etc.). Consequently, the
traditional notion of a security perimeter can no longer hold unmodified; for
example, it is desirable that telecommuters’ systems comply with the corporate
security policy.
Worse yet, it has become trivial for anyone to establish a new, unauthorized entry
point to the network without the administrator’s knowledge and consent. Various
forms of tunnels, wireless, and dial-up access methods allow individuals to
establish backdoor access that bypasses all the security mechanisms provided by
traditional firewalls. While firewalls are in general not intended to guard against
misbehavior by insiders, there is a tension between internal needs for more
connectivity and the difficulty of satisfying such needs with a centralized firewall.
IPsec is a protocol suite, recently standardized by the IETF, that provides network-layer
security services such as packet confidentiality, authentication, data integrity, replay
protection, and automated key management.
This is an artifact of firewall deployment: internal traffic that is not seen by the firewall
cannot be filtered; as a result, internal users can mount attacks on other users and
networks without the firewall being able to intervene. If firewalls were placed
everywhere, this would not be necessary.
Large (and even not-so-large) networks today tend to have a large number of
entry points (for performance, failover, and other reasons). Furthermore, many
sites employ internal firewalls to provide some form of compartmentalization.
This makes administration particularly difficult, both from a practical point of
view and with regard to policy consistency, since no unified and comprehensive
management mechanism exists.
End-to-end encryption can also be a threat to firewalls, as it prevents them from
looking at the packet fields necessary to do filtering. Allowing end-to-end
encryption through a firewall implies considerable trust to the users on behalf of
the administrators.
Finally, there is an increasing need for finer-grained (and even application
specific) access control which standard firewalls cannot readily accommodate
without greatly increasing their complexity and processing requirements.
Other trends are also threatening firewalls. For example, some machines need more
access to the outside than do others. Conventional firewalls can do this, but only with
difficulty, especially as internal IP addresses change. End-to-end encryption is another
threat, since the firewall generally does not have the necessary keys to peek through the
encryption.
More subtly, firewalls are a mechanism for policy control. That is, they permit a site's
administrator to set a policy on external access. Just as file permissions enforce an
internal security policy, a firewall can enforce an external security policy.
Despite their shortcomings, firewalls are still useful in providing some measure of
security. The key reason that firewalls are still useful is that they provide an obvious,
mostly hassle-free, mechanism for enforcing network security policy. For legacy
applications and networks, they are the only mechanism for security. While newer
protocols typically have some provisions for security, older protocols (and their
implementations) are more difficult, often impossible, to secure. Furthermore, firewalls
provide a convenient first-level barrier that allows quick responses to newly-discovered
bugs.
To address the shortcomings of firewalls while retaining their advantages, the
concept of a distributed firewall has been proposed. In distributed firewalls, security policy is defined
centrally but enforced at each individual network endpoint (hosts, routers, etc.). The
system propagates the central policy to all endpoints. Policy distribution may take various
forms. For example, it may be pushed directly to the end systems that have to enforce it,
it may be provided to the users in the form of credentials that they use when trying to
communicate with the hosts, or it may be a combination of both. The extent of mutual
trust between endpoints is specified by the policy.
Distributed firewalls are host-resident security software applications that protect the
enterprise network's servers and end-user machines against unwanted intrusion. They
offer the advantage of filtering traffic from both the Internet and the internal network.
This enables them to prevent hacking attacks that originate from both the Internet and the
internal network. This is important because the most costly and destructive attacks still
originate from within the organization.
Distributed firewalls rest on three notions: (a) a policy language that states what sort of
connections are permitted or prohibited, (b) any of a number of system management
tools, such as Microsoft's SMS or ASD and (c) IPSEC, the network-level encryption
mechanism for TCP/IP.
The basic idea is simple. A compiler translates the policy language into some internal
format. The system management software distributes this policy file to all hosts that are
protected by the firewall. And incoming packets are accepted or rejected by each "inside"
host, according to both the policy and the cryptographically-verified identity of each
sender.
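The per-host decision described above can be sketched in a few lines. This is a hypothetical illustration, not the report's actual implementation (which lives in the OpenBSD kernel): the `Policy` and `Packet` names, the rule format, and the certificate identifiers are all invented for the example. Each host applies the same centrally distributed rules to the cryptographically verified identity of the sender, with a default-deny fallback.

```python
# Sketch of per-host enforcement: every protected host holds a copy of the
# centrally compiled policy and consults it, together with the verified
# identity of the sender, for each incoming packet. All names are
# illustrative, not from the report.

from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    sender_id: str   # identity established cryptographically (e.g. via IPsec)
    dst_port: int

class Policy:
    def __init__(self, rules):
        # rules: list of (sender_id or "*", dst_port or "*", "accept"/"reject"),
        # matched in order, first match wins
        self.rules = rules

    def decide(self, pkt):
        for sender, port, action in self.rules:
            if sender in (pkt.sender_id, "*") and port in (pkt.dst_port, "*"):
                return action
        return "reject"   # default-deny when no rule matches

# The management software ships the same compiled policy to every host.
policy = Policy([("cert:mail-gw", 25, "accept"),
                 ("*", 25, "reject"),
                 ("*", 80, "accept")])

print(policy.decide(Packet("cert:mail-gw", 25)))      # accept
print(policy.decide(Packet("cert:random-host", 25)))  # reject
```

Note that the decision is keyed on the sender's certificate identity rather than its IP address, which is what makes the rule meaningful regardless of where the packet entered the network.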
To implement a distributed firewall, three components are necessary:
A language for expressing policies and resolving requests. In their simplest form,
policies in a distributed firewall are functionally equivalent to packet filtering
rules. However, it is desirable to use an extensible system (so other types of
applications and security checks can be specified and enforced in the future). The
language and resolution mechanism should also support credentials, for
delegation of rights and authentication purposes.
A mechanism for safely distributing security policies. This may be the IPsec key
management protocol when possible, or some other protocol. The integrity of the
policies transferred must be guaranteed, either through the communication
protocol or as part of the policy object description (e.g., they may be digitally
signed).
A mechanism that applies the security policy to incoming packets or connections,
providing the enforcement part.
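The second component above requires that shipped policy objects be integrity-protected. As a minimal sketch, the following uses an HMAC with a pre-shared key as a stand-in for a real digital signature scheme; the key, the JSON encoding, and the function names are all assumptions made for the example, not part of the report's design.

```python
# Sketch of integrity-protected policy distribution: the management station
# publishes a policy blob plus an authentication tag, and each end host
# verifies the tag before installing the policy. An HMAC stands in for a
# digital signature here; all names are illustrative.

import hmac, hashlib, json

KEY = b"management-station-key"   # hypothetical pre-shared key

def publish_policy(rules):
    blob = json.dumps(rules, sort_keys=True).encode()
    tag = hmac.new(KEY, blob, hashlib.sha256).hexdigest()
    return blob, tag   # both are shipped to every end host

def install_policy(blob, tag):
    expected = hmac.new(KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("policy rejected: integrity check failed")
    return json.loads(blob)

blob, tag = publish_policy([["cert:mail-gw", 25, "accept"]])
print(install_policy(blob, tag))   # installs cleanly

# Tampering in transit is detected:
try:
    install_policy(blob + b" ", tag)
except ValueError as e:
    print(e)
```

With a true signature scheme (as with signed KeyNote credentials later in the report), even an untrusted file-transfer channel suffices for distribution, since hosts can verify authenticity themselves.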
This is by no means a universal trait, and even today there are protocols designed
with no security review.
Our prototype implementation uses the KeyNote trust-management system, which
provides a single, extensible language for expressing policies and credentials. Credentials
in KeyNote are signed, thus simple file-transfer protocols may be used for policy
distribution. We also make use of the IPsec stack in the OpenBSD system to authenticate
users, protect traffic, and distribute credentials. The distribution of credentials and user
authentication occurs as part of the Internet Key Exchange (IKE) [12] negotiation.
Alternatively, policies may be distributed from a central location when a policy update is
performed, or they may be fetched as-needed (from a webserver, X.500 directory, or
through some other protocol).
Since KeyNote allows delegation, decentralized administration becomes feasible
(establishing a hierarchy or web of administration, for the different departments or even
individual systems). Users are also able to delegate authority to access machines or
services they themselves have access to. Although this may initially seem counter-
intuitive (after all, firewalls embody the concept of centralized control), in our experience
users can almost always bypass a firewall’s filtering mechanisms, usually by the most
insecure and destructive way possible (e.g., giving away their password, setting up a
proxy or login server on some other port, etc.). Thus, it is better to allow for some
flexibility in the system, as long as the users follow the overall policy. Also note that it is
possible to “turn off” delegation.
Thus, the overall security policy relevant to a particular user and a particular end
host is the composition of the security policy “pushed” to the end host, any credentials
given to the user, and any credentials stored in a central location and retrieved on-
demand. Finally, we implement the mechanism that enforces the security policy at a
TCP-connection granularity. In our implementation, the mechanism is split in two parts,
one residing in the kernel and the other in a user-level process.
2. Policies and Identifiers
Many possible policy languages can be used, including file-oriented schemes
similar to Firmato, the GUIs that are found on most modern commercial firewalls, and
general policy languages such as KeyNote. The exact nature is not crucial, though clearly
the language must be powerful enough to express the desired policy. A sample is shown
in Figure.
What is important is how the inside hosts are identified. Today's firewalls rely on
topology; thus, network interfaces are designated "inside", "outside", "DMZ", etc. We
abandon this notion (but see Section), since distributed firewalls are independent of
topology.
A second common host designator is IP address. That is, a specified IP address
may be fully trusted, able to receive incoming mail from the Internet, etc. Distributed
firewalls can use IP addresses for host identification, though with a reduced level of
security.
Our preferred identifier is the name in the cryptographic certificate used with
IPSEC. Certificates can be a very reliable unique identifier. They are independent of
topology; furthermore, ownership of a certificate is not easily spoofed. If a machine is
granted certain privileges based on its certificate, those privileges can apply regardless of
where the machine is located physically.
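The point that certificate-granted privileges are topology-independent can be made concrete with a small sketch. The certificate names and privilege strings below are hypothetical; the essential design choice is that the lookup deliberately ignores the source IP address.

```python
# Sketch: privileges are keyed on the name in a host's certificate rather
# than on its IP address, so they follow the machine wherever it connects
# from. All names are illustrative.

privileges = {
    "cn=mail-gw.example.com":  {"relay_mail", "receive_external_smtp"},
    "cn=laptop17.example.com": {"intranet_web"},
}

def allowed(cert_name, privilege, src_ip=None):
    # src_ip is deliberately ignored: topology does not matter here
    return privilege in privileges.get(cert_name, set())

# The telecommuter's laptop keeps its privileges from any address:
print(allowed("cn=laptop17.example.com", "intranet_web", src_ip="203.0.113.9"))  # True
print(allowed("cn=unknown.example.com", "intranet_web"))                         # False
```

By contrast, an IP-based rule would either have to enumerate every address the laptop might ever use, or (as the report notes) accept a reduced level of security.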
In a different sense, policies can be "pulled" dynamically by the end system. For
example, a license server or a security clearance server can be asked if a certain
communication should be permitted. A conventional firewall could do the same, but it
lacks important knowledge about the context of the request. End systems may know
things like which files are involved, and what their security levels might be. Such
information could be carried over a network protocol, but only by adding complexity.
3. Distributed Firewalls
In a typical organizational environment, individuals are not necessarily the
administrators of the computers they use. Instead, to simplify system administration and
to permit some level of central control, a system management package is used to
administer individual machines. Patches can be installed, new software distributed, etc.
We use the same mechanisms, which are likely present in any event, to control a
distributed firewall.
Policy is enforced by each individual host that participates in a distributed
firewall. The security administrator--who is no longer necessarily the "local"
administrator, since we are no longer constrained by topology--defines the security policy
in terms of host identifiers. The resulting policy (probably, though not necessarily,
compiled to some convenient internal format) is then shipped out, much like any other
change. This policy file is consulted before processing incoming or outgoing messages, to
verify their compliance. It is most natural to think of this happening at the network or
transport layers, but policies and enforcement can equally well apply to the application
layer. For example, some sites might wish to force local Web browsers to disable Java or
JavaScript.
Fig.1: Architecture of Distributed Firewall
Policy enforcement is especially useful if the peer host is identified by a
certificate. If so, the local host has a much stronger assurance of its identity than in a
traditional firewall. In the latter case, all hosts on the inside are in some sense equal. If
any such machines are subverted, they can launch attacks on hosts that they would not
normally talk to, possibly by impersonating trusted hosts for protocols such as rlogin.
With a distributed firewall, though, such spoofing is not possible; each host's identity is
cryptographically assured.
This is most easily understood by contrasting it to traditional packet filters.
Consider the problem of electronic mail. Because of a long-standing history of security
problems in mailers, most sites with firewalls let only a few, designated hosts receive
mail from the outside. They in turn will relay the mail to internal mail servers. Traditional
firewalls would express this by a rule that permitted SMTP (port 25) connections to the
internal mail gateways; access to other internal hosts would be blocked. On the inside of
the firewall, though, access to port 25 is unrestricted.
With a distributed firewall, all machines have some rule concerning port 25. The
mail gateway permits anyone to connect to that port; other internal machines, however,
permit contact only from the mail gateway, as identified by its certificate. Note how
much stronger this protection is: even a subverted internal host cannot exploit possible
mailer bugs on the protected machines.
Distributed firewalls have other advantages as well. The most obvious is that
there is no longer a single chokepoint. From both a performance and an availability
standpoint, this is a major benefit. Throughput is no longer limited by the speed of the
firewall; similarly, there is no longer a single point of failure that can isolate an entire
network. Some sites attempt to solve these problems by using multiple firewalls; in many
cases, though, that redundancy is purchased only at the expense of an elaborate (and
possibly insecure) firewall-to-firewall protocol.
A second advantage is more subtle. Today's firewalls don't have certain
knowledge of what a host intends. Instead, they have to rely on externally-visible features
of assorted protocols. Thus, an incoming TCP packet is sometimes presumed legitimate if
it has the "ACK" bit set, since such a packet can only be legitimate if it is part of an
ongoing conversation--a conversation whose initiation was presumably allowed by the
firewall. But spoofed ACK packets can be used as part of "stealth scanning". Similarly, it
is hard for firewalls to treat UDP packets properly, because they cannot tell if they are
replies to outbound queries, and hence legal, or if they are incoming attacks. The sending
host, however, knows. Relying on the host to make the appropriate decision is therefore
more secure.
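The UDP point can be illustrated with a toy reply-matching table: the host remembers the queries it sent, so it can distinguish a genuine reply from an unsolicited probe. The data structures and function names below are assumptions made for the sketch.

```python
# Sketch: the end host knows which UDP packets are replies to its own
# queries, something a perimeter firewall cannot know. Illustrative only.

outstanding = set()   # (local_port, (remote_addr, remote_port)) of queries we sent

def send_query(local_port, remote):
    outstanding.add((local_port, remote))

def accept_udp(local_port, remote):
    if (local_port, remote) in outstanding:
        outstanding.discard((local_port, remote))
        return True          # a reply to our own query
    return False             # unsolicited: drop as a possible attack

send_query(5353, ("198.51.100.7", 53))
print(accept_udp(5353, ("198.51.100.7", 53)))   # True: a real reply
print(accept_udp(5353, ("203.0.113.9", 53)))    # False: an unsolicited packet
```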
This advantage is even clearer when it comes to protocols such as FTP. By
default, FTP clients use the PORT command to specify the port number used for the data
channel; this port is for an incoming call that should be permitted, an operation that is
generally not permitted through a firewall. Today's firewalls--even the stateful packet
filters--generally use an application-level gateway to handle such commands. With a
distributed firewall, the host itself knows when it is listening for a particular data
connection, and can reject random probes.
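The FTP case works the same way: the host records the data port it announced in a PORT command, and accepts an incoming data connection only on that port. This is a hypothetical sketch of the idea, not the report's implementation.

```python
# Sketch: with a distributed firewall, the host accepts an incoming FTP
# data connection only while it is actually expecting one on that port,
# so random probes to high ports are rejected. Illustrative only.

expected_data_ports = set()

def ftp_port_command(port):
    expected_data_ports.add(port)   # client announced this data port

def accept_data_conn(port):
    if port in expected_data_ports:
        expected_data_ports.discard(port)
        return True                 # the data channel we asked for
    return False                    # random probe: reject

ftp_port_command(50210)
print(accept_data_conn(50210))  # True: expected data channel
print(accept_data_conn(50999))  # False: random probe
```

A perimeter firewall can only approximate this knowledge with an application-level gateway that parses the FTP control channel; the endpoint has it for free.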
The most important advantage, though, is that distributed firewalls can protect
hosts that are not within a topological boundary. Consider a telecommuter who uses the
Internet both generically and to tunnel in to a corporate net. How should this machine be
protected? A conventional approach can protect the machine while tunneled. But that
requires that generic Internet use be tunneled into the corporate network and then back
out the Internet. Apart from efficiency considerations, such use is often in violation of
corporate guidelines. Furthermore, there is no protection whatsoever when the tunnel is
not set up. By contrast, a distributed firewall protects the machine all of the time,
regardless of whether or not a tunnel is set up. Corporate packets, authenticated by
IPSEC, are granted more privileges; packets from random Internet hosts can be rejected.
And no triangle routing is needed.
3.1 STANDARD FIREWALL EXAMPLE
Fig 2: Architecture of standard firewall
This is an example of the architecture of a standard firewall. The example contains internal and
external hosts, the Internet, a firewall, the corporate network, a web server, and an intranet web
server. In the internal part of the network there are two hosts, one trusted (internal host 1) and
one untrusted (internal host 2). Both hosts are connected to the corporate network; the web server
and the intranet web server are also connected to the network. The network is connected to the
Internet, with the firewall sitting between them. There is also an external host connected to the
Internet.
Fig 3: Connection to the Web Server
In this figure, we can see that when the internal hosts want to connect to the web server,
they can, whether they are trusted or not. The firewall also allows the external host to
connect to the web server, so the external host can reach it.
Fig 4: Connection to the Intranet
In this figure, we can see that the internal hosts can connect to the intranet web server
(company private); again, it does not matter whether they are trusted or not. For the
external host, however, the intranet web server is blocked by the firewall: external hosts
are not allowed to connect to it. Here we can see the disadvantage of the conventional
firewall: internal hosts are allowed to connect to the intranet web server even though one
of them is untrusted, and there is no blocking rule for internal hosts.
3.2 DISTRIBUTED FIREWALL EXAMPLE
Fig 5: Distributed Firewall
This figure is an example of the architecture of a distributed firewall. The example contains
internal and external hosts, the Internet, the corporate network, a web server, an intranet web
server, and an internal host on the other side of the network that communicates via telnet.
Here the firewall policy is distributed to all the systems and the web server; a host can
connect to a server or system only if the policy allows it. In the internal part of the network
there are two hosts, one trusted (internal host 1) and one untrusted (internal host 2). Both
hosts are connected to the corporate network; the web server and the intranet web server are
also connected to the network. The network is connected to the Internet, with a firewall
between them. There is also an external host and a telecommuting internal host connected
to the Internet.
Fig 6: Connection to Web Server
In this figure, we can see that when the internal hosts want to connect to the web server,
they can. The telecommuting internal host on the external side of the network and the
external host can also connect to the web server, because they are allowed to connect to it.
Fig 7: Connection to Intranet
In this figure, we can see that only internal host 1 and the internal host on the external side
of the network can connect to the intranet web server (company private), because only these
two are trusted; the others are untrusted. So it is no longer automatic that an internal host
can connect to the private server: it can connect only if the firewall policy allows it. This
gives the advantage of protecting the systems from internal untrusted hosts.
4. KEYNOTE
Trust Management is a relatively new approach to solving the authorization and
security policy problem. Making use of public key cryptography for authentication, trust
management dispenses with unique names as an indirect means for performing access
control. Instead, it uses a direct binding between a public key and a set of authorizations,
as represented by a safe programming language. This results in an inherently
decentralized authorization system with sufficient expressibility to guarantee flexibility
in the face of novel authorization scenarios.
Figure 8: Application Interactions with KeyNote. The Requester is typically a user
that authenticates through some application-dependent protocol, and optionally
provides credentials. The Verifier needs to determine whether the Requester is
allowed to perform the requested action. It is responsible for providing to KeyNote
all the necessary information, the local policy, and any credentials. It is also
responsible for acting upon KeyNote’s response.
One instance of a trust-management system is KeyNote. KeyNote provides a
simple notation for specifying both local security policies and credentials that can be sent
over an untrusted network. Policies and credentials contain predicates that describe the
trusted actions permitted by the holders of specific public keys (otherwise known as
principals). Signed credentials, which serve the role of “certificates,” have the same
syntax as policy assertions, but are also signed by the entity delegating the trust.
Applications communicate with a “KeyNote evaluator” that interprets KeyNote
assertions and returns results to applications, as shown in Figure 8. However, different
hosts and environments may provide a variety of interfaces to the KeyNote evaluator
(library, UNIX daemon, kernel service, etc.).
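The verifier/evaluator interaction of Figure 8 can be sketched as a single function call: the verifier supplies the local policy, any credentials, and an action environment, and acts on the returned compliance value. This toy model only mimics the shape of the interaction; it is not the real KeyNote language or API, and the `app_domain`/`local_port` attribute names are illustrative.

```python
# Toy stand-in for a KeyNote evaluator call: assertions are modeled as
# (predicate, value) pairs, and the evaluator returns the highest
# compliance value any assertion grants for the given action environment.

def keynote_evaluate(policy, credentials, action_env, return_values):
    # return_values is ordered from least to most permissive, e.g.
    # ["reject", "accept"]; the default answer is the least permissive.
    best = return_values[0]
    for predicate, value in policy + credentials:
        if predicate(action_env) and \
           return_values.index(value) > return_values.index(best):
            best = value
    return best

# Hypothetical policy: allow connections to local port 22 in the
# "IPsec policy" application domain.
policy = [(lambda env: env["app_domain"] == "IPsec policy"
                       and env["local_port"] == "22", "accept")]

env = {"app_domain": "IPsec policy", "local_port": "22",
       "remote": "198.51.100.7"}
print(keynote_evaluate(policy, [], env, ["reject", "accept"]))  # accept
```

As in real KeyNote, the return value need not be binary; an application could pass an ordered list such as `["reject", "log-and-accept", "accept"]` and act on the graded answer.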
A KeyNote evaluator accepts as input a set of local policy and credential
assertions, and a set of attributes, called an “action environment,” that describes a
proposed trusted action associated with a set of public keys (the requesting principals).
The KeyNote evaluator determines whether proposed actions are consistent with local
policy by applying the assertion predicates to the action environment. The KeyNote
evaluator can return values other than simply true and false, depending on the application
and the action environment definition. An important concept in KeyNote (and, more
generally, in trust management) is “monotonicity”. This simply means that given a set of
credentials associated with a request, if there is any subset that would cause the request to
be approved, then the complete set will also cause the request to be approved. This greatly
simplifies both request resolution (even in the presence of conflicts) and credential
management. Monotonicity is enforced by the KeyNote language (it is not possible to
write non-monotonic policies).
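Monotonicity can be demonstrated with a toy model in which each credential simply grants a set of actions; the model is an illustration of the property, not of the KeyNote language itself.

```python
# Toy demonstration of monotonicity: approval is "any credential grants
# the action", so adding credentials can only add authority, never
# retract it. Illustrative model, not the KeyNote language.

from itertools import combinations

def evaluate(credentials, action):
    # monotone by construction: approved if ANY credential grants the action
    return any(action in granted for granted in credentials)

creds = [{"connect:22"}, {"connect:80"}, set()]
full = evaluate(creds, "connect:22")

# Check the defining property: if any subset approves, the full set does too.
for r in range(1, len(creds) + 1):
    for subset in combinations(creds, r):
        if evaluate(list(subset), "connect:22"):
            assert full   # the remaining credentials cannot revoke approval
print(full)  # True
```

This is why, as the text says, request resolution stays simple even when many credentials are presented: no credential can act as a veto.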
It is worth noting here that although KeyNote uses cryptographic keys as principal
identifiers, other types of identifiers may also be used. For example, usernames may be
used to identify principals inside a host. In this environment, delegation must be
controlled by the operating system (or some implicitly trusted application), similar to the
mechanisms used for transferring credentials in Unix or in capability-based systems.
Also, in the absence of cryptographic authentication, the identifier of the principal
requesting an action must be securely established. In the example of a single host, the
operating system can provide this information.
KeyNote-Version: 2
Authorizer: "POLICY"
Licensees: "rsa-hex:1023abcd"
Comment: Allow Licensee to connect to local port 23
(telnet) from Internal addresses only, or to port 22 (ssh)
from anywhere. Since this is a policy, no signature field