Distributed File Systems
• Objectives – to understand Unix network file sharing
• Contents – Installing NFS
– How To Get NFS Started
– The /etc/exports File
– Activating Modifications to the Exports File
– NFS And DNS
– Configuring The NFS Client
– Other NFS Considerations
• Practical – to share and mount NFS file systems
• Summary
NFS/DFS: An Overview
• Unix distributed filesystems are used to
– centralise administration of disks
– provide transparent file sharing across a network
• Three main systems:
– NFS: Network File System, developed by Sun Microsystems in 1984
– RFS: Remote File Sharing, developed by AT&T
– AFS: Andrew Filesystem, developed by Carnegie-Mellon University
• Unix NFS packages usually include client and server components
– A DFS server shares local files on the network
– A DFS client mounts shared files locally
– a Unix system can be a client, server or both depending on which commands are executed
• Can be fast in comparison to many other DFSs – very little overhead
– Simple and stable protocols
– Based on RPC (The R family and S family)
General Overview of NFS
• Developed by Sun Microsystems 1984
• Independent of operating system, network, and transport protocols.
• Available on many platforms including:
– Linux, Windows, OS/2, MVS, VMS, AIX, HP-UX, …
• Restrictions of NFS
– stateless open architecture
– Unix filesystem semantics not guaranteed
– No access to remote special files (devices, etc.)
• Restricted locking
– file locking is implemented through a separate lock daemon
• Industry standard is currently nfsV3 as default in– RedHat, SuSE, OpenBSD, FreeBSD, Slackware, Solaris, HP-UX, Gentoo
• Kernel NFS or UserSpace NFS
Three versions of NFS available
• Version 2:
– Supports files up to 4 GB (most commonly 2 GB)
– Requires the NFS server to successfully write data to its disks before the write request is considered successful
– Has a limit of 8 KB per read or write request (1 TCP window)
• Version 3 is the industry standard:
– Supports extremely large file sizes, up to 2^64 - 1 bytes (files up to 8 exabytes)
– Considers an NFS server data update successful when the data is written to the server's cache
– Negotiates the data limit per read or write request between client and server to a mutually decided optimal value
• Version 4 is coming:
– File locking and mounting are integrated in the NFS daemon and operate on a single, well-known TCP port, making network security easier
– Support for bundling requests from each client provides more efficient processing by the NFS server
– File locking is mandatory, whereas before it was optional
Important NFS Daemons
• portmap – the primary daemon upon which all the RPC services rely
– Manages connections for applications that use the RPC specification
– Listens on TCP port 111 for the initial connection
– Negotiates a range of TCP ports, usually above port 1024, for further communication
– You need to run portmap on both the NFS server and client.
• nfs (rpc.nfsd)
– Starts the RPC processes needed to serve shared NFS file systems
– Listens on TCP or UDP port 2049 (the port can vary)
– The nfs daemon needs to run on the NFS server only.
• nfslock (rpc.statd / rpc.lockd)
– Used to allow NFS clients to lock files on the server via RPC processes
– Uses a negotiated UDP/TCP port
– The nfslock daemon needs to run on both the NFS server and client.
• netfs
– Allows NFS clients to mount NFS filesystems from the server at boot
– The netfs service needs to run on the NFS client only.
The NFS Protocol Stack, aka VFS
• Layers: NFS and MOUNT sit on top of RPC, RPC on XDR, and XDR on the transport, network, link & physical layers
• Client-side daemons: biod, statd, lockd
• Server-side daemons: nfsd, mountd, statd, lockd
• RPC depends on PORTMAP, which runs on both client and server
Installing kernelNFS, Linux
• Check if NFS is installed with rpm
• Check if RPC portmap package installed rpm
• If not, install them; always begin with portmap
• If you are not running SuSE
– Install: portmap, nfs-utils, nfs-server (should be implemented in the kernel)
How To Get the kernelNFS server Started
• Activate the 3 necessary services for NFS at boot
– NFS server daemon
– NFS file locking
– RPC portmap
• Start the PORTMAPPER and NFS server
– Which starts all dependent services
– Whatever you do, always start PORTMAP first
• Check that services for NFS is running with rpcinfo
• In some Unixes you need to start them separately:
/etc/init.d/portmap start (or simply portmap(d))
/etc/init.d/nfs start (or simply nfs(d))
/etc/init.d/nfslock start (or simply nfslock(d))
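The start-up sequence above can be sketched as follows (init-script paths and package names follow Red Hat-style conventions and vary by distribution):

```shell
# Check that the required packages are installed
rpm -q portmap nfs-utils

# Always start the portmapper first, then the NFS services
/etc/init.d/portmap start
/etc/init.d/nfs start
/etc/init.d/nfslock start

# Verify that the RPC services are registered with the portmapper;
# the output should list portmapper, nfs, mountd and nlockmgr
rpcinfo -p localhost
```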
The /etc/exports File, Common Options
ro – read-only access
rw – read and write access
sync – write to disk when requested
wdelay – wait for sync
hide – don't show subdirectories that are part of other exports
no_all_squash – remote uid's & gid's remain the same as on the client
root_squash – remote root uid becomes anonymous on the server
no_root_squash – remote root equals the local root user
squash_uids – listed remote uid's & gid's are treated as the identity nobody
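A small /etc/exports sketch combining some of the options above (the paths, hostnames and network are hypothetical):

```shell
# /etc/exports
/usr/share   *(ro,sync)                            # read-only for everyone
/home        192.168.0.0/24(rw,sync,root_squash)   # read/write for the local net
/data        client1(rw,no_root_squash)            # trust root on client1 (use with care)
```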
More on Shared Directories
• If someone is using the shared directory, you will not be able to unshare it.
• Check if someone is accessing a share via RPC; the highlighted line in the output shows an active user
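One way to perform this check, as a sketch (run on the server):

```shell
# List clients that currently have our exported directories mounted
showmount -a

# List what this server currently exports
showmount -e

# Show the RPC services registered on this host
rpcinfo -p localhost
```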
The /etc/exports File, Squashing
• Sample exports file using map_static
• map_static file = /etc/squash.map
• Squashing changes a remote identity to a selectable local identity
# /etc/squash.map
# remote     local   comment
uid 0-100    -       # squash to user nobody
gid 0-100    -       # squash to group nobody
uid 1-200    1000    # map to uid 1000 - 1100
gid 1-200    500     # map to gid 500 - 600
uid 0-100    2001    # map individual user to uid 2001
gid 0-100    2001    # map individual user to gid 2001
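A hypothetical exports line wiring the map file in (path and network are illustrative; note that map_static is an option of the user-space NFS server and is not supported by all kernel implementations):

```shell
# /etc/exports
/home   192.168.0.0/24(rw,map_static=/etc/squash.map)
```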
Activating Modifications in Exports File
• Re-reading all entries in the /etc/exports file
– When no directories have been exported to NFS yet, the "exportfs -a" command is used
• After adding share(s) to the /etc/exports file
– When adding a share you can use the "exportfs -r" command to export only the new entries
• Deleting, Moving Or Modifying A Share
– In this case it is best to temporarily unexport the NFS directories using the "exportfs -ua" command, followed by the "exportfs -a" command.
• Temporarily export /usr/src to hosts on net 192.168.0.0
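The steps above, collected as a command sketch:

```shell
exportfs -a      # export (or re-read) everything listed in /etc/exports
exportfs -r      # re-export, picking up only new/changed entries
exportfs -ua     # unexport everything...
exportfs -a      # ...then export again after deleting/moving/modifying a share

# Temporarily export /usr/src to hosts on net 192.168.0.0
# (the /24 prefix length is an assumption)
exportfs 192.168.0.0/24:/usr/src
```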
Exercise - Sharing with NFS
• With one command, share /usr/share read-only for all clients in your net
• Permanently share /etc read-only for rosies and tokyo, and read/write for seoul
• List the file containing the permanent shares
• Two commands showing what your host has shared
• Check who has mounted your shared directories
• Check who has mounted directories on rosies
• Check the server NFS status
• From the server, with one command check that the NFS client has the portmapper running
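Possible answers, sketched with the hostnames from the exercise (rosies, tokyo, seoul) and an assumed 192.168.0.0/24 network:

```shell
# Share /usr/share read-only for all clients in your net (one command)
exportfs -o ro 192.168.0.0/24:/usr/share

# Permanently share /etc: add this line to /etc/exports, then re-export
#   /etc   rosies(ro) tokyo(ro) seoul(rw)
exportfs -r

# The file containing the permanent shares
cat /etc/exports

# Two commands showing what your host has shared
exportfs -v
showmount -e

# Who has mounted your shares / who has mounted directories on rosies
showmount -a
showmount -a rosies

# Server NFS status
nfsstat -s

# From the server, check that the NFS client has the portmapper running
rpcinfo -p clienthost
```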
The nfsstat Command
• Server statistics – a large table is printed after the command is issued
• Client statistics
• Server's number of file handles
– Usage information on the server's file handle cache, including the total number of lookups, and the number of hits and misses
– The server has a limited number of file handles, which can be tuned
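The corresponding invocations (output tables omitted; option availability varies between nfs-utils versions):

```shell
nfsstat -s       # server-side statistics
nfsstat -c       # client-side statistics
nfsstat -o fh    # server file handle cache: lookups, hits and misses
```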
Possible NFS Mount Options
auto – mount this when "mount -a" is used
defaults – rw, suid, dev, exec, auto, nouser, async
user – allow regular users to mount/umount
sync – use synchronous I/O (safest)
soft – give up and report an error if the server does not respond
hard – keep retrying until the server responds
retry=N – minutes to keep retrying the mount
bg/fg – retry mounting in the background or foreground
Exercise - Using mount with NFS
• What command will mount /usr/share from mash4077 on the local mount point /usr/share?
• How do I check what filesystems are mounted locally?
• Make a static mount on a01 of the exported "a02:/tmp" at "/mnt/nethome" via /etc/fstab:
• Manually mount exported a02:/usr/share as read only on a01:
• How can I show what is NFS-exported on the server?
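Hedged answers to the questions above, using the hostnames given in the exercise (mash4077, a01, a02):

```shell
# Mount /usr/share from mash4077 on the local /usr/share
mount -t nfs mash4077:/usr/share /usr/share

# Check what filesystems are mounted locally
mount            # or: df -h

# Static mount on a01: add this line to /etc/fstab, then run "mount -a"
#   a02:/tmp   /mnt/nethome   nfs   defaults   0 0

# Manually mount a02:/usr/share read-only (run on a01)
mount -t nfs -o ro a02:/usr/share /usr/share

# Show what is NFS-exported on the server
showmount -e a02
```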
NFS security
• NFS is inherently insecure– NFS can be run in encrypted mode which encrypts data over the network
– AFS is more appropriate for security-conscious sites
• User IDs must be co-ordinated across all platforms– UIDs and not user names are used to control file access (use LDAP or NIS)
– mismatched user id's cause access and security problems
• Fortunately root access is denied by default– over NFS root is mapped to user nobody
# mount | grep "/share"
mail:/share on /share
# id
uid=318(hawkeye) gid=318(hawkeye)
# touch /share/hawkeye
# ssh mail ls -l /share/hawkeye
-rwxr-xr-x 2 soonlee soonlee 0 Jan 11 11:21 /share/hawkeye
NFS Hanging
• Run NFS on a reliable network
• Avoid having NFS servers that NFS mount each other's filesystems or directories
• Always use the sync option whenever possible
• Mission critical computers shouldn't rely on an NFS server to operate
• Don't put NFS shares in the search path (PATH)
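The mount options listed earlier can be combined to limit the damage of a hang; a sketch (server name and paths are hypothetical):

```shell
# soft: fail with an error instead of hanging forever;
# bg: keep retrying the mount in the background;
# retry=5: give up on the initial mount after 5 minutes
mount -t nfs -o soft,bg,retry=5 server:/data /mnt/data

# For data you cannot afford to corrupt, prefer hard,intr over soft
mount -t nfs -o hard,intr server:/critical /mnt/critical
```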
NFS Hanging continued
• File Locking
– Known issues exist; test your applications carefully
• Nesting Exports– NFS doesn't allow you to export directories that are subdirectories of directories
that have already been exported unless they are on different partitions.
• Limiting "root" Access– no_root_squash
• Restricting Access to the NFS server
– You can add a user named "nfsuser" on the NFS client to squash access for all other users on that client
• Use nfsV3 if possible
NFS Firewall considerations• NFS uses many ports
– RPC uses TCP port 111
– The NFS server itself uses port 2049
– MOUNTD listens on a negotiated UDP/TCP port
– NLOCKMGR listens on a negotiated UDP/TCP port
– Expect that almost any TCP/UDP port above 1023 can be allocated for NFS
• NFS needs a STATEFUL firewall
– A stateful firewall will be able to deal with traffic that originates from inside a network and block traffic from outside
• SPI can demolish NFS
– Stateful packet inspection on cheaper routers/firewalls can misinterpret NFS traffic as DoS attacks and start dropping packets
• NFSSHELL
– A hacker tool that can compromise some NFS servers
– Written by Leendert van Doorn
• Use VPN and IPSEC tunnels
– With complex services like NFS, IPSEC or some kind of VPN should be considered when operating on untrusted networks.
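A sketch of firewall rules for the ports above, assuming mountd has been pinned to a fixed port (for example via MOUNTD_PORT=892 in /etc/sysconfig/nfs on Red Hat-style systems; the port number 892 is an assumption):

```shell
# Portmapper
iptables -A INPUT -p tcp --dport 111  -j ACCEPT
iptables -A INPUT -p udp --dport 111  -j ACCEPT
# The NFS server itself
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -p udp --dport 2049 -j ACCEPT
# mountd, pinned to a fixed port so it can be allowed through
iptables -A INPUT -p tcp --dport 892  -j ACCEPT
iptables -A INPUT -p udp --dport 892  -j ACCEPT
```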
Common NFS error messages
NFS Automounter for clients or servers
• Automatically mounts directories from the server when needed
• Can be activated manually and at boot
– Management of shares is centralized on the server
– Increases security and reduces lockup problems compared with static shares
• The main configuration sits in /etc/auto.master
– Simple format: MOUNT-KEY LOCATION MOUNT-OPTIONS
– MOUNT-KEY is the local mountpoint, here /doc, /- (from root) and /home
– MOUNT-OPTIONS are the standard mount options previously described, here -ro
– LOCATION is a map: direct, like the file auto.direct pointing at a share on a server, or indirect, like /etc/auto.home
• A common configuration, /etc/auto.misc, handles floppy/cd/dvd
• Centralized administration requires setting up /etc/nsswitch.conf
peter    server:/home/peter
kalle    akvarius:/home/bob
walker   iss:/home/bunny
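Putting the pieces together as a sketch (map file names and the NIS choice are assumptions; the /doc and /home keys and the -ro option come from the slide):

```shell
# /etc/auto.master
/doc    /etc/auto.direct   -ro
/home   /etc/auto.home

# /etc/auto.home (indirect map; one line per user), e.g.:
#   peter   server:/home/peter

# /etc/nsswitch.conf entry for fetching automounter maps centrally:
#   automount: files nis
```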
Wildcards In Map Files
• Wildcards In Map Files– The asterisk (*), which means all
– the ampersand (&), which instructs automounter to substitute the value of the key for the & character.
• Using the Ampersand Wildcard /etc/auto.home
– the key is peter, so the ampersand wildcard is interpreted to mean peter too. This means you'll be mounting the server:/home/peter directory.
• Using the Asterisk Wildcard /etc/auto.home
– In the example below, the key is *, meaning that automounter will handle any attempt to enter the /home directory. But what's the value of the ampersand? It is assigned the value of the key that triggered the access to the /etc/auto.home file. If the access was for /home/peter, then the ampersand is interpreted to mean peter, and bigboy:/home/peter is mounted. If the access was for /home/kalle, then bigboy:/home/kalle would be mounted.
peter   server:/home/&
*   bigboy:/home/&
Other DFS Systems
• RFS: Remote File Sharing– developed by AT&T to address problems with NFS
– stateful system supporting Unix filesystem semantics
– uses same SVR4 commands as NFS, just use rfs as file type
– standard in SVR4 but not found in many other systems
• AFS: Andrew Filesystem– developed as a research project at Carnegie-Mellon University
– now distributed by a third party (Transarc Corporation)
– available for most Unix platforms and PCs running DOS, OS/2, Windows
– uses its own set of commands
– remote systems access through a common interface (the /afs directory)
– supports local data caching and enhanced security using Kerberos
– fast gaining popularity in the Unix community
Summary
• Unix supports file sharing across a network
• NFS is the most popular system and allows Unix to share files with other O/S
• Servers share directories across the network using the share command
• Permanent shared drives can be configured into /etc/fstab
• Clients use mount to access shared drives
• Use mount and exportfs to look at distributed files/catalogs