EduCARMA Linux Training Manual (Volume 1)

Contents

1. OVERVIEW OF THE LINUX OPERATING SYSTEM
1.1. What is an Operating System?
1.1.1. Features of OS
1.2. Introduction to Linux Operating System
1.2.1. History of Linux
1.2.2. Linux kernel and distributions
1.2.3. Open Source Nature of Linux
1.3. Structure of Linux OS and the Linux kernel
1.3.1. Overview of the Linux OS and Kernel Structure
1.3.2. Modular kernel
2. BASICS OF LINUX
2.1. The Linux Shell
2.1.1. Types of Linux shell
2.2. File System / Directory Structure
2.2.1. FileSystem Hierarchy Standard
2.3. Elementary Linux Commands
2.3.1. User/Group Management
2.3.2. Some basic Linux commands
2.4. The X Window System
2.4.1. Running X
2.4.1.1). Starting X
2.4.1.2). Stopping X
2.4.2. Running a Program in X
2.4.3. Command Line Options to X Client
2.4.3.1). Specifying Window Size and Location
2.4.3.2). Specifying Window Colors
2.4.3.3). Running a Shell in X
3. FILE MANIPULATION AND MANAGEMENT
3.1. Files and Directories
3.1.1. Naming Files and Directories
3.1.2. Making an Empty File/Directory
3.1.3. Changing Directories
3.2. File Permissions
3.2.1. Concept of File Permissions and Ownership
3.2.2. Interpreting file permissions
3.2.3. File Permission Dependencies
3.2.3.1). User file-creation mode mask
3.2.4. Changing permissions
3.2.5. Understanding File Permissions Beyond "rwx"
3.2.5.1). 's' bit or 'Set User ID' / SUID and 'Set Group ID' / SGID
3.2.5.2). 't' bit or 'Sticky' bit
3.2.5.3). The Other Mysterious Letters - "d", "l", "b", "c", "p"
3.2.5.4). Setting SUID, SGID, sticky bit on a single file
3.3. Managing file links
3.3.1. Hard links
3.3.2. Symbolic Links
3.4. File ownership and Attributes
3.4.1. Determining the Ownership of a File
3.4.2. Changing the Ownership of a File
3.4.3. Determining the advanced attributes of a file
3.4.4. Changing advanced Attributes of a File
3.5. Finding Files
3.5.1. Finding All Files That Match a Pattern
3.5.2. Finding Files in a Directory Tree
3.5.2.1). Finding Files in a Directory Tree by Name
3.5.2.2). Finding Files in a Directory Tree by Size
3.5.2.3). Finding Files in a Directory Tree by Modification Time
3.5.2.4). Finding Files in a Directory Tree by Owner
3.5.2.5). Running Commands on the Files You Find
3.5.3. Finding Files in Directory Listings
3.5.3.1). Finding the Largest Files in a Directory
3.5.3.2). Finding the Smallest Files in a Directory
3.5.3.3). Finding the Smallest Directories
3.5.3.4). Finding the Largest Directories
3.5.3.5). Finding the Number of Files in a Listing
3.5.4. Finding Where a Command Is Located
3.6. Managing Files
3.6.1. Determining File Type and Format
3.6.2. Changing File Modification Time
3.6.3. Splitting a File into Smaller Ones
3.6.4. Comparing Files
3.6.4.1). Determining Whether Two Files Differ using 'cmp'
3.6.4.2). Finding the Differences between Files using 'diff'
3.6.4.3). Patching a File with a Difference Report
3.6.5. File Compression/Decompression
3.6.5.1). Compression/Decompression Tools
3.6.5.2). Archiving Files at the Shell Prompt
4. TEXT MANAGEMENT AND EDITORS
4.1. The 'vi' editor
4.1.1. Starting "vi"
4.1.2. Inserting text
4.1.3. Deleting text
4.1.4. Changing text
4.1.5. Commands for moving the cursor
4.1.6. Saving files and quitting vi
4.1.7. Editing another file
4.1.8. Running shell commands
4.2. The Emacs Editor
4.2.1. Getting Acquainted with Emacs
4.2.1.1). Basic Emacs Editing Keys
4.3. The pico editor
4.4. The editor 'joe'
4.5. Text Manipulation
4.5.1. Searching for Text
4.5.2. Matching Text Patterns using Regular Expressions
4.5.2.1). MetaCharacters and their meaning
4.5.2.2). Matching Lines Ending with Certain Text
4.5.2.3). Matching Lines of a Certain Length
4.5.2.4). Matching Lines That Contain Any of Some Regexps
4.5.2.5). Matching Lines That Contain All of Some Regexps
4.5.2.6). Matching Lines That Don't Contain a Regexp
4.5.2.7). Matching Lines That Only Contain Certain Characters
4.5.2.8). Using a List of Regexps to Match From
4.5.3. Searching More than Plain Text Files
4.5.4. Matching Lines in Web Pages
4.5.5. Searching and Replacing Text
5. MORE ABOUT SHELL & COMMAND LINE INTERFACE
5.1. Passing Special Characters to Commands
5.2. Letting the Shell Complete What You Type
5.3. Repeating the Last Command You Typed
5.4. Running a List of Commands
5.5. Redirecting Input and Output
5.5.1. Redirecting Input to a File
5.5.2. Redirecting Output to a File
5.5.3. Redirecting Error Messages to a File
5.5.4. Redirecting Output to Another Command's Input
6. BASICS OF LINUX SYSTEM ADMINISTRATION
6.1. Disks, Partitions and File Systems
6.1.1. Character and Block devices
6.1.2. Partitions/MBR
6.1.2.1). Why Partition Hard Drive(s)
6.1.2.2). Master Boot Record or MBR
6.1.2.3). Partitioning Scheme
6.1.2.4). Partition types
6.1.2.5). Partitioning a hard disk
6.1.2.6). Various Mount Points
6.1.2.7). Device files and partitions
6.1.3. FileSystems
6.1.3.1). Some of the Linux Filesystems
6.1.4. Software RAID
6.1.4.1). Advantages of using RAID
6.1.4.2). Hardware and Software RAID
6.1.4.3). Different Types of RAID Implementations
6.1.5. Logical Volume Manager (LVM)
6.2. RedHat Installation and Hardware Configuration
6.2.1. Preparing for Installation
6.2.1.1). Installation Disk Space Requirements
6.2.1.2). Installation Methods
6.2.1.3). Choosing the Installation Class
6.2.1.4). Hardware/System Information Required
6.2.2. RedHat Installation Procedure
6.2.2.1). Initial Installation Steps
6.2.3. Disk Partitioning Setup
6.2.3.1). Automatic Partitioning
6.2.3.2). Manual Partitioning Using Disk Druid
6.2.3.3). Recommended Partitioning Scheme
6.2.3.4). Adding Partitions
6.2.4. Boot Loader Configuration
6.2.4.1). Advanced Boot Loader Configuration
6.2.5. Network Configuration
6.2.6. Firewall Configuration
6.2.7. Language Support Selection
6.2.8. Time Zone Configuration
6.2.9. Set Root Password
6.2.10. Authentication Configuration
6.2.11. Package Group Selection
6.2.12. Boot Diskette Creation
6.2.13. Hardware Configuration
6.2.14. Installation Complete
6.3. System Administration Commands
6.3.1. Process Management
6.3.1.1). Process task_struct data structure
6.3.1.2). ps
6.3.1.3). top
6.3.1.4). pstree
6.3.1.5). kill
6.3.1.6). killall
6.3.1.7). fuser
6.3.1.8). pidof
6.3.1.9). skill
6.3.1.10). Background Process - &
6.3.1.11). nice
6.3.1.12). snice
6.3.1.13). /proc/$PID directory
6.3.2. System Startup and Shutdown
6.3.2.1). The Boot Process
6.3.2.2). The Init Program
6.3.2.3). Runlevels
6.3.2.4). System Processes
6.3.2.5). The Linux Login Process
6.3.2.6). Single-User Mode
6.3.2.7). Shutting Down
6.3.3. Memory Management and Performance Monitoring
6.3.3.1). Virtual Memory / Swap Space
6.3.3.2). Swapping In and Swapping Out
6.3.3.3). Commands which show the current memory usage
6.3.3.4). Creating a swap space
6.3.3.5). Using a Swap Space
6.3.3.6). Disk Buffering / Buffer cache
6.3.3.7). Direct Memory Access or DMA
6.3.3.8). Resource Monitoring Tools
6.3.4. Disk Management Tools
6.3.4.1). Listing a Disk's Free Space
6.3.4.2). Listing a File's Disk Usage
6.3.4.3). Partitioning a Hard Drive
6.3.5. File System Management
6.3.5.1). Creating a filesystem
6.3.5.2). Mounting/Unmounting File Systems, fstab & mtab
6.3.5.3). Checking File System Integrity
6.3.6. Disk Quota Management
6.3.6.1). Configuring and Implementing Disk Quotas on Partitions
6.3.6.2). Managing Disk Quotas
6.3.7. RAID Setup
6.3.7.1). Linear RAID Setup
6.3.7.2). RAID-0 Setup
6.3.7.3). RAID-1 Setup
6.3.7.4). RAID-5 Setup
7. NETWORKING AND NETWORK SERVICES
7.1. Networking Overview
7.1.1. OSI Reference Model
7.1.2. TCP/IP Networks
7.1.2.1). Layers in the TCP/IP Protocol Architecture
7.1.3. LAN Network
7.1.3.1). Area Networks
7.1.3.2). LAN Basics
7.1.3.3). LAN Protocols and the OSI Reference Model
7.1.3.4). LAN Media-Access Methods
7.1.3.5). LAN Transmission Methods
7.1.3.6). LAN Topologies
7.1.3.7). LAN Devices
7.1.4. WAN Basics
7.1.4.1). WAN Networks
7.1.4.2). WAN Virtual Circuits
7.1.4.3). WAN Devices
7.1.4.4). Other Area Networks
7.1.5. Ethernet and Networking Hardware
7.1.5.1). Ethernet Network Medium
7.1.5.2). Ethernet Network Interface
7.1.6. Internet Protocol or IP Address
7.1.6.1). IP Address Notation and Classes of Networks
7.1.7. Transmission Control Protocol
7.1.8. User Datagram Protocol
7.1.9. Connection Ports
7.1.10. Address Resolution
7.1.11. IP Routing
7.1.11.1). Subnetworks
7.1.11.2). Gateways
7.1.11.3). Routing Table
7.2. Linux Network Administration
7.2.1. Network Configuration Files
7.2.2. Network Administration Commands
7.2.2.1). IP Address Assignment
7.2.2.2). Setting up Routing
7.2.2.3). Network Monitoring/Analysis Tools
7.2.2.4). Changing the System Hostname
7.2.2.5). Networking terms
7.2.3. Packet Filtering Using Iptables
7.2.3.1). Network Address Translation (NAT)
7.2.3.2). Packet filtering tables
7.2.3.3). Built-In Chains for the different tables
7.2.3.4). Types of Targets
7.2.3.5). The Iptables Commandline
7.3. Network Information Service (NIS)
7.3.1. NIS Maps
7.3.2. NIS Domain
7.3.2.1). NIS Topologies used
7.3.3. NIS Server Installation and Configuration
7.3.3.1). Installing the NIS Server utility
7.3.3.2). Setting up the NIS domain name
7.3.3.3). Configuring and starting the daemon ypserv
7.3.3.4). Initializing the NIS Maps
7.3.3.5). Starting the NIS Password Daemon
7.3.3.6). Starting the Server Transfer daemon
7.3.3.7). Modifying the startup process to start NIS at Boot
7.3.4. Installing and Configuring the NIS Client
7.3.4.1). Installing the ypbind utility
7.3.4.3). Configure and start the NIS client daemon
7.3.4.4). Test the Client daemon
7.3.4.5). Configuring the NIS Client startup files
7.3.4.6). NIS Configuration Files/Commands
7.3.5. More about NIS
7.4. Network File Systems (NFS)
7.4.1. Main Configuration Files
7.4.1.1). /etc/exports file
7.4.1.2). /etc/hosts.allow and /etc/hosts.deny
7.4.2. NFS Server Setup
7.4.2.1). Pre-requisites
7.4.2.2). The NFS Daemons and starting them
7.4.2.3). Verifying that NFS is running
7.4.2.4). Making changes to /etc/exports later on
7.4.3. Setting up an NFS Client
7.4.3.1). Mounting remote directories
7.4.3.2). Getting NFS File Systems to Be Mounted at Boot Time
7.4.3.3). Options for Mounting
7.4.4. Using Automount services (Autofs)
7.4.4.1). Autofs Setup
7.4.4.2). Starting and Stopping Autofs
7.5. TCP Wrappers and Xinetd Services
7.5.1. TCP Wrappers
7.5.1.1). Advantages of TCP Wrappers
7.5.1.2). TCP Wrappers Configuration Files
7.5.2. Xinetd
7.5.2.1). /etc/xinetd.conf
7.5.2.2). The /etc/xinetd.d/ Directory
7.5.2.3). Access Control Options
7.5.2.4). Logging Options
7.5.2.5). Binding and Redirection Options
8. SHELL SCRIPTING
8.1. Shell Scripting Basics
8.1.1. Variables in Shell
8.1.1.1). Defining User-defined variables
8.1.1.2). Rules for naming variables
8.1.1.3). The 'echo' command
8.1.2. Shell arithmetic
8.1.3. Understanding Quotes inside the Shell
8.1.4. Finding the Exit Status of a Command Execution
8.1.5. Reading input from the Standard Input
8.1.6. Command Line Arguments
8.1.7. Structured Language Constructs
8.1.7.1). Decision Making
8.1.7.2). Flow Control
8.1.7.3). Loop Constructs
8.1.7.4). Debugging a Shell script
8.2. Advanced Shell Scripting
8.2.1. /dev/null
8.2.2. Conditional Execution using && and ||
8.2.3. I/O Redirection and file descriptors
8.2.4. Essential Utilities
8.2.4.1). cut
8.2.4.2). paste
8.2.4.3). join
8.2.4.4). tr
8.2.4.5). uniq
8.2.5. Awk Utility
8.2.5.1). Understanding Awk Basic Examples
8.2.5.2). Doing arithmetic and user defined variables with awk
8.2.6. The sed Utility
8.2.6.1). Sample sed Commands/Scripts
9. INSTALLING LINUX SOFTWARE/KERNEL
9.1. RPM Installations
9.1.1. Getting the RPM source
9.1.2. Manually installing rpms
9.1.3. RPM Installation Errors
9.1.4. Installing Source Rpms
9.1.5. Listing Installed RPMs
9.1.6. Listing Files Associated with RPMs
9.1.6.1). Listing Files for Already Installed RPMs
9.1.6.2). Listing Files in RPM Files
9.1.6.3). Listing the RPM to Which a File Belongs
9.1.7. Uninstalling Rpms
9.2. Software Installations from Source using Tarballs
9.2.1. The GCC Compiler
9.2.2. Steps for installing from Tarball
9.3. Linux Kernel Recompilation
9.3.1. Linux kernel - A Modular Kernel
9.3.2. Recompiling the kernel
9.3.2.1). Prerequisites
9.3.2.2). Checking the current kernel and Redhat version
9.3.2.3). Kernel Recompilation Steps
9.3.3. Command Line Tools for Kernel level administration
9.3.3.1). Kernel Modules Management
9.4. More About Lilo and Grub
9.4.1. Grub (Grand Unified Boot loader)
9.4.1.1). Stages in Grub Loading
9.4.1.2). Direct Loading and Chain Loading Booting Methods
9.4.1.3). Naming Conventions and Partitions used by Grub
9.4.1.4). Installing and Booting Grub
9.4.1.5). GRUB Interfaces
9.4.1.6). GRUB Commands
9.4.1.7). GRUB Menu Configuration File
9.4.1.8). Changing Runlevels at Boot Time
9.4.2. LILO or Linux Loader
9.4.2.1). LILO Booting stages
9.4.2.2). Lilo Configuration File
9.4.2.3). Installing lilo
9.4.2.4). Changing Runlevel at Boot Time
10. LINUX SERVICES
10.1. Open SSH Server
10.1.1. Configuring an OpenSSH server
10.1.2. Configuring an OpenSSH Client
10.1.2.1). Using the SSH command
10.1.2.2). Using the scp Command
10.1.2.3). Using the sftp Command
10.1.2.4). Generating Key Pairs
10.2. Berkeley Internet Name Domain (BIND) Server
10.2.1. Nameserver Zones
10.2.2. Types of Nameservers
10.2.3. BIND as a Nameserver
10.2.3.1). Configuration Files
10.3. File Transfer Program or FTP
10.3.1. FTP server/client
10.3.2. FTP Commandline Interface
10.3.2.1). Anonymous FTP
10.3.2.2). Common FTP Commands
10.4. Service Manager: chkconfig, ntsysv, xinetd
10.4.1. ChkConfig
10.4.1.1). Chkconfig commandline Usage
10.4.2. Ntsysv
10.4.3. Xinetd Services
10.5. Telnet Program
10.6. Dynamic Host Configuration Protocol (DHCP)
10.6.1. Advantages of DHCP
10.6.2. DHCP server/Client
10.6.2.1). DHCP server configuration file
10.6.2.2). DHCP communication between server-client
10.6.2.3). DHCP Client configuration
10.7. Linux Samba Server
10.7.1. Samba configuration file
10.7.2. Samba password file for Clients
10.8. Linux Proxy Server - Squid
10.8.1. Squid Package and Config File
10.8.2. Stopping, Starting and Restarting Squid
10.8.3. Configuring squid Clients
11. SECURING LINUX SYSTEMS
11.1. Physical Security
11.2. Local Security
11.2.1. Checking for Unlocked Accounts
11.2.2. Checking for Unused Accounts
11.3. Files and File system Security
11.3.1. Default Umask
11.3.2. SUID/SGID Files
11.3.3. World-Writable Files
11.3.4. Setting File System Limits
11.3.5. Unowned Files
11.3.6. Protecting Binaries like Compilers
11.3.7. Integrity Checking
11.3.8. Trojan Horses, Backdoors and Rootkits
11.3.8.1). Nmap tool
11.4. Password Security and Encryption
11.4.1. Encryption Methods
11.4.1.1). DES (Data Encryption Standard)
11.4.1.2). PGP and Public-Key Cryptography
11.4.2. Authentication Methods
11.4.2.1). PAM - Pluggable Authentication Modules
11.4.2.2). Cryptographic IP Encapsulation (CIPE)
11.4.2.3). Kerberos
11.4.3. Enforcing Stronger Passwords
11.4.4. Locking User Accounts After Many Login Failures
11.4.5. Restricting Direct Login for System/Shared Accounts
11.4.6. Password Cracking/Brute Force Attack
11.4.6.1). How the brute force attack works?
11.4.6.2). Signs of a brute force attempt
11.4.6.3). Tools to stop and prevent brute force hack attempts
11.5. Network Security
11.5.1. Network Intruders and Attacks
11.5.1.1). Packet Sniffers
11.5.1.2). Denial Of Service (DOS) Attacks
11.5.1.3). Attacks via IP Spoofing
11.5.2. TCP Wrappers and xinetd
11.5.2.1). Controlling DOS Attacks Via Xinetd
11.5.3. SATAN, ISS, and Other Network Scanners
11.5.3.1). Detecting Port Scans
11.5.4. Securing SSH
11.5.5. Securing NFS
11.5.5.1). Restricting Incoming NFS Requests
11.5.6. Kernel Tunable Security Parameters
11.5.6.1). Enable TCP SYN Cookie Protection
11.5.6.2). Disable IP Source Routing
11.5.6.3). Disable ICMP Redirect Acceptance
11.5.6.4). Enable IP Spoofing Protection
11.5.6.5). Enable Ignoring of ICMP Requests
11.5.6.6). Enable Ignoring of Broadcast Requests
11.5.6.7). Enable Bad Error Message Protection
11.5.6.8). Enable Logging of Spoofed/Source Routed/Redirect Packets

1. OVERVIEW OF THE LINUX OPERATING SYSTEM

1.1. What is an Operating System?

In simple terms, an operating system is a manager. It manages all the available resources on a computer. These resources can be the hard disk, a printer, or the monitor screen. Even memory is a resource that needs to be managed. Within an operating system are the management functions that determine who gets to read data from the hard disk, what file is going to be printed next, what characters appear on the screen, and how much memory a certain program gets.

For example, if you own a car, you don't really need to know the details of the internal combustion engine to understand that this is what makes the car move forward. You don't need to know the principles of hydraulics to understand what isn't happening when pressing the brake pedal has no effect.

An operating system is like that. You can work productively for years without even knowing what operating system you're running on, let alone how it works. Sometimes things go wrong. In many companies, you are given a number to call when problems arise, you report what happened, and it is dealt with.

By having a working knowledge of the principles of an operating system, you are in a better position to understand not only the problems that can arise, but also what steps are necessary to find a solution. You also have a better relationship with things you understand: in a car, if you see steam pouring out from under the hood, you know that you need to add water. The same applies to the operating system.

Linux is an operating system like many others, such as DOS, Mac OS etc. In this section, I am going to discuss what goes into an operating system, what it does, how it does it, and how you, the user, are affected by all this.

1.1.1. Features of OS

1. Multitasking

An operating system that is capable of multitasking allows multiple software processes to be run at the same time. It does so by switching back and forth between the tasks extremely fast, so that the computer appears to be working on multiple tasks "at the same time."

2. Multi-user

A multi-user operating system allows multiple users to use the same computer at the same time and/or at different times. The operating system needs to keep track of whose program, or task, is currently writing its file to the printer, which program needs to read a certain spot on the hard disk, and so on. This is the concept of multi-user operation, as multiple users have access to the same resources.

3. Multiprocessing

A multiprocessing operating system is one which is capable of supporting and utilizing more than one computer processor. Multiprocessing systems are much more complicated than single-processor systems because the operating system must allocate resources to competing processes in a reasonable manner. If a computer has multiple CPUs, it can do multiprocessing.

4. Process Management

One basic concept of an operating system is the process. A process is more than just a program. Especially in a multi-user, multi-tasking operating system such as UNIX, there is much more to consider. Each program has a set of data that it uses to do what it needs. Often, this data is not part of the program. For example, if you are using a text editor, the file you are editing is not part of the program on disk, but is part of the process in memory. If someone else were to be using the same editor, both of you would be using the same program. However, each of you would have a different process in memory.

* Child/Parent Process: When you log onto a Linux system, you usually get access to a command line interpreter, or shell. This takes your input and runs programs for you. If you were to start up an editor, your file would be loaded and you could edit your file. The interesting thing is that the shell has not gone away; it is still in memory. The editor is simply another process that belongs to you. Because it was started by the shell, the editor is considered a "child" process of the shell. The shell is the parent process of the editor. (A process has only one parent, but may have many children.) A small command-line illustration of this follows below.
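To make the parent/child idea concrete, here is a minimal shell sketch; the PID numbers shown are only illustrative and will differ on your system:

$ echo $$                      # prints the process ID (PID) of your current shell
3541
$ ps -o pid,ppid,comm -p $$    # show the shell's PID and its parent's PID (PPID)
  PID  PPID COMMAND
 3541  3539 bash

If you now start an editor from this shell, a ps listing for the editor process would show 3541 in its PPID column, identifying the shell as its parent.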

* Daemons: In addition to user processes, such as shells, text editors, and databases, there are system processes running. These are processes that were started by the system. Several of these deal with managing memory and scheduling turns on the CPU. Others deal with delivering mail, printing, and other tasks that we take for granted. In principle, both of these kinds of processes are identical. However, system processes can run at much higher priorities and therefore run more often than user processes. Typically a system process of this kind is referred to as a daemon process or background process, because it runs behind the scenes (i.e. in the background) without user intervention. It is also possible for a user to put one of his or her own processes in the background.

In short, the OS keeps track of all the processes running on the system and also manages multitasking and multiprocessing.

5. Memory Management

On UNIX, when you run a program (like any of the shell commands you have been using), the actual computer instructions are read from a file on disk, from one of the bin/ directories, and placed in RAM. The program is then executed in memory and becomes a process. When the process has finished running, it is removed from memory. The CPU assists the operating system in managing users and processes. A diagram of how multiple processes are laid out in memory would show that many processes can share the same portion of memory. We'll look into this topic in more detail at a later stage.

1.2. Introduction to Linux Operating System

1.2.1. History of Linux

Linux is a freely distributable version of UNIX.

* UNIX was born at the end of the 1960s; it began as a one-man project designed by Ken Thompson of Bell Labs and grew to become one of the most widely used operating systems.

* Linus Torvalds, who was then a student at the University of Helsinki in Finland, developed Linux in 1991. It was released for free on the Internet.

* He was inspired by MINIX, which was written from scratch by Andrew S. Tanenbaum, a US-born Dutch professor who wanted to teach his students the inner workings of a real operating system. It was designed to run on the Intel 8086 microprocessors that had flooded the world market. As an operating system, MINIX was not a superb one, but it had the advantage that the source code was available, and it served as a source of inspiration for Torvalds.

1.2.2. Linux kernel and distributions

The Linux kernel is the core of the Linux OS and can be thought of as its "chief of operations". Although Linux is technically only the kernel, it is commonly considered to be all of the associated programs and utilities. Combined with the kernel, the utilities and often some applications comprise a commercial distribution.

A distro comprises a prepackaged kernel, system utilities, GUI interfaces and application programs, and it is the kernel which puts the "Linux" into all the distributions. Some of the popular Linux distros are RedHat, Mandrake, SuSE, Debian etc.

* RedHat

RedHat Linux is considered by many to be the best distribution for beginners. It is designed for those who simply want to get Linux working on their system with a minimum amount of effort.

* Mandrake

Mandrake is a good choice for someone who is just starting Linux and wants all the new hardware support. The best thing about Mandrake is that it is still RedHat compatible, so support is as plentiful as RedHat support from the Linux community.

* Debian

Debian is for those who would like to learn the inner workings of Linux, yet demand more friendly features than are provided with distros like Slackware. Prior knowledge of Unix and Linux is recommended before trying this distribution.

* Slackware

Slackware is one of the oldest distributions of Linux. It lacks many 'user-friendly' features that can be taken for granted with many other distros.

* SuSE

Originally begun as a German Linux distribution, SuSE has become increasingly popular in the US and is the number one Linux distribution in Europe. It is considered one of the most complete distros available, with many software packages available for almost any application. SuSE is a great distro for beginners, on par with RedHat.

* Corel

Corel is a distribution aimed at new users, offering an attractive graphical interface and quick setup. Installing new applications not included with the distribution is troublesome, however.

* LinuxPPC

LinuxPPC is a powerful and easy-to-use port of Linux to the PowerPC platform.

* FreeBSD

FreeBSD is a "Linux-like" free Unix operating system based on the BSD source code. Its main focus is servers, but it can also function as a workstation OS, supporting most Linux applications. The extensive "Ports Collection" makes installation of software simple and relatively painless, but hardware support tends to lag behind Linux.

* Fedora and RedHat Enterprise Linux

Fedora and RedHat Enterprise Linux are two descendants of RedHat Linux.

The Fedora Project is one of the sources for new technologies and enhancements that may be incorporated into RedHat Enterprise Linux in the future. The goal of the Fedora Project is to work with the Linux community to build a complete, general-purpose operating system exclusively from open source software.

RedHat Enterprise Linux is subscription-based, which comes with a charge, and has both Server and Client solutions.

1.2.3. Open Source Nature of Linux

Linux is developed under the GNU General Public License, which means the source code for Linux is freely available to everyone. The GNU project by Richard Stallman was a software movement to provide free and quality software. The first organized effort to produce open source software was the Free Software Foundation (FSF), founded by Richard M. Stallman (known as RMS) in 1985. The FSF developed this concept into the GNU Public License (GPL), a software distribution license that stipulates (in a nutshell):

* Software released under the GPL shall be freely distributable
* The software shall be distributed along with its source code
* Anyone is free to modify the source code and change the program, as long as the resulting program is also freely distributable and modifiable.

Around half of the open source software available today is made available under the terms of the GPL.

1.3. Structure of Linux OS and the Linux kernel

1.3.1. Overview of the Linux OS and Kernel Structure

The Linux operating system is composed of four major subsystems:

* User Applications -- the set of applications in use on a particular Linux system will be different depending on what the computer system is used for, but typical examples include a text editor and a web browser.

* O/S Services -- these are services that are typically considered part of the operating system (a windowing system, command shell, etc.); the programming interface to the kernel (compiler tools and libraries) is also included in this subsystem.

* Linux Kernel -- this is the main area of interest; it abstracts and mediates access to the hardware resources, including the CPU.

* Hardware Controllers -- this subsystem is comprised of all the possible physical devices in a Linux installation; for example, the CPU, memory hardware, hard disks, and network hardware are all members of this subsystem.

The Linux kernel presents a virtual machine interface to user processes. The kernel actually runs several processes concurrently, and is responsible for mediating access to hardware resources so that each process has fair access to processor and memory while inter-process security is maintained.

The Linux kernel is composed of five main subsystems:

1. The Process Scheduler (SCHED): responsible for controlling process access to the CPU. The scheduler enforces a policy that ensures that processes will have fair access to the CPU, while ensuring that necessary hardware actions are performed by the kernel on time.

2. The Memory Manager (MM): permits multiple processes to securely share the machine's main memory system. In addition, the memory manager supports virtual memory, which allows Linux to support processes that use more memory than is available in the system. Unused memory is swapped out to persistent storage using the file system, then swapped back in when it is needed. It also handles requests for run-time memory allocation.

3. The Virtual File System (VFS): abstracts the details of the variety of hardware devices by presenting a common file interface to all devices. In addition, the VFS supports several file system formats that are compatible with other operating systems.

4. The Network Interface (NET): provides access to several networking standards and a variety of network hardware.

5. The Inter-Process Communication (IPC) subsystem: supports several mechanisms for process-to-process communication on a single Linux system. Processes communicate with each other and with the kernel to coordinate their activities.

The most central subsystem is the process scheduler: all other subsystems depend on the process scheduler, since all subsystems need to suspend and resume processes. Usually a subsystem will suspend a process that is waiting for a hardware operation to complete, and resume the process when the operation is finished.

The other dependencies are somewhat less obvious, but equally important:

* The process-scheduler subsystem uses the memory manager to adjust the hardware memory map for a specific process when that process is resumed.

* The inter-process communication subsystem depends on the memory manager to support a shared-memory communication mechanism. This mechanism allows two processes to access an area of common memory in addition to their usual private memory.

* The virtual file system uses the network interface to support a network file system (NFS), and also uses the memory manager to provide a ramdisk device.

* The memory manager uses the virtual file system to support swapping; this is the only reason that the memory manager depends on the process scheduler. When a process accesses memory that is currently swapped out, the memory manager makes a request to the file system to fetch the memory from persistent storage, and suspends the process.

On top of these five components comes the System Call Interface, which hides the hardware layer from the user applications. We'll be dealing with these topics in more detail later.

1.3.2. Modular kernel

* One of the greatest advantages of the Linux kernel is its modular structure. Most of the Linux kernel is built as a collection of source modules.

* The required modules are compiled together while the kernel is being built. But that's not all. The Linux kernel has the ability to load and unload modules on the fly, as they are needed, without requiring a system shutdown. That is the reason why the Linux kernel is called a dynamic kernel.

* This is also the reason why Linux can run on such a wide variety of hardware platforms. A developer only has to port the machine-specific modules to support new hardware. A short example of loading and listing modules is sketched below.
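As a brief illustration of this modularity (a hedged sketch: the module name 'floppy' is only an example, the output shown is abbreviated and varies from system to system, and loading or removing modules requires root privileges):

$ /sbin/modprobe floppy     # load a module (and anything it depends on) into the running kernel
$ /sbin/lsmod               # list the modules currently loaded
Module                  Size  Used by
floppy                 65200  0
ext3                  116324  2
$ /sbin/rmmod floppy        # unload the module once it is no longer in use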

2. BASICS OF LINUX

2.1. The Linux Shell

Linux is a multitasking, multiuser operating system, which means that many people can run many different applications on one computer at the same time. Before you can use a newly installed Linux system, you must set up a user account for yourself. It's usually not a good idea to use the root account for normal use; you should reserve the root account for running privileged commands and for maintaining the system.

2.1.1. Types of Linux shell

The shell is the Linux command-line interface, and there are different types of shell in Linux. Each shell has its own pros and cons, but each shell can perform the same basic tasks. The main difference between them is the prompt and how they interpret commands.

* bash : Bourne Again Shell, developed by the Free Software Foundation
* sh : Bourne Shell, named after its creator Steve Bourne
* csh : C Shell, which came as part of the BSD Unix implementation
* ksh : Korn Shell, named after David Korn

All the shells above come as a standard part of any Linux distro. The most common shell used by default on Linux systems is bash. In bash, the default prompt for a user is a $ sign, unless you are logged in as root, in which case it is the # sign.

When you enter a command, the shell does several things:

* First, it checks the command to see if it is internal to the shell (that is, a command which the shell knows how to execute itself; there are a number of these commands, and we'll go into them later).

* The shell also checks to see if the command is an alias, or substitute name, for another command.

* If neither of these conditions applies, the shell looks for a program on disk having the specified name. If successful, the shell runs the program, sending along the arguments specified on the command line. A small demonstration of this lookup order is shown below.
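A minimal sketch of how you can observe this from a bash prompt (the output shown is illustrative, and the alias 'll' is defined here only for the demonstration):

$ echo $SHELL          # the login shell recorded for your account
/bin/bash
$ type cd              # a command built into the shell itself
cd is a shell builtin
$ alias ll='ls -l'     # define an alias, then ask the shell how it resolves it
$ type ll
ll is aliased to `ls -l'
$ type ls              # an external program found on disk
ls is /bin/ls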

2.2. File System / Directory Structure

2.2.1. FileSystem Hierarchy Standard

* In Linux (and Unix), everything is a file. Rather, everything is mapped by the system onto a file. Thus, a hard-disk partition is one file, a detected hardware device is a file, and a semaphore for IPC is still another.

* The Linux file-system structure is like a tree, with the root directory denoted as '/'. The entire system resides under this root directory. Everything starts from the root directory, represented by '/', and then expands into sub-directories. Where DOS/Windows had various partitions and then directories under those partitions, Linux places all the partitions under the root directory by 'mounting' them under specific directories.

* The official way files are organized in Linux is called the "Filesystem Hierarchy Standard" (FHS).

The following directories, or symbolic links to directories, are required in /.

1. bin ---------- Essential command binaries

/bin contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode). It may also contain commands which are used indirectly by scripts.

* There must be no subdirectories in /bin.
* It should not be mounted separately.

2. boot ---------- Static files of the boot loader

This directory contains everything required for the boot process except configuration files not needed at boot time and the map installer. Thus /boot stores data that is used before the kernel begins executing user-mode programs. This may include saved master boot sectors and sector map files.

* The operating system kernel must be located in /boot.
* It is usually mounted as a separate partition on the hard disk.

3. dev ---------- Device files

This is a very interesting directory that highlights one important characteristic of the Linux filesystem - everything is a file or a directory. Look through this directory and you should see hda1, hda2 etc., which represent the various partitions on the first master drive of the system. /dev/cdrom and /dev/fd0 represent your CDROM drive and your floppy drive. This may seem strange, but it will make sense if you compare the characteristics of files to those of your hardware. Both can be read from and written to. Take /dev/dsp, for instance. This file represents your speaker device, so any data written to this file will be redirected to your speaker. Try 'cat /etc/lilo.conf > /dev/dsp' and you should hear some sound on the speaker.

* It should not be mounted separately.

4. etc ---------- Host-specific system configuration

The /etc hierarchy contains configuration files. A "configuration file" is a local file used to control the operation of a program; it must be static and cannot be an executable binary.

* No binaries may be located under /etc.

5. home ---------- User home directories (optional)

Linux is a multi-user environment, so each user is also assigned a specific directory which is accessible only to them and the system administrator. These are the user home directories, which can be found under /home/username.

6. lib ---------- Essential shared libraries and kernel modules

This contains all the shared libraries that are required by system programs. The Windows equivalent of a shared library would be a DLL file. These libraries are needed to boot the system and run the commands in the root filesystem, i.e. by the binaries in /bin and /sbin.

7. media ---------- Mount point for removable media (optional)

This directory contains subdirectories which are used as mount points for removable media such as floppy disks, cdroms and zip disks.

8. mnt ---------- Mount point for mounting a filesystem temporarily

This is a generic mount point under which you mount your filesystems or devices. Mounting is the process by which you make a filesystem available to the system. After mounting, your files will be accessible under the mount point. This directory usually contains mount points or sub-directories where you mount your floppy and your CD. You can also create additional mount points here if you want.

9. opt ---------- Add-on application software packages

/opt is reserved for the installation of add-on application software packages. A package to be installed in /opt must locate its static files in a separate /opt/<package> directory tree, where <package> is a name that describes the software package.

10. sbin ---------- Essential system binaries

This directory contains all the binaries that are essential to the working of the system. These include system administration as well as maintenance and hardware configuration programs. You will find lilo, fdisk, init, ifconfig etc. here. These are the essential programs that are required by all the users. Another directory that contains system binaries is /usr/sbin. That directory contains other binaries of use to the system administrator; this is where you will find the network daemons for your system, along with other binaries that only the system administrator has access to.

11. srv ---------- Data for services provided by this system (optional)

/srv contains site-specific data which is served by this system.

12. tmp ---------- Temporary files

This directory contains mostly files that are required temporarily. Many programs use this to create lock files and for temporary storage of data.

13. usr ---------- Secondary hierarchy

/usr is the second major section of the filesystem. It needs to be safe from being overwritten when the system software is updated.

* Locally installed software must be placed within /usr/local rather than /usr, unless it is being installed to replace or upgrade software in /usr.
* X and its supporting libraries can be found here. User programs like telnet, ftp, apache etc. are also placed here.
* /usr/doc contains useful system documentation. /usr/src/linux contains the source code for the Linux kernel.

14. var ---------- Variable data

/var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files. Some portions of /var are not shareable between different systems, for instance /var/log, /var/lock, and /var/run. This directory contains spooling data like mail and also the output from the printer daemon.

* The system logs are also kept here, in /var/log/messages.

* You will also find the database for BIND in /var/named and for NIS in /var/yp.

15. proc ---------- Memory-resident file system

The proc pseudo file system is a real-time, memory-resident file system that tracks the processes running on your machine and the state of your system. The most striking thing about the /proc file system is that it doesn't exist on any particular media. The /proc file system resides in virtual memory and maintains highly dynamic data on the state of your operating system. Most of the information in the /proc file system is updated to match the current state of the operating system. The contents of the /proc file system can be read by anyone who has the requisite permissions.

* Have you ever wondered where exactly the information dished out to you by "ps" and "top" comes from? The information for these processes comes from the /proc file system, which is updated on the fly as changes take place in the processes.

More info on FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html

2.3. Elementary Linux Commands

2.3.1. User/Group Management

Before you can use a newly installed Linux system, you must set up a user account for yourself. It's usually not a good idea to use the root account for normal use; you should reserve the root account for running privileged commands and for maintaining the system.

* Users can be either people, meaning accounts tied to physical users, or accounts which exist for specific applications to use, such as the apache user.

* Groups are logical expressions of organization, tying users together for a common purpose. Users within the same group can read, write, or execute files owned by the group.

* Each user and group has a unique numerical identification number, called a userid (UID) and a groupid (GID) respectively.

* On Linux servers, user and group ids lower than 100 are reserved for privileged system users on the Linux machine.

The following command line tools can be used to manage users and groups:

1. Creating a User

In order to create an account for yourself, log in as root and use the useradd or adduser command.

$ useradd carma

* When the user carma is added, an entry is created inside the configuration file /etc/passwd corresponding to that user, as below:

carma:x:504:509::/home/carma:/bin/bash

* The number 504 is the user id for the user 'carma' on the Linux machine, and 509 is the group id of the group to which the user carma belongs.

2. Setting the Password for a User

You can set the password for a user using the command "passwd". The same command is used for changing a user's password as well.

$ passwd carma

* In multiuser environments it is very important to use shadow passwords (provided by the shadow-utils package).

* Doing so enhances the security of system authentication files. For this reason, the Red Hat Linux installation program enables shadow passwords by default. Hence, the password set for a Linux user is stored inside the file '/etc/shadow' in encrypted form.

3. Logging In

At login time, you'll see a prompt resembling the following on your screen:

login:

Here, enter your username and press the Return key. Now, enter your password. It won't be echoed to the screen when you log in, so type carefully. If you mistype your password, you'll see a message that the login is incorrect and you'll have to try again. Once you have correctly entered the username and password, you are officially logged into the system.

4. Logging Out

At the shell prompt, use the command "exit" to log out of the shell, or press <Ctrl-d>.

$ exit

5. Deleting a User

A Linux user can be deleted using the command line tool 'userdel'. This command deletes the entries for the user from /etc/passwd, /etc/group and /etc/shadow; with the -r option it also removes the files in the user's home directory.

$ userdel <user>

6. Modifying a User

The usermod command modifies the system account files to reflect the changes that are specified on the command line, such as the home directory, password, etc. Some example usages of the usermod command are given below:

* Set /home2/carma as the new home directory for carma and move the old directory contents there (the -m option does the move):

$ usermod -d /home2/carma -m carma

* Set carma's initial (primary) group to carma12:

$ usermod -g carma12 carma

* Set a new password for carma (note that -p expects the password in encrypted form, as it appears in /etc/shadow; to set a plain-text password interactively, use the passwd command instead):

$ usermod -p <encrypted-password> carma

* Set bash as the default login shell for carma:

$ usermod -s /bin/bash carma

* Lock a user's password. This puts a "!" in front of the encrypted password for that user inside the /etc/shadow file, effectively disabling the password:

$ usermod -L carma

* Unlock a user's password. This removes the lock (!) from the password field for that user in /etc/shadow:

$ usermod -U carma

7. Creating User Groups

A group can be created using the "groupadd" command.

$ groupadd nobody

* When a group is added, the group info gets stored inside the file /etc/group, and the entry for the group nobody is as shown below:

nobody:x:99:

* In the entry above, 99 is the groupid of the group 'nobody'.

8. Deleting User Groups

A group can be deleted using the "groupdel" command. Deleting a group removes the group info from the /etc/group file.

$ groupdel nobody

9. Modifying User Groups

A user group can be modified using the 'groupmod' command. The groupmod command modifies the system account files to reflect the changes that are specified on the command line for a group. The two options available are:

* Change the group id of a group. Note that the gid specified should be unique.

$ groupmod -g <gid> <group-name>

e.g.:

$ groupmod -g 520 carma

* Change the group name of an existing group. For example, to change the group name carma to carma1, use the command line below.

$ groupmod -n carma1 carma

10. Setting Group Passwords and Manipulating Users' Groups

The password for a group can be set or changed using the 'gpasswd' command. The group password for the group carma can be set using the command line below.

$ gpasswd carma

* The password for the group 'carma' will be set inside the file /etc/gshadow. In normal cases, there is no group password set for any of the groups on a Linux machine.

* This command can also be used to add or delete users belonging to a specific group. The command line below will add carma to the group 'nobody':

$ gpasswd -a <user> <group>
$ gpasswd -a carma nobody

* Similarly, the command line below will delete the user carma from the group nobody:

$ gpasswd -d <user> <group>
$ gpasswd -d carma nobody

11. Finding out a User's Groups

The 'groups' command can be used to print the groups to which a user belongs.

$ groups carma

A quick way to verify the results of the commands in this section is shown below.
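The id command, together with a quick grep of the account files, lets you cross-check what the user and group commands above actually did. The values shown here simply mirror the earlier examples and will differ on a real system:

$ id carma                 # show carma's UID, primary GID and supplementary groups
uid=504(carma) gid=509(carma) groups=509(carma),99(nobody)
$ grep carma /etc/passwd   # the raw account entry
carma:x:504:509::/home/carma:/bin/bash
$ grep nobody /etc/group   # confirm that 'gpasswd -a carma nobody' added carma to the group
nobody:x:99:carma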

2.3.2. Some basic Linux commands

1. ls : The "ls" (list) command lists the contents of the current directory. When used from a terminal, it generally uses colours to differentiate between directories, images, executable files etc., and the prompt reappears at the end.

Try out the following variations of the ls command to see different forms of output:

$ ls -l

Produces a "long format" directory listing. For each file or directory, it also shows the owner, group, size, date modified and permissions.

$ ls -a

Lists all the files in the directory, including hidden ones. In Linux, files whose names start with a period (.) are usually not shown.

$ ls -R

Lists the contents of each subdirectory, their subdirectories etc. (recursive).

With the "ls" command, if you don't specify any parameter, it will list the contents of the current directory. However, you could instead give it a parameter specifying what to list. For example, if you type in "ls /usr", it will list the contents of the "/usr" directory.

2. man : Almost every command in Linux has online help available from the command line, through the "man" (manual) command. Type in "man ls". The resulting page will describe the command, then describe every option, then give further details about the program, the author, and so on.

$ man ls

3. info : Another source of online help is the "info" command. Some Linux commands may supply both "man" and "info" documentation. As a general rule, "info" documentation is more verbose and descriptive, like a user guide, while "man" documentation is more like a reference manual, giving lists of options and parameters and the meaning of each.

$ info ls

The method for moving around in "info" is quite similar to "man" - you can also use the arrows and PgUp/PgDn to move, and Q to quit.
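A related trick worth knowing (available on essentially all Linux systems, where it is equivalent to the apropos command): man -k searches the one-line descriptions of all manual pages for a keyword, which helps when you know what you want to do but not the command's name. Sample output, trimmed for illustration:

$ man -k calendar
cal (1)              - displays a calendar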

4. --help : Most (but not all) programs have a --help option, which displays a very short description of their main options and parameters.

$ ls --help

5. date : Displays the current date and time, or changes the system date and time to the specified value.

$ date

To set the date and time to "Sun Oct 6 16:55:16", use the syntax:

$ date --set='Sun Oct 6 16:55:16 EDT 2002'

6. cal : The 'cal' command displays a simple calendar; if no arguments are specified, the current month is displayed.

$ cal
$ cal -y

7. who : The who command displays info about the users currently logged onto the system: login name, terminal line, login time, and remote hostname or X display.

$ who
$ who -m, who -u, who -H

8. who am i : Displays info about yourself: your login name, terminal name, and the date and time of login.

$ who am i

9. tty : The tty (teletype) command displays the name of the terminal you are working on.

$ tty
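For reference, typical output of tty and who looks like the following (purely illustrative; terminal names, users, hosts and times will differ on your system):

$ tty
/dev/pts/0
$ who
root     tty1         Oct  6 09:15
carma    pts/0        Oct  6 16:58 (192.168.1.20)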

10. cd : cd is the command for moving around in the directory structure; it is short for "change directory".

$ cd /home/carma

* Using cd with no argument will return you to your own home directory.
* To move back up to the next higher (or parent) directory, use the command "cd .."

11. pwd : The pwd command displays the absolute pathname of the present working directory.

$ pwd

12. mkdir : Creates a directory under the current working directory or in the path specified.

$ mkdir /root/sample

13. rmdir : Removes the specified directory, but only if the directory is empty. The directory to be removed should not be under the current working directory.

$ rmdir /root/sample

14. cp : The cp command copies the files listed on the command line to the file or directory given as the last argument. Notice that we use "." to refer to the current directory.

$ cp /etc/shells .
$ cp /home/carma/test /root/test
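cp on its own copies regular files; to copy an entire directory tree you can add the standard -r (recursive) option. A small example (the destination path here is made up purely for illustration):

$ cp -r /home/carma/testfolder /root/testfolder-backup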

15. mv : The mv command moves files rather than copying them. Note that it effectively renames the file or folder.

$ mv /home/carma/test /home/carma/testfolder

16. rm : The rm command is used to delete a file and stands for "remove".

$ rm file1 file2

To delete files recursively and forcefully from a directory, you can use:

$ rm -rf /home/carma/testfolder

17. more : The more command is used for viewing the contents of files one screenful at a time. While using more, press Space to display the next page of text and b to display the previous page. There are other commands available in more as well; these are just the basics. Pressing q will quit more.

$ more /etc/services

18. file : Displays the file type by examining the file's contents, with a very high degree of accuracy - the type of file, like ASCII text etc.

$ file filename

19. locate : 'locate file-or-directory-name' searches for a file or directory across the entire hard disk and displays all the places it is found. You can also specify a partial name or a section of the entire path.

$ locate cron

20. cat : cat reads data from all of the files specified on the command line and sends this data directly to stdout. Therefore, using this command you can view the contents of a text file from the command line, without having to invoke an editor. cat is short for "concatenate", and you can use it with the -n option, which prints the file contents with numbered output lines.

$ cat /root/test

$ cat -n /var/log/messages

21. touch : 'touch filename' changes the date/time stamp of the file to the current time, or creates an empty file if the file does not exist.

$ touch /home/carma/testfile

You can change the stamp to any date using touch:

$ touch -t 200501311759.30 /home/carma/testfile (year 2005, January, day 31, time 17:59:30)

There are three date/time values associated with every file on an ext2 filesystem:

- the time of last access to the file (atime)
- the time of last modification to the file (mtime)
- the time of last change to the file's inode (ctime)

touch changes the first two to the value specified, and the last one is always set to the current system time.

22. tail : The tail command may be used to view the end of a file, and you can specify the number of lines you want to view. If no number is specified, it will output the last 10 lines by default.

$ tail /var/log/messages
$ tail -100 /var/log/messages
$ tail -f /var/log/messages (The "-f" option means "don't quit at the end of the file; 'follow' the file as it grows and end when the user presses Ctrl-c".)

23. head : head prints the beginning of a text file to standard output.

$ head /var/log/messages - Prints the first 10 lines of /var/log/messages.
$ head -100 /var/log/messages - Prints the first 100 lines instead of the first 10.

24. last : Using last you can find out who has recently used the system, which terminals they used, and when they logged in and out.


$ last
To find out when a particular user last logged in to the system, give his username as an argument:
$ last carma
NOTE: The last tool gets its data from the system file `/var/log/wtmp'; the last line of output tells how far back this file goes. Sometimes the output will go back for several weeks or more.

25. chsh : The chsh command is used to change a user's login shell. chsh will accept the full pathname of any executable file on the system; however, it will issue a warning if the shell is not listed in the /etc/shells file. A sample chsh session is given below which changes the shell for the user carma to /bin/bash.
$ chsh carma
Changing shell for carma.
New shell [/usr/local/cpanel/bin/noshell]: /bin/bash
Shell changed.

26. lynx : lynx is a text-based browser for accessing web pages on the internet from the Linux command line interface. The general syntax for accessing the yahoo website using lynx is given below.
$ lynx http://www.yahoo.com

27. w : An extension of the who command that displays details of all users currently on the server. This is a very important system admin tool for tracking who is on the server and what processes they are running.
The default setting for the w command is to show the long list of process details. You can also run w -s to review a shorter listing, which is helpful when you have a lot of users on the server.
$ w
$ w -s
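For comparison, the plain who command (which w extends) simply lists the logged-in users, their terminals and their login times:
$ who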


28. wget : wget is a free utility for non-interactive download of files from the Web. It supports the HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.
$ wget http://mirrors.ccs.neu.edu/Apache/httpd/httpd-2.0.54.tar.gz

29. su : The su (substitute user) command is used to change the effective user id and group id to those of another USER, thereby allowing one user to temporarily become another user. If no USER is given, the default is `root', the super-user. If USER has a password, su prompts for the password unless run by a user with an effective user id of zero (the super-user).
$ su
OR
$ su root (to change to the root user)
$ su carma

2.4. The X Window System

   * The X Window System, commonly called "X", is a graphical windowing interface that comes with all popular Linux distributions.
   * X is available for many Unix-based operating systems; the version of X that runs on Linux systems with x86-based CPUs is called "XFree86". The current version of X is 11, Revision 6 -- or "X11R6".
   * All the command-line tools and most of the applications that you can run in the console can run in X; also available are numerous applications written specifically for X.

2.4.1. Running X

When you start X, you should see a mouse pointer appear on the screen as a large, black "X". If your X is configured to start any tools or applications, they should each start and appear in individual windows.


   * In X, each program or application runs in its own window. Each window has a decorative border on all four sides, called the window border; L-shaped corners, called frames; a top window bar, called the title bar, which displays the name of the window; and several title bar buttons on the left and right sides of the title bar.
   * The entire visible work area, including the root window and any other windows, is called the desktop. The box in the lower right-hand corner, called the pager, allows you to move about a large desktop.
   * A window manager controls the way windows look and are displayed -- the window dressing, as it were -- and can provide some additional menu or program management capabilities. There are many different window managers to choose from, with a variety of features and capabilities.
   * Window managers typically allow you to customize the colors and borders that are used to display a window, as well as the type and location of buttons that appear on the window.
   * More recently, desktop environments have become popular. These are collections of applications that run on top of the window manager (and X), with the purpose of giving your X session a standardized "look and feel"; these suites normally come with a few basic tools such as clocks and file managers.
   * The two popular ones are GNOME and KDE, and they generate a lot of press these days because of their graphical nature.

2.4.1.1). Starting X

There are two ways to start X. Some systems run the X Display Manager, xdm, when the system boots, at which point a graphical xdm login screen appears; you can use this to log in directly to an X session. On systems not running xdm, the virtual console reserved for X will be blank until you start X by running the startx command.

   * To start X from a virtual console, type:


$ startx
   * To run startx and redirect its output to a log file, type:
$ startx >$HOME/startx.log 2>&1 [RET]
   * Both of these examples start X on the seventh virtual console, regardless of which console you are at when you run the command -- your console switches to X automatically.
   * You can always switch to another console during your X session (using Alt-Ctrl-F1, Alt-Ctrl-F2 and so on, up to Alt-Ctrl-F6). The second example writes any error messages or output of startx to a file called `startx.log' in your home directory.
   * On some systems, X starts with 8-bit color depth by default. Use startx with the special `-bpp' option to specify the color depth. Follow the option with a number indicating the color depth to use, and precede the option with two hyphen characters (`--'), which tells startx to pass the options which follow it to the X server itself.
   * To start X from a virtual console and specify 16-bit color depth, type:
$ startx -- -bpp 16 [RET]

2.4.1.2). Stopping X

   * To end an X session, you normally choose an exit X option from a menu in your window manager.
   * If you started your X session with startx, these commands will return you to a shell prompt in the virtual console where the command was typed. If, on the


other hand, you started your X session by logging in to xdm on the seventh virtual console, you will be logged out of the X session and the xdm login screen will appear; you can then switch to another virtual console or log in to X again.
   * To exit X immediately and terminate all X processes, press the Ctrl-Alt-Backspace combination. You'll lose any unsaved application data, but this is useful when you cannot exit your X session normally -- in the case of a system freeze or other problem.

2.4.2. Running a Program in X

   * Programs running in an X session are called X clients. (The program that manages the display itself is called the X server.)
   * To run a program in X, you start it as an X client -- either by selecting it from a menu, or by typing the command to run in an xterm shell window (see Running a Shell in X).
   * To run an X client from the start menu, click the left mouse button to select the client's name from the submenus.
   * You can also start a client by running it from a shell window -- useful for starting a client that isn't on the menu, or for when you want to specify options or arguments. When you run an X client from a shell window, it opens in its own window; run the client in the background to free the shell prompt in the shell window.
   * To run a digital clock, or the opera web browser, from a shell window, type:
$ xclock -digital &
$ opera &
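When a client is started in the background like this, normal shell job control applies; a quick sketch (the job number %1 is just an example):
$ xclock -digital &
$ jobs (lists the background jobs in this shell)
$ kill %1 (closes the xclock started above)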


2.4.3. Command Line Options to X Client

2.4.3.1). Specifying Window Size and Location

   * Specify a window's size and location by giving its window geometry with the `-geometry' option. Four fields control the width and height of the window, and the window's distance ("offset") from the edges of the screen. It is specified in the form:
-geometry WIDTHxHEIGHT+XOFF+YOFF
   * To start a small xclock, 48 pixels wide and 48 pixels high, type:
$ xclock -geometry 48x48
   * To start an xclock with a width of 48 pixels and the default height, type:
$ xclock -geometry 48
   * To start an xclock with a height of 48 pixels and the default width, type:
$ xclock -geometry x48
   * You can give positive or negative numbers for the XOFF and YOFF fields. Positive XOFF values specify a position from the left of the screen; negative values specify a position from the right. If YOFF is positive, it specifies a position from the top of the screen; if negative, it specifies a position from the bottom of the screen. When giving these offsets, you must specify values for both XOFF and YOFF.
   * To start an xclock with a width of 120 pixels, a height of 100 pixels, an x offset of 250 pixels from the right side of the screen, and a y offset of 25 pixels from the top of the screen, type:
$ xclock -geometry 120x100-250+25

2.4.3.2). Specifying Window Colors


The window colors available in your X session depend on your display hardware and the X server that is running. The xcolors tool will show all colors available on your X server and the names used to specify them.

   * To list the available colors, type:
$ xcolors [RET]
Press [Q] to exit xcolors.
   * To specify a color to use for the window background, window border, and text or graphics in the window itself, give the color name as an argument to the appropriate option: `-bg' for background color, `-bd' for window border color, and `-fg' for foreground color.
   * To start an xclock with a light blue window background, type:
$ xclock -bg lightblue [RET]

2.4.3.3). Running a Shell in X

   * Use xterm to run a shell in a window. You can run commands in an xterm window just as you would in a virtual console; a shell in an xterm acts the same as a shell in a virtual console.
   * Unlike a shell in a console, you can cut and paste text from an xterm to another X client (see Selecting Text).
   * To scroll through text that has scrolled past the top of the screen, type [Shift]-[PgUp]. The number of lines you can scroll back to depends on the value of the scrollback buffer, specified with the `-sl' option; its default value is 64.


   * NOTE: xterm is probably the most popular terminal emulator X client, but it is not the only one; others to choose from include wterm and rxvt, all with their own special features -- try them all to find one you like.

3. FILE MANIPULATION AND MANAGEMENT

3.1. Files and Directories

3.1.1. Naming Files and Directories

   * File names can consist of upper and lowercase letters, numbers, periods (`.'), hyphens (`-'), and underscores (`_'). File names are also case sensitive. Directory names follow the same conventions as used with files.
   * Linux does not force you to use file extensions, but it is convenient and useful to give files proper extensions, since they will help you to identify file types at a glance.
   * Some commonly used file extensions are .html, .jpg, .xml, .php, .cgi, .pl, .gz

3.1.2. Making an Empty File/Directory

   * You can create an empty file using the touch command. If a file does not exist, touch creates it.
$ touch newfile
   * You can use mkdir to make a new directory, giving the path name of the new directory as an argument.
$ mkdir /home/carma/public_html/test123
   * You can make a directory tree using mkdir with the '-p' option.
$ mkdir -p work/support/security


This makes a `security' subdirectory in the directory called `support', which in turn is in a directory called `work' in the current directory; if the `support' or the `work' directories do not already exist, they are made as well.

3.1.3. Changing Directories

   * You can change directories using the cd command.
$ cd /home/carma
   * Using just "cd" will take you to your home directory.
$ cd
   * Use "cd -" to return to the directory you were last in.
$ cd -
   * Every directory has two special files whose names consist of one and two periods. `..' refers to the parent of the current working directory, and `.' refers to the current working directory itself. If the current working directory is `/home/carma', you can use `.' to specify `/home/carma' and `..' to specify `/home'. Furthermore, you can specify the `/home/test' directory as ../test.

3.2. File Permissions

3.2.1. Concept of File Permissions and Ownership

Because there is typically more than one user on a Linux system, Linux provides a mechanism known as file permissions, which protects user files from tampering by other users. This mechanism lets files and directories be "owned" by a particular user. For example, because the user carma created the files in his home directory, carma owns those files and has access to them.

   * Sharing files between groups : Linux also lets files be shared between users and groups of users. If carma desired, he could cut off access to his files so that no other user could access them. However, on most systems the default is to allow other users to read your files but not modify or delete them in any way.


   * Every file is owned by a particular user. However, files are also owned by a particular group, which is a defined group of users of the system.
   * Every user is placed into at least one group when that user's account is created. However, the system administrator may grant the user access to more than one group.
   * User Groups : Groups are usually defined by the type of users who access the machine. For example, on a university Linux system users may be placed into the groups student, staff, faculty or guest. There are also a few system-defined groups (like wheel and admin) which are used by the system itself to control access to resources -- very rarely do actual users belong to these system groups. Each member of a group can work with the group's files and make new files that belong to the group. The system administrator can add new groups and give users membership of the different groups.
   * File permissions fall into three main divisions: read, write, and execute. These permissions may be granted to three classes of users: (1) the owner of the file, (2) the group to which the file belongs, and (3) all users, regardless of group.
   * Read permission lets a user read the contents of the file, or in the case of directories, list the contents of the directory (using ls).
   * Write permission lets the user write to and modify the file. For directories, write permission lets the user create new files or delete files within that directory.
   * Finally, execute permission lets the user run the file as a program or shell script (if the file is a program or shell script). For directories, having execute permission lets the user cd into the directory in question.

3.2.2. Interpreting file permissions


Using the ls command with the -l option displays a ``long'' listing of the file, including the file permissions.

$ ls -l testfile
-rw-r--r-- 1 carma users 505 Mar 13 19:05 testfile

The first field in the listing represents the file permissions. The third field is the owner of the file (carma) and the fourth field is the group to which the file belongs (users). The last field is the name of the file (testfile). We'll cover the other fields later.

   * This file is owned by carma, and belongs to the group users. The string -rw-r--r-- lists, in order, the permissions granted to the file's owner, the file's group, and everybody else.
   * The first character of the permissions string (``-'') represents the type of file. A ``-'' means that this is a regular file (as opposed to, say, a directory, which is denoted by ``d'', or a device file).
   * The next three characters (``rw-'') represent the permissions granted to the file's owner, carma. The ``r'' stands for ``read'' and the ``w'' stands for ``write''. Thus, carma has read and write permission to the file testfile.
   * The next three characters (``r--'') represent the group's permissions on the file. The group that owns this file is users. Because only an ``r'' appears here, any user who belongs to the group users may read this file.
   * The last three characters, also (``r--''), represent the permissions granted to every other user on the system (other than the owner of the file and those in the group users). Again, because only an ``r'' is present, other users may read the file, but not write to it or execute it.

Here are some other examples of permissions:
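(The permission strings below are illustrative examples, not taken from any particular listing.)

-rwxr-xr-x   The owner may read, write and execute the file; the group and everyone else may read and execute it (typical for programs and scripts).
-rw-------   Only the owner may read and write the file; nobody else has any access.
drwxr-xr-x   A directory; the owner may list it and create or delete entries in it, while the group and others may only list and enter it.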


3.2.3. File Permission Dependencies

The permissions granted to a file also depend on the permissions of the directory in which the file is located. For example, even if a file is set to -rwxrwxrwx, other users cannot access the file unless they have read and execute access to the directory in which the file is located.

   * For example, if carma wanted to restrict access to all of his files, he could set the permissions on his home directory /home/carma to -rwx------. In this way, no other user has access to his directory, or to any files and directories within it, and carma doesn't need to worry about the individual permissions on each of his files.
   * In short, to access a file at all, you must have execute access to all directories along the file's pathname, and read (or execute) access to the file itself.
   * Default permissions : The default set of permissions given to files is usually -rw-r--r--, which depends on the umask in effect, as discussed in the section below. The usual set of permissions given to directories is drwxr-xr-x, which lets other users look through your directories, but not create or delete files within them.

3.2.3.1). User file-creation mode mask

   * The umask (UNIX shorthand for "user file-creation mode mask") is an octal number that UNIX uses to determine the permissions for newly created files.
   * The umask specifies the permissions you do not want given by default to newly created files and directories.
   * Depending on the umask value in effect, the permissions of a newly created file or directory can vary.
   * How umask is used to set and determine the default file creation permissions on the system is explained below.


      o Default (maximum) permissions are 777 for executable files and directories, and 666 for text files.
      o The permissions for a newly created file are calculated by subtracting the umask value from the default permission value for the type of file being created.
      o An example for a text file is shown below with a umask value of 022:

        666   Default permissions for a text file
       -022   Minus the umask value
       ----
        644   Allowed permissions

      o Similarly, for a directory the default permissions will be 755, as calculated below:
        777 - 022 (umask value) = 755

   * The command line to set the umask in your current shell is:
$ umask 022
Note that the umask is a property of your shell session (process), not of a particular directory.
   * The most common umask setting is 022. The /etc/profile script is where the umask command is usually set for all users.

3.2.4. Changing permissions

The command chmod is used to set the permissions on a file. Only the owner of a file or the root user may change the permissions on that file. The syntax of chmod is

chmod {a,u,g,o}{+,-}{r,w,x} filenames

Briefly, you first specify one or more of all (a), user (u), group (g), or other (o). Then you specify whether you are adding rights (+) or taking them away (-). Finally, you specify one or more of read (r), write (w), and execute (x).
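Some sample commands are given below (standard symbolic-mode chmod invocations; the file name testfile is just a placeholder):
$ chmod u+x testfile (gives the owner execute permission)
$ chmod go-w testfile (removes write permission from the group and others)
$ chmod a+r testfile (gives everyone read permission)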


There is another way in which you can specify file permissions. The permission bits r, w and x are each assigned a number:

r = 4, w = 2, x = 1

Now you can use numbers which are the sum of the various permission bits. For example, rwx becomes 4+2+1 = 7, and rx becomes 4+1 = 5. The chmod command then becomes

$ chmod xyz filename

where x, y and z are numbers representing the permissions of user, group and others respectively. Each number is the sum of the permissions to be set, calculated as given above.

$ chmod 644 testfile
(6 = 4+2 = rw for the user, 4 = r for the group, 4 = r for others)

3.2.5. Understanding File Permissions Beyond "rwx"

3.2.5.1). 's' bit or 'Set User ID' / SUID and 'Set Group ID' / SGID

'Set User ID' / SUID bit

a) How to recognise it : If we change the permissions of a file to 4777 and list it back (in long format) the permissions will be shown as "-rwsrwxrwx". We can now see that the SUID bit for this file has been set by the presence of the "s". That's fine, but now we can't tell if the user execute bit is set, can we? Well, actually, the case gives it away. A lower case "s" means that the execute bit is set; an upper case "S" means that it is clear. If we change the permissions of our file to 4677, the permissions will be shown as "-rwSrwxrwx".

b) What is it for?
The SUID bit only comes into play if the file has execute permission. When such a file is executed, the resulting process takes on the effective user ID of the owner of that file.
For example, say we have a program file owned by user "carma" with permissions "rwsrwxrwx". This file can be run by any user; however, the resulting process will have all the same access capabilities as carma. If it so chooses, it can read all the files that carma can read, it can write to all the files that carma can write to, and it can execute all the files that carma can execute.


c) How to set it?
$ chmod 4nnn <filename>
or
$ chmod u+s <filename>

d) Some points worth remembering:
1) Only make a file root-owned and SUID if it absolutely has to be.
2) Keep up to date with the security fixes.

'Set Group ID' or SGID bit

a) How to recognise it : A file with permissions set to 2777 will be displayed as "-rwxrwsrwx". As before, a lower case "s" signifies that the group execute bit is set.

b) What is it for?
On executable files, SGID has a similar function to SUID, but as you might expect, the resulting process takes on the effective group ID of the file. When applied to directories, SGID takes on a special meaning: any files created in such a directory will take on the same group ID as that of the directory, regardless of the group ID of the user creating the file.
For example, let's say we have a directory with permissions "drwxrwsrwx" owned by the group "rockers", and a user whose main group is "carma" comes along and creates a file in this directory. The resulting file will have a group ID of "rockers", not "carma", as would be the case in a normal directory. On non-executable files and non-directories, the SGID bit has no effect.

c) How to set it?
$ chmod 2nnn <filename>     ie chmod 2755 /root/testdir


or
$ chmod g+s <filename>

3.2.5.2). 't' bit or 'Sticky' bit

a) How to recognise it : A file with permissions set to 1777 will be displayed as "-rwxrwxrwt". A lower case "t" signifies that the execute bit for "others" is set (an upper case "T" means it is clear).

b) What is it for?
On Linux systems, the sticky bit only has an effect when applied to directories. A directory with this bit set will allow users to rename or remove only those files which they own within that directory (other directory permissions permitting). It is usually found on tmp directories and prevents users from tampering with one another's files.

c) How to set it?
The sticky bit can be set as follows:
$ chmod 1nnn <filename>     ie chmod 1755 /root/testfile.html
or
$ chmod +t <filename>

3.2.5.3). The Other Mysterious Letters - "d", "l", "b", "c", "p"

You may have come across these little fellows in your travels through the file system. Here is a brief explanation of each of them.

   * d - Example "drwxrwxrwx". You probably haven't managed to get this far without knowing that this is a directory. It is mentioned here for completeness.
   * l - Example "lrwxrwxrwx". This is a symbolic link. A symbolic link is a file that links to another file and can be used as an alternative way of accessing that file. The permissions on a symbolic link are irrelevant, as it is the permissions on the target file that count.
   * b and c - Examples "brwxrwxrwx" and "crwxrwxrwx". These are found on special files called device files, located in the /dev directory (although there


is nothing to stop them from being created elsewhere). "b" refers to block devices (such as hard drives); "c" refers to character devices (such as printers).
   * p - Example "prwxrwxrwx". This is a special type of file called a "pipe". It allows two processes to pass data: one places data into the pipe, the other takes it out. This type of named pipe file is not often used.

3.2.5.4). Setting SUID, SGID, sticky bit on a single file

As with the read, write and execute permissions, it is possible to mix and match SUID, SGID and sticky bit settings when using the octal style parameter to chmod. An extreme example would be:

$ chmod 7777 myfile

but there you have it, that's a file with all bits set.

# ls -la myfile
-rwsrwsrwt 1 root root 0 Feb 26 16:39 myfile

3.3. Managing file links

Links let you give a single file more than one name. Files are actually identified by the system by their inode number, which is just the unique file system identifier for the file. A directory is actually a listing of inode numbers with their corresponding filenames. Each filename in a directory is a link to a particular inode.

3.3.1. Hard links

The ln command is used to create multiple links for one file. For example, let's say that you have a file called foo in a directory. Using ls -i, you can look at the inode number for this file.

$ ls -i foo
639098 foo

foo has an inode number of 639098 in the file system.


   * You can create another link to foo, named foolink, as follows:
$ ln foo foolink
   * With ls -i, you can check the inodes for these two files, and you will see that they have the same inode.
$ ls -i foolink
639098 foolink
Now, specifying either foo or foolink will access the same file. If you make changes to foo, those changes appear in foolink as well. For all purposes, foo and foolink are the same file.
   * These links are known as hard links because they create a direct link to an inode. Note that you can hard-link files only when they're on the same file system; symbolic links (explained below) don't have this restriction.
   * When you delete a file with rm, you are actually only deleting one link to a file. If you use the command
$ rm foo
then only the link named foo is deleted; foolink will still exist. A file is only truly deleted on the system when it has no links to it. Usually, files have only one link, so using the rm command deletes the file. However, if a file has multiple links to it, using rm will delete only a single link; in order to delete the file, you must delete all links to the file.
   * The command ls -l displays the number of links to a file. The second column in the listing, ``2'', specifies the number of links to the file.
$ ls -l foo foolink
-rw-rw-r-- 2 carma carma 0 Feb 26 13:11 foo
-rw-rw-r-- 2 carma carma 0 Feb 26 13:11 foolink


   * If you do 'ls -lad' on a directory, even if the directory is empty, it will show that there are 2 links present. This is because every directory contains at least two hard links: ``.'' (a link pointing to itself), and ``..'' (a link pointing to the parent directory). The root directory's (/) ``..'' link just points back to /. (In other words, the parent of the root directory is the root directory itself.)
$ ls -lad testfile/
drwxrwxr-x 2 carma carma 4096 Feb 26 13:22 testfile/

3.3.2. Symbolic Links

Symbolic links, or symlinks, are another type of link, different from hard links. A symbolic link lets you give a file another name, but doesn't link the file by inode.

The command ln -s creates a symbolic link to a file:
$ ln -s foo foolink
This will create a symbolic link named foolink that points to the file foo.
$ ls -i foo foolink
639098 foo
639099 foolink
Unlike a hard link, the symlink gets its own inode (the number 639099 above is just an illustration); it refers to foo by name rather than by inode.
   * Using ls -l, we see that the file foolink is a symlink pointing to foo.
$ ls -l foo foolink
-rw-rw-r-- 1 carma carma 0 Feb 26 13:11 foo
lrwxrwxrwx 1 carma carma 3 Feb 26 14:54 foolink -> foo
   * The file permissions on a symbolic link are not used (they always appear as rwxrwxrwx). Instead, the effective permissions are determined by the permissions on the target of the symbolic link (in our example, the file foo).


   * Functionally, hard links and symbolic links are similar, but there are differences. For one thing, you can create a symbolic link to a file that doesn't exist; the same is not true for hard links. Symbolic links are processed by the kernel differently than hard links are, which is just a technical difference but sometimes an important one. Symbolic links are helpful because they identify the file they point to; with hard links, there is no easy way to determine which files are linked to the same inode.

3.4. File ownership and Attributes

Every file belongs to both a user and a group -- usually to the user who created it and to the group the user was working in at the time (which is almost always the user's login group). File ownership determines the type of access users have to particular files.

3.4.1. Determining the Ownership of a File

Use ls with the `-l' option to list the owner and group name for a file. The name of the user who owns the file appears in the third column of the output, and the name of the group that owns the file appears in the fourth column, as already discussed in the previous sections.
$ ls -l

3.4.2. Changing the Ownership of a File

   * To change the ownership of a file, use the chown command.
$ chown root testfile
   * To change the group ownership of the file `testfile' to root, use
$ chgrp root testfile
   * Using the `-R' option, you can recursively change the ownership of a directory and all of the contents inside it.
$ chown -R root testdir
$ chgrp -R root testdir
$ chown -R root.root testdir

3.4.3. Determining the advanced attributes of a file


lsattr lists the advanced file attributes on a second extended (ext2) filesystem. On an ext2 file system, it is possible to use these attributes to protect files. Some of the attributes are given below.

   * 'append-only' or 'a' attribute : A file with this attribute may be appended to, but may not be deleted, and the existing contents of the file may not be overwritten. If a directory has this attribute, any files or directories within it may be modified as normal, but no files may be deleted.
   * 'immutable' or 'i' attribute : This attribute can only be set or cleared by root. A file or directory with this attribute may not be modified, deleted, renamed, or (hard) linked.
   * 'undeletable' or 'u' attribute : If a file with this attribute is deleted, instead of its contents actually being reused, it is merely moved to a `safe location' for deletion at a later date.

Please go through "man chattr" to find out more about the attributes that can be set.

# lsattr test.html
----ia------- test.html

You can see that the file test.html has the immutable and append-only attributes set on it.

3.4.4. Changing advanced Attributes of a File

The attributes set on a file can be manipulated using the 'chattr' command. Please note that you need to be the root user to change these attributes on a file.

   * The 'a' or append-only attribute can be set using
$ chattr +a /root/testfile
or removed using
$ chattr -a /root/testfile


   * The 'i' or immutable attribute can be set using
$ chattr +i /root/testfile
or removed using
$ chattr -i /root/testfile
   * 'chattr -R' recursively changes the attributes of directories and their contents. Symbolic links encountered during recursive directory traversals are ignored.
$ chattr -R +ia /root/testdir
This sets the i and a attributes on the directory /root/testdir and on all contents inside it.

3.5. Finding Files

Sometimes you will need to find files on the system that match given criteria, such as name and file size. This section will show you how to find a file when you know only part of the file name, and how to find a file whose name matches a given pattern. You will also learn how to list files and directories by their size, and how to find the location of commands.

3.5.1. Finding All Files That Match a Pattern

   * The simplest way to find files is with the locate command. locate outputs a list of all files on the system that match the pattern, giving their full path names. For example, you can list all files with the text `audio' somewhere in their full path name, or all files ending with `ron'.
   * To find all the files on the system that have the text `audio' anywhere in their name, type:
$ locate audio
   * To find all the files on the system whose file names end with the text `ron', type:
$ locate *ron


   * To find all hidden "dotfiles" on the system, type:
$ locate /.
NOTE: locate searches are case sensitive by default; use `locate -i' for a case-insensitive search.

3.5.2. Finding Files in a Directory Tree

The 'find' command can be used to find specific files in a particular directory tree, by specifying the name of the directory tree to search, the criteria to match, and -- optionally -- the action to perform on the found files.

You can specify a number of search criteria, and format the output in various ways; the following sections include recipes for the most commonly used find commands, as well as a list of find's most popular options.

3.5.2.1). Finding Files in a Directory Tree by Name

Use find to find files in a directory tree by name. Give the name of the directory tree to search through, and use the `-name' option followed by the name you want to find.

   * To list all files on the system whose file name is `top', type:
$ find / -name top
This command will search all directories on the system to which you have access; if you don't have execute permission for a directory, find will report that permission is denied to search the directory.
   * The `-name' option is case sensitive; use the similar `-iname' option to match names regardless of case.
$ find / -iname top
   * To list all files in your home directory tree that end in `.php', regardless of case, type:
$ find ~ -iname '*.php'


   * To list all files in the `/usr/share' directory tree with the text `lib' somewhere in their name, type:
$ find /usr/share -name '*lib*'
   * Use `-regex' in place of `-name' to search for files whose names match a regular expression, or a pattern describing a set of strings. To list all files in the current directory tree whose names have either the string `net' or `comm' anywhere in their file names, type:
$ find ./ -regex '.*\(net\|comm\).*'

3.5.2.2). Finding Files in a Directory Tree by Size

To find files of a certain size, use the `-size' option, following it with the file size to match. The file size takes one of three forms:

   * when preceded with a plus sign (`+'), it matches all files greater than the given size;
   * when preceded with a hyphen or minus sign (`-'), it matches all files less than the given size;
   * with neither prefix, it matches all files whose size is exactly as specified. (The default unit is 512-byte blocks; follow the size with `k' to denote kilobytes or `b' to denote bytes.)

Examples:

   * To list all files in the `/usr/local' directory tree that are greater than 10,000 kilobytes in size, type:
$ find /usr/local -size +10000k


   * To list all files in your home directory tree less than 300 bytes in size, type:
$ find ~ -size -300b
   * To list all files on the system whose size is exactly 42 512-byte blocks, type:
$ find / -size 42
   * Use the `-empty' option to find empty files -- files whose size is 0 bytes. This is useful for finding files that you might not need and can remove. To find all empty files in your home directory tree, type:
$ find ~ -empty

3.5.2.3). Finding Files in a Directory Tree by Modification Time

To find files last modified during a specified time, use find with the `-mtime' or `-mmin' options; the argument you give with `-mtime' specifies the number of 24-hour periods, and with `-mmin' it specifies the number of minutes.

   * To list the files in the `/usr/local' directory tree that were modified exactly 24 hours ago, type:
$ find /usr/local -mtime 1
   * To list the files in the `/usr' directory tree that were modified exactly five minutes ago, type:
$ find /usr -mmin 5
   * To list the files in the `/usr/local' directory tree that were modified within the past 24 hours, type:


$ find /usr/local -mtime -1
   * To find files in the `/etc' directory tree that are newer than the file `/etc/motd', type:
$ find /etc -newer /etc/motd

3.5.2.4). Finding Files in a Directory Tree by Owner

To find files owned by a particular user, give the username to search for as an argument to the `-user' option.

   * To list all files in the `/usr/local/fonts' directory tree owned by the user carma, type:
$ find /usr/local/fonts -user carma
   * The `-group' option is similar, but it matches group ownership instead of user ownership. To list all files in the `/dev' directory tree owned by the audio group, type:
$ find /dev -group audio

3.5.2.5). Running Commands on the Files You Find

You can also use find to execute a command you specify on each found file, by giving the command as an argument to the `-exec' option. If you use the string `{}' in the command, this string is replaced with the file name of the current found file when the command executes. Mark the end of the command with the string `;'.

   * To find all files in the `~/html/' directory tree with an `.html' extension, and output lines from these files that contain the string `organic', type:
$ find ~/html/ -name '*.html' -exec grep organic '{}' ';'
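The criteria described above can also be combined in a single find command, and `-exec' can then act on whatever matches; a sketch (the directory, owner and size are arbitrary examples):
$ find /home/carma -user carma -size +1000k -exec ls -lh '{}' ';'
This lists, in long format, the files under /home/carma that are owned by carma and are larger than 1,000 kilobytes.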


3.5.3. Finding Files in Directory Listings

3.5.3.1). Finding the Largest Files in a Directory

To find the largest files in a given directory, use ls to list its contents with the `-S' option, which sorts files in descending order by their size (normally, ls outputs files sorted alphabetically). Include the `-l' option to output the size and other file attributes.

To list the files in the current directory, with their attributes, sorted with the largest files first, type:
$ ls -lS

3.5.3.2). Finding the Smallest Files in a Directory

To list the contents of a directory with the smallest files first, use ls with both the `-S' and `-r' options, which reverses the sorting order of the listing.

To list the files in the current directory and their attributes, sorted from smallest to largest, type:
$ ls -lSr

3.5.3.3). Finding the Smallest Directories

To output a list of directories sorted by their size -- the size of all the files they contain -- use du and sort. du prints the size of each directory (in kilobytes on most systems) in the first column of its output; its `-S' option counts each directory's size separately, without including the sizes of its subdirectories. Since du itself does not sort by size, pipe its output to sort with the `-n' option, which sorts its input numerically.

Give the directory tree you want to examine as an argument to du. To output a list of the subdirectories of the current directory tree, sorted in ascending order by size, type:
$ du -S . | sort -n

3.5.3.4). Finding the Largest Directories

Use the `-r' option with sort to reverse the listing and output the largest directories first.


To output a list of the subdirectories in the current directory tree, sorted in descending order by size, type:
$ du -S . | sort -nr

3.5.3.5). Finding the Number of Files in a Listing

To find the number of files in a directory, use ls and pipe the output to `wc -l', which outputs the number of lines in its input.

To output the number of files in the current directory, type:
$ ls | wc -l

3.5.4. Finding Where a Command Is Located

Use 'which' to find the full path name of a tool or application from its base file name.

   * To find out whether perl is installed on your system, and, if so, where it resides, type:
$ which perl
/usr/bin/perl
In this example, which outputs `/usr/bin/perl', indicating that the perl binary is installed in the `/usr/bin' directory.
   * This is also useful for determining "which" binary would execute should you type the name, since some systems may have different binaries of the same file name located in different directories. In that case, you can use which to find out which one would execute.

3.6. Managing Files

3.6.1. Determining File Type and Format


When we speak of a file's type, we are referring to the kind of data it contains, which may include text, executable commands, or some other data; this data is organized in a particular way in the file, and this organization is called its format. For example, an image file might contain data in the JPEG image format, or a text file might contain unformatted text in the English language.

The file tool analyzes files and indicates their type and -- if known -- the format of the data they contain. Supply the name of a file as an argument to file, and it outputs the name of the file followed by a description of its format and type.

$ file Kids.tar.gz
Kids.tar.gz: gzip compressed data, was "Kids.tar", from Unix
$ file gaim-1.1.1-0.src.rpm
gaim-1.1.1-0.src.rpm: RPM v3 src i386 gaim-1.1.1-0
$ file testfile
testfile: empty
$ file xmas.gif
xmas.gif: GIF image data, version 87a, 445 x 329

3.6.2. Changing File Modification Time

Use touch to change a file's timestamp without modifying its contents. Give the name of the file to be changed as an argument. The default action is to change the timestamp to the current time.

   * To change the timestamp of file `services' to the current date and time, type:
$ touch services
   * To change the timestamp of file `services' to `17 May 1999 14:16', type:
$ touch -d '17 May 1999 14:16' services


   * To change the timestamp of file `services' to `14 May', type:
$ touch -d '14 May' services
   * To change the timestamp of file `services' to `14:16', type:
$ touch -d '14:16' services
NOTE: When only the date is given, the time is set to `0:00'; when no year is given, the current year is used.

3.6.3. Splitting a File into Smaller Ones

It's sometimes necessary to split one file into a number of smaller ones. The split tool copies a file, chopping up the copy into separate files of a specified size. It takes as optional arguments the name of the input file (using standard input if none is given) and the file name prefix to use when writing the output files (using `x' if none is given). The output files' names consist of the prefix followed by a group of letters: `aa', `ab', `ac', and so on -- so the default output file names would be `xaa', `xab', and so on.

   * To split 'flash_player_linux.tar.gz' into separate files of 200K each, whose names begin with the prefix `flash.tar.gz', type:
$ split -b200k flash_player_linux.tar.gz flash.tar.gz
$ ls -la
total 1960
-rw-r--r-- 1 root root 204800 Feb 28 13:17 flash.tar.gzaa
-rw-r--r-- 1 root root 204800 Feb 28 13:17 flash.tar.gzab
-rw-r--r-- 1 root root 204800 Feb 28 13:17 flash.tar.gzac
-rw-r--r-- 1 root root 204800 Feb 28 13:17 flash.tar.gzad
-rw-r--r-- 1 root root 168252 Feb 28 13:17 flash.tar.gzae
-rw-rw-r-- 1 root root 987452 Dec 27 07:14 flash_player_linux.tar.gz
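Because split only chops up a copy, the original can later be reconstructed by concatenating the pieces, in order, with cat; a brief illustration using the pieces created above (the output file name is arbitrary):
$ cat flash.tar.gza* > flash_rejoined.tar.gz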


3.6.4. Comparing Files

There are a number of tools for comparing the contents of files in different ways; these recipes show how to use some of them.

3.6.4.1). Determining Whether Two Files Differ using 'cmp'

Use cmp to determine whether or not two text files differ. It takes the names of two files as arguments, and if the files contain the same data, cmp outputs nothing. If, however, the files differ, cmp outputs the byte position and line number in the files where the first difference occurs.

$ cmp testfile samplefile
testfile samplefile differ: byte 2, line 1

3.6.4.2). Finding the Differences between Files using 'diff'

   * Use 'diff' to compare two files and output a difference report containing the text that differs between them. To compare two files and output a difference report, give their names as arguments to diff.
$ diff testfile samplefile
1,2c1
< this is a test file
<
---
> testing !!!!!!!!!!!!!
   * To better see the difference between two files, use sdiff instead of diff; instead of giving a difference report, it outputs the files in two columns, side by side, separated by spaces. Lines that differ in the files are separated by


`|'; lines that appear only in the first file end with a `<', and lines that appear only in the second file are preceded with a `>'.
$ sdiff testfile samplefile

3.6.4.3). Patching a File with a Difference Report

To apply the differences in a difference report to the original file compared in the report, use patch. It takes as arguments the name of the file to be patched and the name of the difference report file (or "patchfile"). It then applies the changes specified in the patchfile to the original file. This is especially useful for distributing different versions of a file -- small patchfiles may be sent across networks more easily than large source files.

   * To update the original file `manuscript.new' with the patchfile `manuscript.diff', type:
$ patch manuscript.new manuscript.diff
   * To update an entire directory with a patch file, use the syntax below:
$ patch -p1 < ../grsecurity.patch
   * The -p option specifies how much of the preceding pathname to strip. A number of 0 strips nothing and uses the pathname exactly as given in the patchfile; 1 strips everything up to and including the first `/'; each higher number after that strips another directory from the left. (If -p is not given at all, patch strips all leading directories, leaving just the base file name.)
For example, if you have a patchfile with a header such as:
+++ new/modules/kernel Tue Dec 19 20:05:41 2000
   * Using -p0 will expect, from your current working directory, to find a subdirectory called "new", then "modules" below that, then the "kernel" file below that.


   * Using -p1 will strip off the first level from the path and will expect to find (from your current working directory) a directory called "modules", then a file called "kernel". Patch will ignore the "new" directory mentioned in the header of the patchfile.
   * Using -p2 will strip off the first two levels from the path. Patch will expect to find "kernel" in the current working directory, and will ignore the "new" and "modules" directories mentioned in the header of the patchfile.

3.6.5. File Compression/Decompression

File compression is useful for storing or transferring large files. When you compress a file, you shrink it and save disk space. File compression uses an algorithm to change the data in the file; to use the data in a compressed file, you must first uncompress it to restore the original data (and the original file size).

3.6.5.1). Compression/Decompression Tools

In Red Hat Linux you can compress files with the compression tools gzip, bzip2, or zip.

   * The bzip2 compression tool is recommended because it provides the most compression and is found on most UNIX-like operating systems.
   * The gzip compression tool can also be found on most UNIX-like operating systems.
   * If you need to transfer files between Linux and other operating systems such as MS Windows, you should use zip because it is more compatible with the compression utilities available on Windows.

Compression Tool    File Extension    Uncompression Tool
gzip                .gz               gunzip
bzip2               .bz2              bunzip2
zip                 .zip              unzip

   * By convention, files compressed with gzip are given the extension .gz, files compressed with bzip2 are given the extension .bz2, and files compressed with zip are given the extension .zip.
   * Files compressed with gzip are uncompressed with gunzip, files compressed with bzip2 are uncompressed with bunzip2, and files compressed with zip are uncompressed with unzip.

Bzip2 and Bunzip2

To use bzip2 to compress a file, type the following command at a shell prompt:


$ bzip2 filename

The file will be compressed and saved as filename.bz2. To expand the compressed file, type the following command:

$ bunzip2 filename.bz2

The filename.bz2 is deleted and replaced with filename. You can give bzip2 several files on one command line, with a space between each one:

$ bzip2 file1 file2 file3

Note, however, that bzip2 compresses each file separately (producing file1.bz2, file2.bz2 and file3.bz2); it does not bundle several files or a directory into a single archive. To put multiple files or directories into one compressed file, combine tar with bzip2 or gzip, as described in the "Archiving Files at the Shell Prompt" section below.

Gzip and Gunzip

   * To use gzip to compress a file, type:
$ gzip filename
The file will be compressed and saved as filename.gz.
   * To expand the compressed file, type the command:
$ gunzip filename.gz
The filename.gz is deleted and replaced with filename.
   * To compress the files inside a directory tree, use gzip with the `-r' (recursive) option:
$ gzip -r /usr/local/share
Like bzip2, gzip compresses each file it finds individually (each one is replaced by a .gz version); it does not create a single archive containing them all.
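A quick illustration of this per-file behaviour (file1 and file2 are hypothetical files in the current directory):
$ ls
file1  file2
$ gzip file1 file2
$ ls
file1.gz  file2.gz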


Zip and Unzip

   * To compress a file with zip, type the following command:
$ zip -r filename.zip filesdir
Here filename.zip represents the file you are creating and filesdir represents the directory you want to put in the new zip file. The -r option specifies that you want to include all files contained in the filesdir directory recursively.
   * To extract the contents of a zip file, type the following command:
$ unzip filename.zip
   * You can use zip to compress multiple files and directories at the same time by listing them with a space between each one:
$ zip -r filename.zip file1 file2 file3 /usr/local/share

3.6.5.2). Archiving Files at the Shell Prompt

A tar file is a collection of several files and/or directories in one file. This is a good way to create backups and archives. Some of the options used with the tar command are:

-c   Create a new archive.
-f   When used with the -c option, use the specified filename for the creation of the tar file; when used with the -x option, unarchive the specified file.
-t   Show the list of files in the tar file.


-v   Show the progress of the files being archived.
-x   Extract files from an archive.
-z   Compress the tar file with gzip.
-j   Compress the tar file with bzip2.

   * To create a tar file, type:
$ tar -cvf filename.tar directory/file
   * You can tar multiple files and directories at the same time by listing them with a space between each one:
$ tar -cvf filename.tar /home/carma/public_html /home/carma/www
The above command places all the files in the public_html and www subdirectories of /home/carma in a new file called filename.tar in the current directory.
   * To list the contents of a tar file, type:
$ tar -tvf filename.tar


   * To extract the contents of a tar file, type:
$ tar -xvf filename.tar
This command does not remove the tar file, but it places copies of its unarchived contents in the current working directory, preserving any directory structure that the archive file used. For example, if the tarfile contains a file called file.txt within a directory called foo/, then extracting the archive file will result in the creation of the directory foo/ in your current working directory with the file file.txt inside it.
   * Remember, the tar command does not compress the files by default. To create a tarred and bzip2-compressed file, use the -j option:
$ tar -cjvf filename.tbz file
   * You can also expand and unarchive a bzip2 tar file in one command:
$ tar -xjvf filename.tbz
   * To create a tarred and gzip-compressed file, use the -z option:
$ tar -czvf filename.tgz file
tar files compressed with gzip are conventionally given the extension .tgz or .tar.gz. This command creates the archive and compresses it as the file filename.tgz; no intermediate filename.tar is left behind. If you uncompress the filename.tgz file with the gunzip command, the filename.tgz file is removed and replaced with filename.tar.
   * You can expand a gzipped tar file (.tgz or .tar.gz) in one command:
$ tar -xzvf filename.tgz

4. TEXT MANAGEMENT AND EDITORS


There are a lot of text editors to choose from on Linux systems, but the majority of editors fit into one of two families of editor: Emacs and Vi. Most users prefer one or the other. Some of the others available are pico, joe, vim, wily, xemacs etc.

4.1. The 'vi' editor

vi -- the "visual editor" -- is guaranteed to be present on any UNIX or Linux system. While using vi, at any one time you are in one of three modes of operation.

   * Command mode : This mode lets you use commands to edit files or change to other modes. For example, typing ``x'' while in command mode deletes the character underneath the cursor. The arrow keys move the cursor around the file you're editing. Generally, the commands used in command mode are one or two characters long.
   * Insert mode : You actually insert or edit text within insert mode. When using vi, you'll probably spend most of your time in this mode. You start insert mode by using a command such as ``i'' (for ``insert'') from command mode. While in insert mode, you can insert text into the document at the current cursor location. To end insert mode and return to command mode, press Esc.
   * Last line mode / Ex : This is a special mode used to give certain extended commands to vi. While typing these commands, they appear on the last line of the screen (hence the name). For example, when you type ``:'' in command mode, you jump into last line mode and can use commands like ``wq'' (to write the file and quit vi), or ``q!'' (to quit vi without saving changes). Last line mode is generally used for vi commands that are longer than one character. In last line mode, you enter a single-line command and press Enter to execute it.

4.1.1. Starting "vi"

The syntax for vi is "vi filename", where filename is the name of the file to edit.

$ vi test

To edit the file test, you should see something like the following:
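(A rough sketch of the display for a new, empty file; the exact wording of the status line varies between vi versions.)

~
~
~
~
"test" [New File]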


The column of ``~'' characters indicates you are at the end of the file.

4.1.2. Inserting text

   * When vi starts, it is always in command mode.
   * Insert text into the file by pressing i, which places the editor into insert mode, and then begin typing.
   * Type as many lines as you want (pressing Enter after each). You may correct mistakes with the Backspace key.
   * To end insert mode and return to command mode, press Esc.
   * There are several ways to insert text other than the 'i' command. The 'a' command inserts text beginning after the current cursor position, instead of at the current cursor position.
   * To begin inserting text on the next line, use the o command.

4.1.3. Deleting text

   * From command mode, the x command deletes the character under the cursor.
   * You can delete entire lines using the command dd (that is, press d twice in a row). If the cursor is on the second line and you type dd, the second line will be deleted.


   * To delete the word that the cursor is on, use the dw command. Place the cursor on a word and type dw to delete it.

4.1.4. Changing text

   * You can replace sections of text using the R command. Place the cursor on the first letter of the word ``party'', press R, and type the word ``hungry'', and the word party will be replaced by hungry.
   * Using R to edit text is like the i and a commands, but R overwrites, rather than inserts, text.
   * The r command replaces the single character under the cursor. For example, move the cursor to the beginning of the word ``Now'' and press r followed by C; you'll see ``Cow'' instead.
   * The ``~'' command changes the case of the letter under the cursor from upper- to lower-case, and back.

4.1.5. Commands for moving the cursor

   * The 0 command (that's the zero key) moves the cursor to the beginning of the current line.
   * The $ command moves it to the end of the line.
   * When editing large files, you'll want to move forward or backward through the file a screenful at a time. Pressing Ctrl-F moves the cursor one screenful forward, and Ctrl-B moves it a screenful back.


   * To move the cursor to the end of the file, press G. You can also move to an arbitrary line; for example, typing the command 10G would move the cursor to line 10 in the file. To move to the beginning of the file, use 1G.

4.1.6. Saving files and quitting vi

   * To quit vi without saving changes to the file, use the command :q!. When you press ``:'', the cursor moves to the last line of the screen and vi enters last line (Ex) mode.
   * The command :wq saves the file and then exits vi.
   * The command ZZ (from command mode, without the ``:'') is equivalent to :wq.
   * Remember that you must press Enter after a command entered in last line mode.
   * To save the file without quitting vi, use :w.

4.1.7. Editing another file

   * To edit another file, use the :e command. For example, to stop editing test and edit the file foo instead, use the command:
:e foo
   * If you use :e without saving the file first, you'll get an error message, which means that vi doesn't want to edit another file until you save the first one.


* If you use the :r command, you can include the contents of another file in the current file. For example, the command :r foo.txt inserts the contents of the file foo.txt into the text at the location of the cursor.

4.1.8. Running shell commands

* You can also run shell commands within vi. The :r! command works like :r, but rather than reading a file, it inserts the output of the given command into the buffer at the current cursor location. For example, you can use the command :r! ls -l
* You can also ``shell out'' of vi, in other words, run a command from within vi and return to the editor when you're done. For example, if you use the command :! ls -F the ls -F command will be executed and the results displayed on the screen, but not inserted into the file you're editing.
* If you use the command :shell vi starts an instance of the shell, letting you temporarily put vi ``on hold'' while you execute other commands. Just log out of the shell (using the exit command) to return to vi.

4.2. The Emacs Editor

To call Emacs a text editor does not do it justice -- it's a large application capable of performing many functions, including reading email.


* GNU Emacs is the Emacs released under the auspices of Richard Stallman, who wrote the original Emacs predecessor in the 1970s. XEmacs (formerly Lucid Emacs) offers essentially the same features GNU Emacs does, but also contains its own features for use with the X Window System.

4.2.1. Getting Acquainted with Emacs

Start Emacs in the usual way, either by choosing it from the menu supplied by your window manager in X, or by typing its name (in lowercase letters) at a shell prompt. To start GNU Emacs at a shell prompt, type:

$ emacs

* A file or other text open in Emacs is held in its own area called a buffer. By default, the current buffer appears in the large area underneath the menu bar. To write text in the buffer, just type it. The place in the buffer where the cursor is is called point, and is referenced by many Emacs commands.
* The horizontal bar near the bottom of the Emacs window and directly underneath the current buffer is called the mode line; it gives information about the current buffer, including its name, what percentage of the buffer fits on the screen, what line point is on, and whether or not the buffer is saved to a file.
* The mode line also lists the modes active in the buffer. Emacs modes are general states that control the way Emacs behaves -- for example, when Overwrite mode is set, text you type overwrites the text at point; in Insert mode (the default), text you type is inserted at point. Usually, either Fundamental mode (the default) or Text mode will be listed.

4.2.1.1). Basic Emacs Editing Keys

The following table lists basic editing keys and describes their function. Where two common keystrokes are available for a function, both are given. Note that C stands for the Ctrl key and M for the Meta key (usually Alt; the Escape key, pressed and released, works as well).


KEYS                    DESCRIPTION
[↑] or C-p              Move point up to the previous line.
[↓] or C-n              Move point down to the next line.
[←] or C-b              Move point back through the buffer one character to the left.
[→] or C-f              Move point forward through the buffer one character to the right.
[PgDn] or C-v           Move point forward through the buffer one screenful.
[PgUp] or M-v           Move point backward through the buffer one screenful.
[BKSP] or C-h           Delete character to the left of point.
[DEL] or C-d            Delete character to the right of point.
[INS]                   Toggle between Insert mode and Overwrite mode.
C-[SPC]                 Set mark (see Cutting Text).
C-_                     Undo the last action (control-underscore).
C-a                     Move point to the beginning of the current line.
C-e                     Move point to the end of the current line.
C-h i                   Start Info.
C-h F                   Open a copy of the Emacs FAQ in a new buffer.
C-g                     Cancel the current command.
C-h a function [Enter]  List all Emacs commands related to function.
C-h k key               Describe key.
C-h t                   Start the Emacs tutorial.
C-k                     Kill text from point to the end of the line.
C-u number              Repeat the next command or keystroke you type number times.
C-w                     Kill text from mark to point.
C-x C-c                 Save all buffers open in Emacs, and then exit the program.
C-x C-f file            Open file in a new buffer for editing. To create a new file that does not yet exist, just specify the file name you want to give it. To browse through your files, type [TAB] instead of a file name.
C-left-click            Display a menu of all open buffers, sorted by major mode (works in X only).
[SHIFT]-left-click      Display a font selection menu (works in X only).

* You can run any Emacs function by typing M-x followed by the function name and pressing [RET]. To run the find-file function, type:

M-x find-file

This command runs the find-file function, which prompts for the name of a file and opens a copy of the file in a new buffer.

* Type C-g in Emacs to quit a function or command; if you make a mistake when typing a command, this is useful for cancelling and aborting the keyboard input. To exit the program, just type C-x C-c.

* Emacs can have more than one buffer open at once. To switch between buffers, type C-x C-b. Then, give the name of the buffer to switch to, followed by [RET]; alternatively, type [RET] without a buffer name to switch to the last buffer you had visited. (Viewing a buffer in Emacs is called visiting the buffer.) To switch to a buffer called `filemacs', type:

C-x C-b filemacs

* A special buffer called `*scratch*' is for notes and things you don't want to save; it always exists in Emacs. To switch to the `*scratch*' buffer, type:


C-x C-b *scratch* [RET]

* Incidentally, C-h is the Emacs help key; all help-related commands begin with this key. For example, to read the Emacs FAQ, type C-h F, and to run the Info documentation browser (which contains The GNU Emacs Manual), type C-h i.

4.3. The pico editor

One of the simplest text editors available for UNIX is PICO. It is PINE's default editor, so if you use PINE to read and compose e-mail, you are probably familiar with pico. pico is an easy editor to use, but it lacks a lot of features. Again, ^ stands for the <Ctrl> key in the following commands:

* To start PICO, type pico (all lowercase letters).
$ pico
* To edit a pre-existing file filename, or to create a new file with that name, type
$ pico filename
* To exit, type ^X. PICO will ask you whether you want to save your work if it is unsaved.
* To save your work without quitting, type ^O.
* To display the location of the cursor, type ^C.


* To cut a line (or lines) of text, move your cursor to the lines you want to cut, and press ^K. To paste the last block of text you cut, press ^U.
* To search for text, press ^W. (There is no search-and-replace in PICO.)
* To get help, look at the bottom of the screen, or press ^G.

4.4. The editor "joe"

joe is a text screen editor. To create or modify file foo, type

$ joe foo

* Once you are in the editor, you can type in text and use special control-character sequences to perform other editing tasks. To find out what the control-character sequences are, read the man page or type Ctrl-K H for help in the editor.
* Once you have typed Ctrl-K H, a menu of help topics appears on the bottom line. Use the arrow keys to select the topic and then press the spacebar or ENTER to have help on that topic appear on the screen.
* The help window will appear in the top half of the screen, and the editing window will be in the lower half of the screen. You can enter and edit text while viewing the help screen. Use the Ctrl-K H command again to dismiss the help window.

4.5. Text Manipulation

4.5.1. Searching for Text

The primary command used for searching through text is grep. It outputs the lines of its input that contain a given string or pattern. The various options that can be used with grep are listed below.


* To output lines in the file `catalog' containing the word `audio':
$ grep audio catalog
* To output lines in the file `catalog' containing the phrase `Compact Disc':
$ grep 'Compact Disc' catalog
* To output lines in the file `catalog' containing the string `compact disc' regardless of the case of its letters:
$ grep -i 'compact disc' catalog

One thing to keep in mind is that grep only matches patterns that appear on a single line, so in the preceding example, if one line in `catalog' ends with the word `compact' and the next begins with `disc', grep will not match either line.

* You can specify more than one file to search. When you specify multiple files, each match that grep outputs is preceded by the name of the file it's in. To output lines in all of the files in the current directory containing the word `cd', type:
$ grep cd *
* To output lines in all of the `.txt' files in the `~/doc' directory containing the word `CD', suppressing the listing of file names in the output, type:
$ grep -h CD ~/doc/*.txt
* Use the `-r' option to search a given directory recursively, searching all the subdirectories it contains. To output lines containing the word `CD' in all of the `.txt' files in the `~/doc' directory and in all of its subdirectories, type:
$ grep -r CD ~/doc/*.txt

4.5.2. Matching Text Patterns using Regular Expressions


In addition to word and phrase searches, you can use grep to search for complex text patterns called regular expressions. A regular expression -- or "regexp" -- is a text string of special characters that specifies a set of patterns to match.

There are a number of reserved characters called metacharacters that do not represent themselves in a regular expression, but have a special meaning that is used to build complex patterns. These metacharacters are as follows: ., *, [, ], ^, $, and \.

To specify one of these literal characters in a regular expression, precede the character with a `\'.

* To output lines in the file `catalog' that contain a `$' character, type:
$ grep '\$' catalog
* To output lines in the file `catalog' that contain the string `$1.99', type:
$ grep '\$1\.99' catalog
* To output lines in the file `catalog' that contain a `\' character, type:
$ grep '\\' catalog

4.5.2.1). Metacharacters and their meaning

The following table describes the special meanings of the metacharacters and gives examples of their usage.

METACHARACTER   MEANING

.       Matches any one character, with the exception of the newline character. For example, . matches `a', `1', `?', `.' (a literal period character), and so forth.

*       Matches the preceding regexp zero or more times. For example, -* matches `-', `--', `---', `--------', and so forth.

[ ]     Encloses a character set, and matches any member of the set. For example, [abc] matches either `a', `b', or `c'. In addition, the hyphen (`-') and caret (`^') characters have special meanings when used inside brackets:
        - The hyphen specifies a range of characters, ordered according to their ASCII value. For example, [0-9] is synonymous with [0123456789]; [A-Za-z] matches one uppercase or lowercase letter. To include a literal `-' in a list, specify it as the last character in the list: so [0-9-] matches either a single digit character or a `-'.
        - As the first character of a list, the caret means that any character except those in the list should be matched. For example, [^a] matches any character except `a', and [^0-9] matches any character except a numeric digit.

^       Matches the beginning of the line. So ^a matches `a' only when it is the first character on a line.

$       Matches the end of the line. So a$ matches `a' only when it is the last character on a line.

\       Use \ before a metacharacter when you want to specify it as a literal character. So \$ matches a dollar sign character (`$'), and \\ matches a single backslash character (`\').

\< \>   Matches the beginning (\<) or end (\>) of a word. For example, \<the matches the "the" in the string "for the wise" but does not match the "the" in "otherwise". NOTE: this metacharacter is not supported by all applications.

|       ORs two conditions together. For example, (him|her) matches the line "it belongs to him" and matches the line "it belongs to her", but does not match the line "it belongs to them." NOTE: this metacharacter is not supported by all applications.

+       Matches one or more occurrences of the character or regular expression immediately preceding. For example, the regular expression 9+ matches 9, 99, 999. NOTE: this metacharacter is not supported by all applications.

?       Matches 0 or 1 occurrences of the character or regular expression immediately preceding. NOTE: this metacharacter is not supported by all applications.

\{i\}   Matches a specific number of instances of the preceding character. For example, the expression A[0-9]\{3\} will match "A" followed by exactly 3 digits. That is, it will match A123 but not A1234.

\{i,j\} Matches a number of instances of the preceding character within a range. For example, the expression [0-9]\{4,6\} matches any sequence of 4, 5, or 6 digits. NOTE: this metacharacter is not supported by all applications.

4.5.2.2). Matching Lines Ending with Certain Text

Use `$' as the last character of quoted text to match that text only at the end of a line.


* To output lines in the file `file1' ending with an exclamation point, type:
$ grep '!$' file1

4.5.2.3). Matching Lines of a Certain Length

* To match lines of a particular length, use that number of `.' characters between `^' and `$' -- for example, to match all lines that are two characters (or columns) wide, use `^..$' as the regexp to search for. To output all lines in `/usr/dict/words' that are exactly two characters wide, type:
$ grep '^..$' /usr/dict/words
* To output all lines in `/usr/dict/words' that are exactly seventeen characters wide, type:
$ grep '^.\{17\}$' /usr/dict/words
* To output all lines in `/usr/dict/words' that are twenty-five or more characters wide, type:
$ grep '^.\{25,\}$' /usr/dict/words

4.5.2.4). Matching Lines That Contain Any of Some Regexps

* To output all lines in `playlist' that contain either the pattern `the sea' or `cake', type:
$ grep 'the sea\|cake' playlist

4.5.2.5). Matching Lines That Contain All of Some Regexps


To output lines that match all of a number of regexps, use grep to output lines containing the first regexp you want to match, and pipe the output to a grep with the second regexp as an argument. Continue adding pipes to grep searches for all the regexps you want to search for.

* To output all lines in `playlist' that contain both the patterns `the sea' and `cake', regardless of case, type:
$ grep -i 'the sea' playlist | grep -i cake

4.5.2.6). Matching Lines That Don't Contain a Regexp

To output all lines in a text that don't contain a given pattern, use grep with the `-v' option -- this option inverts the sense of matching, selecting all non-matching lines.

* To output all lines in `/usr/dict/words' that are not three characters wide, type:
$ grep -v '^...$' /usr/dict/words
* To output all lines in `access_log' that do not contain the string `http', type:
$ grep -v http access_log

4.5.2.7). Matching Lines That Only Contain Certain Characters

* To output lines in `/usr/dict/words' that only contain vowels, type:
$ grep -i '^[aeiou]*$' /usr/dict/words


* The `-i' option matches characters regardless of case; so, in this example, all vowel characters are matched regardless of case.

4.5.2.8). Using a List of Regexps to Match From

* To output all lines in `/usr/dict/words' containing any of the words listed in the file `forbidden-words', type:
$ grep -f forbidden-words /usr/dict/words
* To output all lines in `/usr/dict/words' that do not contain any of the words listed in `forbidden-words', regardless of case, type:
$ grep -v -i -f forbidden-words /usr/dict/words

4.5.3. Searching More than Plain Text Files

Use zgrep to search through text in files that are compressed. These files usually have a `.gz' file name extension, and can't be searched or otherwise read by other tools without uncompressing the file first.

* To search through the compressed file `README.gz' for the text `Linux', type:
$ zgrep Linux README.gz

4.5.4. Matching Lines in Web Pages

You can grep a Web page or other URL by giving the URL to lynx with the `-dump' option, and piping the output to grep.

* To search the contents of the URL http://example.com/ for lines containing the text `edu' or `carma', type:


$ lynx -dump http://example.com/ | grep 'edu\|carma'

4.5.5. Searching and Replacing Text

* A quick way to search and replace some text in a file is to use the following one-line perl command:
$ perl -pi -e "s/oldstring/newstring/g;" file1
* In this example, oldstring is the string to search for, newstring is the string to replace it with, and file1 is the name of the file or files to work on. You can use this for more than one file.
* To replace the string `helpless' with the string `helpful' in all files in the current directory, type:
$ perl -pi -e "s/helpless/helpful/g;" *

5. MORE ABOUT SHELL & COMMAND LINE INTERFACE

5.1. Passing Special Characters to Commands

Some characters are reserved and have a special meaning to the shell on their own. Before you can pass one of these characters to a command, you must quote it by enclosing the entire argument in single quotes ' '.

* When the argument you want to pass has one or more single quote characters in it, enclose it in double quotes:
$ grep "Please Don't Stop!" filename

5.2. Letting the Shell Complete What You Type


Completion is where bash does its best to finish your typing. To use it, press [TAB] on the input line and the shell will complete the word to the left of the cursor to the best of its ability.

For example, suppose you want to specify, as an argument to the ls command, the directory `/usr/lib/emacs/20.4/'. Instead of typing out the whole directory name, you can press [TAB] to complete it for you:

$ ls /usr/lib/e[TAB]

5.3. Repeating the Last Command You Typed

* Press the up arrow key to put the last command you typed back on the input line. You can then press ENTER to run the command again, or you can edit the command first.
* To put the last command you entered containing the string `grep' back on the input line, type Ctrl-r followed by the string:
$ Ctrl-r
(reverse-i-search)`': grep
* To put the third-to-last command you entered containing the string `grep' back on the input line, type Ctrl-r, the string, and then Ctrl-r twice more:
$ Ctrl-r
(reverse-i-search)`': grep Ctrl-r Ctrl-r
* When a command is displayed on the input line, press [RET] to run it. You can also edit the command line as usual.

5.4. Running a List of Commands

To run more than one command on the input line, type each command in the order you want them to run, separating each command from the next with a semicolon (`;').


* To clear the screen and then log out of the system, type:
$ clear; logout

5.5. Redirecting Input and Output

* The standard output is where the shell streams the text output of commands -- the screen on your terminal, by default.
* The standard input, typically the keyboard, is where you input data for commands. When a command reads the standard input, it usually keeps reading text until you type C-d on a new line by itself.
* When a command runs and exits with an error, the error message is usually output to your screen, but as a separate stream called the standard error.
* You redirect these streams -- to a file, or even to another command -- with redirection. The following sections describe the shell redirection operators that you can use to redirect standard input and output.

5.5.1. Redirecting Input to a File

To redirect standard input to a file, use the `<' operator. To do so, follow a command with < and the name of the file it should take input from.

apropos searches a set of database files containing short descriptions of system commands for keywords and displays the result on the standard output. For example, instead of giving a list of words as arguments to apropos, you can redirect standard input to a file containing a list of keywords to use. To redirect the standard input of apropos to the file `keywords', type:

$ apropos < keywords
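The same `<' syntax works with any command that reads its standard input. As a further illustration (assuming the `keywords' file from the example above exists), the following counts its lines instead of reading from the keyboard:

$ wc -l < keywords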


5.5.2. Redirecting Output to a File

Use the `>' operator to redirect standard output to a file. To use it, follow a command with > and the name of the file the output should be written to.

* To redirect the standard output of the command `ls -la' to the file `filelist', type:
$ ls -la > filelist
* To append the standard output of `ls -la' to an existing file `commands', type:
$ ls -la >> commands

5.5.3. Redirecting Error Messages to a File

To redirect the standard error stream to a file, use the `>' operator preceded by a `2'. Follow a command with 2> and the name of the file the error stream should be written to.

* To redirect the standard error of `ls -la' to the file `command.error', type:
$ ls -la 2> command.error
* As with the standard output, use the `>>' operator instead of `>' to append the standard error to the contents of an existing file. To append the standard error of `ls -la' to the existing file `command.error', type:
$ ls -la 2>> command.error
* To redirect both standard output and standard error to the same file, use `&>' instead.


* To redirect both the standard output and the standard error of `ls -la' to the file `commands', type:
$ ls -la &> commands

5.5.4. Redirecting Output to Another Command's Input

Piping is when you connect the standard output of one command to the standard input of another. You do this by specifying the two commands in order, separated by a vertical bar character, `|' (sometimes called a "pipe"). Commands built in this fashion are called pipelines.

* To pipe the output of `cat readme.txt' to less, type:
$ cat readme.txt | less
* To pipe the output of the ls command to the grep command, you can use:
$ ls -la | grep html

6. BASICS OF LINUX SYSTEM ADMINISTRATION

6.1. Disks, Partitions and File Systems

The basic tasks in administering disks are:

1. Format your disk. This does various things to prepare it for use, such as checking for bad sectors. (Formatting is nowadays not necessary for most hard disks.)
2. Partition the hard disk, if you want to use it for several activities that aren't supposed to interfere with one another. One reason for partitioning is to store different operating systems on the same disk. Another reason is to keep user files separate from system files, which simplifies back-ups and helps protect the system files from corruption.
3. Make a filesystem (of a suitable type) on each disk or partition. The disk means nothing to Linux until you make a filesystem; then files can be created and accessed on it.


4. Mount different filesystems to form a single tree structure, either automatically, or manually as needed. (Manually mounted filesystems usually need to be unmounted manually as well.)

6.1.1. Character and Block devices

Linux recognizes two different kinds of device:

* random-access block devices (such as disks)
* character devices (such as tapes and serial lines), some of which may be serial, and some random-access.

Each supported device is represented in the filesystem as a device file. When you read or write a device file, the data comes from or goes to the device it represents. For example, to send a file to the printer, one could just say:

$ cat filename > /dev/lp1

and the contents of the file are printed.

* Note that usually all device files exist even though the device itself might not be installed. So just because you have a file /dev/sda, it doesn't mean that you really do have a SCSI hard disk.
* Each hard disk is represented by a separate device file. There can (usually) be only two or four IDE hard disks. These are known as /dev/hda, /dev/hdb, /dev/hdc, and /dev/hdd, respectively.
* SCSI hard disks are known as /dev/sda, /dev/sdb, and so on.

6.1.2. Partitions/MBR


* A hard disk can be divided into several partitions. Each partition functions as if it were a separate hard disk.

6.1.2.1). Why Partition Hard Drive(s)

While it is true that Linux will operate just fine on a disk with only one large partition defined, there are several advantages to partitioning your disk for at least the four main file systems (root, usr, home, and swap). These include:

1. Reduced time required for fsck : First, it may reduce the time required to perform file system checks (both upon bootup and when doing a manual fsck), because these checks can be done in parallel. Also, file system checks are a lot easier to do on a system with multiple partitions. For example, if I knew my /home partition had problems, I could simply unmount it, perform a file system check, and then remount the repaired file system.
2. Mount partitions as read-only : Second, with multiple partitions, you can, if you wish, mount one or more of your partitions as read-only. For example, if you decide that everything in /usr will not be touched even by root, you can mount the /usr partition as read-only.
3. Protecting your file systems : Third, the most important benefit that partitioning provides is protection of your file systems. If something should happen to a file system (either through user error or system failure), on a partitioned system you would probably only lose files on a single file system. On a non-partitioned system, you would probably lose them on all file systems.
4. Multiple OS support : Finally, since Linux allows you to set up other operating system(s) (such as Windows 95/98/NT) and then dual- (or triple-, ...) boot your system, you might wish to set up additional partitions to take advantage of this. Typically, you would want to set up at least one separate partition for each operating system. Linux includes a decent boot loader which allows you to specify which operating system you want to boot at power on.

6.1.2.2). Master Boot Record or MBR

The information about how a hard disk has been partitioned is stored in its first sector (that is, the first sector of the first track of the first disk surface).


* The first sector of the primary hard drive is the master boot record (MBR) of the disk; this is the sector that the BIOS reads in and starts when the machine is first booted.
* The master boot record is only 512 bytes in size and contains a small program that reads the partition table, checks which partition is active (that is, marked bootable), and reads the first sector of that partition, the partition's boot sector (the MBR is also a boot sector, but it has a special status and therefore a special name).
* This boot sector contains another small program that reads the first part of the operating system stored on that partition (assuming it is bootable), and then starts it.
* The booting process will be dealt with in more detail later on.

6.1.2.3). Partitioning Scheme

* The partitioning scheme is not built into the hardware, or even into the BIOS.
* It is only a convention that many operating systems follow. Not all operating systems follow it, but they are the exceptions, and an operating system that doesn't support partitions cannot co-exist on the same disk with any other operating system.

You can see the partitions on a machine using the fdisk command as below.

$ fdisk -l

Disk /dev/hda: 15 heads, 56 sectors, 690 cylinders
Units = cylinders of 855 * 512 bytes

Device Boot  Begin  Start  End   Blocks  Id  System
/dev/hda1  *     1      1   24     1023  83  Linux native
/dev/hda2       25     25   48    10260  83  Linux native
/dev/hda3       49     49  408   153900  83  Linux native
/dev/hda4      409    409  690   163305   5  Extended
/dev/hda5      409    409  644  143611+  83  Linux native
/dev/hda6      645    645  690   19636+  83  Linux native

Extended and logical partitions

The original partitioning scheme for PC hard disks allowed only four partitions. This quickly turned out to be too little in real life, partly because some people want more than four operating systems (Linux, MS-DOS, FreeBSD, NetBSD, or Windows/NT, to name a few), but primarily because sometimes it is a good idea to have several partitions for one operating system.

To overcome this design problem, extended partitions were invented. This trick allows partitioning a primary partition into sub-partitions.

* The primary partition thus subdivided is the extended partition;
* The sub-partitions of an extended partition are logical partitions. They behave like primary partitions, but are created differently. There is no speed difference between them.

The partition structure of a hard disk might look like the figure described here: the disk is divided into three primary partitions, the second of which is divided into two logical partitions; part of the disk is not partitioned at all. The disk as a whole and each primary partition has a boot sector. (Figure: a sample hard disk partitioning.)

6.1.2.4). Partition types


* The partition tables (the one in the MBR, and the ones for extended partitions) contain one byte per partition that identifies the type of that partition. This attempts to identify the operating system that uses the partition, or what it is used for.
* The purpose is to make it possible to avoid having two operating systems accidentally using the same partition.

There is no standardization agency to specify what each byte value means, but some commonly accepted ones are included in the table below.

0  Empty                 40  Venix 80286     94  Amoeba BBT
1  DOS 12-bit FAT        51  Novell?         a5  BSD/386
2  Xenix root            52  Microport       b6  BSDI fs
3  Xenix usr             63  GNU HURD        b8  BSDI swap
4  DOS 16-bit FAT<32M    64  Novell          e1  DOS access
5  Extended              65  PC/IX           f2  DOS secondary
6  DOS 16-bit >=32M      80  Old MINIX
7  OS/2 HPFS             81  Linux/MINIX
8  AIX                   82  Linux swap
9  AIX bootable          83  Linux native

6.1.2.5). Partitioning a hard disk

There are many programs for creating and removing partitions. The most commonly used one is `fdisk'. Some points to keep in mind are:

* When using IDE disks, the boot partition (the partition with the bootable kernel image files) must be completely within the first 1024 cylinders. This is because the disk is used via the BIOS during boot (before the system goes into protected mode), and the BIOS can't handle more than 1024 cylinders. Therefore, make sure your boot partition is completely within the first 1024 cylinders.
* Each partition should have an even number of sectors, since the Linux filesystems use a 1 kilobyte block size, i.e., two sectors. An odd number of sectors will result in the last sector being unused. This won't result in any problems, but it is ugly, and some versions of fdisk will warn about it.
* Changing a partition's size usually requires first backing up everything you want to save from that partition, deleting the partition, creating a new partition, then restoring everything to the new partition.

6.1.2.6). Various Mount Points

Here is a description of the various mount points and file system information, which may give you a better idea of how to best define your partition sizes for your own needs:


1. / (root) - used to store things like temporary files, the Linux kernel and boot image, important binary files (things that are needed before Linux can mount the /usr partition), and more importantly log files, spool areas for print jobs and outgoing e-mail, and users' incoming e-mail. It is also used for temporary space when performing certain operations, such as building RPM packages from source RPM files.

2. /usr/ - should be the largest partition, because most of the binary files required by Linux, as well as any locally installed software, web pages, some locally-installed software log files, etc. are stored here. The partition type should be left as the default of 83 (Linux native).

3. /home/ - typically, if you aren't providing shell accounts to your users, you don't need to make this partition very big. The exception is if you are providing user home pages (such as web pages), in which case you might benefit from making this partition larger. Again, the partition type should be left as the default of 83 (Linux native).

4. swap - Linux provides something called "virtual memory" to make a larger amount of memory available than the physical RAM installed in your system. The swap partition is used with main RAM by Linux to accomplish this. As a rule of thumb, your swap partition should be at least double the amount of physical RAM installed in your system. If you have more than one physical hard drive in your system, you can create multiple swap partitions. The partition type needs to be changed to 82 (Linux swap).

5. /var/ (optional) - You may wish to consider splitting up your / (root) partition a bit further. The /var directory is used for a great deal of runtime storage, including mail spools (both incoming and outgoing), print jobs, process locks, etc. Having this directory mounted under / (root) may be a bit dangerous because a large amount of incoming e-mail (for example) may suddenly fill up the partition. Since bad things can happen when the / (root) partition fills up, having /var on its own partition may avoid such problems. The partition type should be left as the default of 83 (Linux native).

6. /boot/ (optional) - In some circumstances (such as a system set up in a software RAID configuration) it may be necessary to have a separate partition from which to boot the Linux system. This partition would allow booting and then loading of whatever drivers are required to read the other file systems. The size of this partition can be as small as a couple of MB (approx 10 MB). The partition type should be left as the default of 83 (Linux native).

7. /backup (optional) - If you have any extra space lying around, perhaps you would benefit from a partition for a directory called, for example, /backup. The partition type can be left as the default of 83 (Linux native).

Example : Setting up partitions

To give you an example of how one might set up partitions, see the listing below.

Device Boot    Start   End    Blocks  Id  System
/dev/hda1  *       1   254  1024096+   6  DOS 16-bit >=32M
/dev/hda2        255   682   2128896   5  Extended
/dev/hda3        255   331   310432+  83  Linux native
/dev/hda5        332   636  1229628+  83  Linux native
/dev/hda6        636   649   455584+  83  Linux native
/dev/hda8        650   682   133024+  82  Linux swap

* The first partition, /dev/hda1, is a DOS-formatted file system used to store the alternative operating system (Windows 95). This gives 1 GB of space for that operating system.
* The second partition, /dev/hda2, is a physical partition (called "extended") that encompasses the remaining space on the drive.
* The third through fifth partitions, /dev/hda3, /dev/hda5, and /dev/hda6, are all e2fs-formatted file systems used for the / (root), /usr, and /home partitions, respectively.


* Finally, the sixth partition, /dev/hda8, is used for the swap partition.

For yet another example, this time a box with two hard drives (sole boot, Linux only), you could choose the following partitioning scheme:

Device Boot    Start   End    Blocks  Id  System
/dev/sda1  *       1     1      2046   4  DOS 16-bit <32M
/dev/sda2          2   168    346859  83  Linux native
/dev/sda3        169   231    130851  82  Linux swap
/dev/sda4        232  1009   1615906   5  Extended
/dev/sda5        232   398    346828  83  Linux native
/dev/sda6        399  1009   1269016  83  Linux native
/dev/sdb1          1   509   2114355  83  Linux native
/dev/sdb2        510  1019   2118540  83  Linux native

* The first partition, /dev/sda1, is a DOS-formatted file system used to store the LILO boot loader. The Alpha platform has a slightly different method of booting than an Intel system does, therefore Linux stores its boot information in a FAT partition. This partition only needs to be as large as the smallest possible partition allowed -- in this case, 2 MB.
* The second partition, /dev/sda2, is an e2fs-formatted file system used for the / (root) partition.
* The third partition, /dev/sda3, is used for the swap partition.
* The fourth partition, /dev/sda4, is an "extended" partition (see the previous example for details).
* The fifth and sixth partitions, /dev/sda5 and /dev/sda6, are e2fs-formatted file systems used for the /home and /usr partitions, respectively.


* The seventh partition, /dev/sdb1, is an e2fs-formatted file system used for the /archive partition.
* The eighth and final partition, /dev/sdb2, is an e2fs-formatted file system used for the /archive2 partition.

After you finish setting up your partition information, you'll need to write the new partition table to disk. After this, the Red Hat installation program reloads the partition table into memory, so you can continue on to the next step of the installation process.

6.1.2.7). Device files and partitions

Each partition and extended partition has its own device file.

* The naming convention for these files is that a partition's number is appended after the name of the whole disk, with the convention that 1-4 are primary partitions (regardless of how many primary partitions there are).
* Numbers 5 and greater are logical partitions (regardless of which primary partition they reside within).
* For example, /dev/hda1 is the first primary partition on the first IDE hard disk, and /dev/sdb6 is the second logical partition on the second SCSI hard disk.

6.1.3. FileSystems

What are filesystems?

A filesystem is the methods and data structures that an operating system uses to keep track of files on a disk or partition; that is, the way the files are organized on the disk.


* The difference between a disk or partition and the filesystem it contains is important. A few programs (including, reasonably enough, programs that create filesystems) operate directly on the raw sectors of a disk or partition; if there is an existing file system there, it will be destroyed or seriously corrupted.
* Most programs operate on a filesystem, and therefore won't work on a partition that doesn't contain one (or that contains one of the wrong type).
* Making a file system : Before a partition or disk can be used as a filesystem, it needs to be initialized, and the bookkeeping data structures need to be written to the disk. This process is called making a filesystem.

Some terms related to file systems

Some of the common terms which you come across related to file systems are superblock, inode, data block, directory block, and indirection block.

* The superblock contains information about the filesystem as a whole, such as its size, access rights and time of the last modification. (The exact information here depends on the filesystem.)
* An inode contains all information about a file, except its name. The name is stored in the directory, together with the number of the inode.
* A directory entry consists of a filename and the number of the inode which represents the file.
* The inode contains the numbers of several data blocks, which are used to store the data in the file.


* There is space for only a few data block numbers in the inode, however, and if more are needed, more space for pointers to the data blocks is allocated dynamically. These dynamically allocated blocks are indirect blocks; the name indicates that in order to find the data block, one has to find its number in the indirect block first.

6.1.3.1). Some of the Linux Filesystems

Linux supports several types of filesystems. Some of the important ones are:

1. ext3 : The ext3 filesystem has all the features of the ext2 filesystem. The difference is that journaling has been added. This improves performance and recovery time in case of a system crash. It has become more popular than ext2.
2. ext2 : The most featureful of the native Linux filesystems. It is designed to be easily upwards compatible, so that new versions of the filesystem code do not require re-making the existing filesystems.
3. ext : An older version of ext2 that wasn't upwards compatible. It is hardly ever used in new installations any more, and most people have converted to ext2.
4. vfat : This is an extension of the FAT filesystem known as FAT32. It supports larger disk sizes than FAT. Most MS Windows disks are vfat.
5. nfs : A networked filesystem that allows sharing a filesystem between many computers to allow easy access to the files from all of them.
6. physical volume (LVM) : Creating one or more physical volume (LVM) partitions allows you to create an LVM logical volume.
7. software RAID : Creating two or more software RAID partitions allows you to create a RAID device.
8. swap : Swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing.


9. smbfs : A network filesystem which allows sharing of a filesystem with an MS Windows computer. It is compatible with the Windows file sharing protocols.

Journaled File Systems

A filesystem that uses journaling is also called a journaled filesystem. A journaled filesystem maintains a log, or journal, of what has happened on a filesystem.

* In the event of a system crash, a journaled filesystem is designed to use the filesystem's logs to recreate unsaved and lost data. This makes data loss much less likely, and journaling is likely to become a standard feature in Linux filesystems.
* Currently, ext3 is the most popular filesystem, because it is a journaled filesystem.

6.1.4. Software RAID

RAID stands for Redundant Array of Independent Disks. The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives will appear to the computer as a single logical storage unit or drive.

* RAID is a method in which information is spread across several disks, using techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve redundancy, lower latency and/or increased bandwidth for reading or writing to disks, and to maximize the ability to recover from hard disk crashes.
* The underlying concept of RAID is that data may be distributed across each drive in the array in a consistent manner.
* To do this, the data must first be broken into consistently-sized chunks (often 32K or 64K in size, although different sizes can be used). Each chunk is then written to a hard drive in the RAID according to the RAID level used.


* When the data is to be read, the process is reversed, giving the illusion that the multiple drives are actually one large drive.

6.1.4.1). Advantages of using RAID

Primary reasons to use RAID include:

* Enhanced speed
* Increased storage capacity using a single virtual disk
* Lessened impact of a disk failure

6.1.4.2). Hardware and Software RAID

There are two possible RAID approaches: Hardware RAID and Software RAID.

Hardware RAID

* The hardware-based system manages the RAID subsystem independently from the host and presents to the host only a single disk per RAID array.
* An example of a Hardware RAID device would be one that connects to a SCSI controller and presents the RAID arrays as a single SCSI drive.
* An external RAID system moves all RAID handling "intelligence" into a controller located in the external disk subsystem. The whole subsystem is connected to the host via a normal SCSI controller and appears to the host as a single disk or multiple disks.

Software RAID

* Software RAID implements the various RAID levels in the kernel disk (block device) code.


* It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis (a hot-swap chassis allows you to remove a hard drive without having to power down your system) are not required.
* Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's fast CPUs, Software RAID performance can excel against Hardware RAID.
* The MD driver in the Linux kernel is an example of a RAID solution that is completely hardware independent. The Linux MD driver currently supports RAID levels 0/1/4/5 plus linear mode.
* The performance of a software-based array is dependent on the server CPU performance and load.

6.1.4.3). Different Types of RAID Implementations

The current RAID drivers in Linux support the following levels of Software RAID implementations.

Level 0

* RAID level 0, often called "striping," is a performance-oriented striped data mapping technique.
* This means the data being written to the array is broken down into strips and written across the member disks of the array, allowing high I/O performance at low inherent cost, but it provides no redundancy.
* The storage capacity of a level 0 array is equal to the total capacity of the member disks in a Hardware RAID or the total capacity of member partitions in a Software RAID.
* There is no redundancy in this level, and if you remove a drive from a RAID-0 set, the RAID device will not just miss one consecutive block of data; it will be filled with small holes all over the device. e2fsck or other filesystem recovery tools will probably not be able to recover much from such a device.
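As an illustration only (software RAID administration tools are not covered in this section, and the device names /dev/sdb1 and /dev/sdc1 and the mount point /mnt/raid are assumptions), a two-partition RAID-0 array could be created with the mdadm utility, given a filesystem, and mounted, running the commands as root:

$ mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
$ mkfs.ext3 /dev/md0
$ mount /dev/md0 /mnt/raid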


Level 1

* RAID level 1, or "mirroring," has been used longer than any other form of RAID.
* Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk.
* Mirroring remains popular due to its simplicity and high level of data availability.
* This is the first mode which actually has redundancy.
* RAID-1 can be used on two or more disks with zero or more spare disks. This mode maintains an exact mirror of the information on one disk on the other disk(s). Of course, the disks must be of equal size.
* If one disk is larger than another, your RAID device will be the size of the smallest disk.
* Level 1 provides very good data reliability and improves performance for read-intensive applications, but at a relatively high cost.
* The storage capacity of the level 1 array is equal to the capacity of one of the mirrored hard disks in a Hardware RAID or one of the mirrored partitions in a Software RAID.

Level 4


* Level 4 uses parity concentrated on a single disk drive to protect data.
* It can be used on three or more disks. Instead of completely mirroring the information, it keeps parity information on one drive, and writes data to the other disks in a RAID-0-like way.
* If one drive fails, the parity information can be used to reconstruct all data. If two drives fail, all data is lost.
* The reason this level is not more frequently used is that the parity information is kept on one drive. This information must be updated every time one of the other disks is written to. Thus, the parity disk will become a bottleneck if it is not a lot faster than the other disks.
* Although RAID level 4 is an option in some RAID partitioning schemes, it is not an option allowed in Red Hat Linux RAID installations.

Level 5

* This is the most common type of RAID. It can be used on three or more disks, with zero or more spare disks.
* The big difference between RAID-5 and RAID-4 is that the parity information is distributed evenly among the participating drives, avoiding the bottleneck problem in RAID-4.
* The only performance bottleneck is the parity calculation process. With modern CPUs and Software RAID, that usually is not a very big problem.


* The storage capacity of Hardware RAID level 5 is equal to the capacity of the member disks, minus the capacity of one member disk.
* If one of the disks fails, all data is still intact, thanks to the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, all data is lost. RAID-5 can survive one disk failure, but not two or more.

Linear RAID

* Linear RAID is a simple grouping of drives to create a larger virtual drive.
* The disks are "appended" to each other, so writing linearly to the RAID device will fill up disk 0 first, then disk 1 and so on. The disks do not have to be of the same size. In fact, size doesn't matter at all here.
* There is no redundancy in this level. If one disk crashes you will most probably lose all your data. You may, however, be lucky and recover some data, since the filesystem will just be missing one large consecutive chunk of data.
* The capacity is the total of all member disks.

6.1.5. Logical Volume Manager (LVM)

LVM is a method of allocating hard drive space into logical volumes that can be easily resized, instead of partitions.

* With LVM, the hard drive or set of hard drives is allocated to one or more physical volumes.
* Since a physical volume cannot span more than one drive, if you want the logical volume group to span more than one drive, you must create one or more physical volumes per drive.


* The physical volumes are combined into logical volume groups, with the exception of the /boot partition. The /boot partition can not be on a logical volume group because the boot loader can not read it.
* If you want to have the root / partition on a logical volume, you will need to create a separate /boot partition which is not a part of a volume group.
* The logical volume group is divided into logical volumes, which are assigned mount points such as /home and / and file system types such as ext3.
* When "partitions" reach their full capacity, free space from the logical volume group can be added to the logical volume to increase the size of the partition.
* When a new hard drive is added to the system, it can be added to the logical volume group, and the logical volumes that are the partitions can be expanded.
* On the other hand, if a system is partitioned with the ext3 file system, the hard drive is divided into partitions of defined sizes. If a partition becomes full, it is not easy to expand the size of the partition.
* LVM support must be compiled into the kernel. The default kernel for Red Hat Linux 9 is compiled with LVM support.

6.2. RedHat Installation and Hardware Configuration

Red Hat Linux 9 should be compatible with most hardware in systems that were factory built within the last two years.

Before you start the installation process, one of the following conditions must be met:

* Your computer must have enough disk space for the installation of Red Hat Linux.
* You must have one or more partitions that may be deleted, thereby freeing up enough disk space to install Red Hat Linux.

6.2.1. Preparing for Installation


6.2.1.1). Installation Disk Space Requirements

* Personal Desktop
A personal desktop installation, including a graphical desktop environment, requires at least 1.6GB of free space. Choosing both the GNOME and KDE desktop environments requires at least 1.8GB of free disk space.
* Workstation
A workstation installation, including a graphical desktop environment and software development tools, requires at least 2.1GB of free space. Choosing both the GNOME and KDE desktop environments requires at least 2.2GB of free disk space.
* Server
A server installation requires 850MB for a minimal installation without X (the graphical environment), at least 1.5GB of free space if all package groups other than X are installed, and at least 5.0GB to install all packages including the GNOME and KDE desktop environments.
* Custom
A custom installation requires 465MB for a minimal installation and at least 5.0GB of free space if every package is selected.

6.2.1.2). Installation Methods

The following installation methods are available:

* CD-ROM
If you have a CD-ROM drive and the Red Hat Linux CD-ROMs, you can use this method. You will need a boot diskette or a bootable CD-ROM.


* Hard Drive
If you have copied the Red Hat Linux ISO images to a local hard drive, you can use this method. You will need a boot diskette. Hard drive installations require the use of the ISO (or CD-ROM) images. An ISO image is a file containing an exact copy of a CD-ROM disk image.
* NFS Image
If you are installing from an NFS server using ISO images or a mirror image of Red Hat Linux, you can use this method. You will need a network driver diskette.
* FTP
If you are installing directly from an FTP server, use this method. You will need a network driver diskette.
* HTTP
If you are installing directly from an HTTP (Web) server, use this method. You will need a network driver diskette.

6.2.1.3). Choosing the Installation Class

1. Personal Desktop Installations

Minimum Requirements:
* Personal Desktop: 1.6GB
* Personal Desktop choosing both GNOME and KDE: 1.8GB
* With all package groups (for example, Office/Productivity is a group of packages): 5.0GB minimum

What a Personal Desktop Installation Will Do:


If you choose automatic partitioning, a personal desktop installation will create the following partitions:

* The size of the swap partition is determined by the amount of RAM in your system and the amount of space available on your hard drive. For example, if you have 128MB of RAM then the swap partition created can be 128MB - 256MB (twice your RAM), depending on how much disk space is available.
* A 100MB partition mounted as /boot in which the Linux kernel and related files reside.
* A root partition mounted as / in which all other files are stored (the exact size of this partition is dependent on your available disk space).

2. Workstation Installations

Minimum Requirements:
* Workstation: 2.1GB
* Workstation choosing both GNOME and KDE: 2.2GB
* With all package groups: 5GB or more

What a Workstation Installation Will Do:

If you choose automatic partitioning, a workstation installation will create the partitions in the same way as for the personal desktop.

3. Server Installations

Minimum Requirements:


* Server (minimum, no graphical interface): 850MB
* Server (choosing everything, no graphical interface): 1.5GB
* Server (choosing everything, including a graphical interface): 5.0GB
* With all software packages: 5GB or more

What a Server Installation Will Do:

If you choose automatic partitioning, a server installation will create the partitions in the same way as for the workstation.

4. Custom Installations

The custom installation allows you the most flexibility during your installation. During a custom installation, you have complete control over the packages that are installed on your system.

Recommended Minimum Requirements:
* Custom (minimum): 465MB
* Custom (choosing everything): 5.0GB

What a Custom Installation Will Do:

As you might guess from the name, a custom installation puts the emphasis on flexibility. You have complete control over which packages will be installed on your system. If you choose automatic partitioning, a custom installation will create the partitions in the same format as we have discussed above.

Upgrading Your System


Upgrading Red Hat Linux 6.2 (or greater) will not delete any existing data. The installation program updates the modular kernel and all currently installed software packages.

6.2.1.4). Hardware/System Information Required

The hardware or system information that you need to know to make your Red Hat Linux installation go more smoothly is given below, though most of it will be detected automatically by the installation software.

* hard drive(s): type, label, size; ex: IDE hda=1.2 GB
* partitions: map of partitions and mount points; ex: /dev/hda1=/home, /dev/hda2=/ (fill this in once you know where they will reside)
* memory: amount of RAM installed on your system; ex: 64 MB, 128 MB
* CD-ROM: interface type; ex: SCSI, IDE (ATAPI)
* SCSI adapter: if present, make and model number; ex: BusLogic SCSI Adapter
* network card: if present, make and model number; ex: Tulip, 3COM 3C590
* mouse: type, protocol, and number of buttons; ex: generic 3-button PS/2 mouse, MouseMan 2-button serial mouse
* monitor: make, model, and manufacturer specifications; ex: Optiquest Q53, ViewSonic G663
* video card: make, model number and size of VRAM; ex: Creative Labs Graphics Blaster 3D, 8MB


* Sound card: make, chipset and model number; ex: S3 SonicVibes, Sound Blaster 32/64 AWE

6.2.2. RedHat Installation Procedure
To start the installation, you must first boot the installation program. You can boot the installation program using the bootable CD-ROM. Your BIOS settings may need to be changed to allow you to boot from the diskette or CD-ROM.
After a short delay, a screen containing the boot: prompt should appear. The screen contains information on a variety of boot options. Each boot option also has one or more help screens associated with it. To access a help screen, press the appropriate function key as listed in the line at the bottom of the screen.
Normally, you only need to press [Enter] to boot. Watch the boot messages to see if the Linux kernel detects your hardware. If your hardware is properly detected, please continue to the next section. If it does not properly detect your hardware, you may need to restart the installation in expert mode.
* If you do not wish to perform a graphical installation, you can start a text mode installation using the following boot command:
boot: linux text
* If the installation program does not properly detect your hardware, you may need to restart the installation in expert mode. Enter expert mode using the following boot command:
boot: linux noprobe
* For text mode installations in expert mode, use:
boot: linux text noprobe


Expert mode disables most hardware probing, and gives you the option of entering options for the drivers loaded during the installation.
The initial boot messages will not contain any references to SCSI or network cards. This is normal; these devices are supported by modules that are loaded during the installation process.

6.2.2.1). Initial Installation Steps
1. Put your Linux installation CD-ROM into the drive and boot from the CD.
2. Language Selection: Using your mouse, select the language you would prefer to use for the installation (for example, English). Once you select the appropriate language, click Next to continue.
3. Keyboard Configuration: Using your mouse, select the correct layout type (for example, U.S. English) for the keyboard you would prefer to use for the installation and as the system default. Once you have made your selection, click Next to continue.
4. Mouse Configuration: Choose the correct mouse type for your system. If you cannot find an exact match, choose a mouse type that you are sure is compatible with your system. The Emulate 3 buttons checkbox allows you to use a two-button mouse as if it had three buttons. In general, the graphical interface (the X Window System) is easier to use with a three-button mouse. If you select this checkbox, you can emulate a third, "middle" button by pressing both mouse buttons simultaneously.
5. Choosing to Upgrade or Install: To perform a new installation of Red Hat Linux on your system, select Perform a new Red Hat Linux installation and click Next.
6. Installation Type: Choose the type of installation you would like to perform. Red Hat Linux allows you to choose the installation type that best fits your needs. Your options are Personal Desktop, Workstation, Server, Custom, and Upgrade.


6.2.3. Disk Partitioning Setup
On this screen, you can choose to perform automatic partitioning, or manual partitioning using Disk Druid.
* Automatic partitioning allows you to perform an installation without having to partition your drive(s) yourself. If you do not feel comfortable with partitioning your system, it is recommended that you do not choose to partition manually and instead let the installation program partition for you.
* To partition manually, choose the Disk Druid partitioning tool.

6.2.3.1). Automatic Partitioning
Automatic partitioning allows you to have some control concerning what data is removed (if any) from your system. Your options are:
* Remove all Linux partitions on this system — select this option to remove only Linux partitions (partitions created from a previous Linux installation). This will not remove other partitions you may have on your hard drive(s) (such as VFAT or FAT32 partitions).
* Remove all partitions on this system — select this option to remove all partitions on your hard drive(s) (this includes partitions created by other operating systems such as Windows 9x/NT/2000/ME/XP or NTFS partitions).
* Keep all partitions and use existing free space — select this option to retain your current data and partitions, assuming you have enough free space available on your hard drive(s).

6.2.3.2). Manual Partitioning Using Disk Druid
The partitioning tool used by the installation program is Disk Druid. Above the display, you will see the drive name (such as /dev/hda), the geom (which shows the hard disk's geometry and consists of three numbers representing the number of cylinders, heads, and sectors as reported by the hard disk), and the model of the hard drive as detected by the installation program.
Disk Druid's Buttons
* New: Used to request a new partition. When selected, a dialog box appears containing fields (such as mount point and size) that must be filled in.


* Edit: Used to modify attributes of the partition currently selected in the Partitions section. Selecting Edit opens a dialog box. Some or all of the fields can be edited, depending on whether the partition information has already been written to disk.
* You can also edit free space as represented in the graphical display to create a new partition within that space. Either highlight the free space and then select the Edit button, or double-click on the free space to edit it.
* Delete: Used to remove the partition currently highlighted in the Current Disk Partitions section. You will be asked to confirm the deletion of any partition.
* Reset: Used to restore Disk Druid to its original state. All changes made will be lost if you Reset the partitions.
* RAID: Used to provide redundancy to any or all disk partitions. It should only be used if you have experience using RAID. To read more about RAID, refer to the Red Hat Linux Customization Guide. To make a RAID device, you must first create software RAID partitions. Once you have created two or more software RAID partitions, select RAID to join the software RAID partitions into a RAID device.
* LVM: Allows you to create an LVM logical volume. The role of LVM (Logical Volume Manager) is to present a simple logical view of underlying physical storage space, such as a hard drive(s). LVM manages individual physical disks — or, to be more precise, the individual partitions present on them. To create an LVM logical volume, you must first create partitions of type physical volume (LVM). Once you have created one or more physical volume (LVM) partitions, select LVM to create an LVM logical volume.

Partition Fields
Above the partition hierarchy are labels which present information about the partitions you are creating. The labels are defined as follows:
* Device: This field displays the partition's device name.
* Mount Point/RAID/Volume: A mount point is the location within the directory hierarchy at which a volume exists; the volume is "mounted" at this location. This field indicates where the partition will be mounted. If a partition exists but its mount point is not set, then you need to define its mount point. Double-click on the partition or click the Edit button.


* Type: This field shows the partition's type (for example, ext2, ext3, or vfat).
* Format: This field shows if the partition being created will be formatted.
* Size (MB): This field shows the partition's size (in MB).
* Start: This field shows the cylinder on your hard drive where the partition begins.
* End: This field shows the cylinder on your hard drive where the partition ends.

6.2.3.3). Recommended Partitioning Scheme
Unless you have a reason for doing otherwise, you can use the following partitioning scheme:
* A swap partition (at least 32MB) — swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. The size of your swap partition should be equal to twice your computer's RAM, or 32MB, whichever amount is larger.
* A /boot partition (100MB) — the partition mounted on /boot contains the operating system kernel (which allows your system to boot Red Hat Linux), along with files used during the bootstrap process. For most users, a 100MB boot partition is sufficient.

6.2.3.4). Adding Partitions
The following fields need to be taken care of while creating new partitions.
* Mount Point: Enter the partition's mount point. For example, if this partition should be the root partition, enter /; enter /boot for the /boot partition, and so on. You can also use the pull-down menu to choose the correct mount point for your partition.


* File System Type (ext2, ext3, or swap): Using the pull-down menu, select the appropriate file system type for this partition.
* Allowable Drives: This field contains a list of the hard disks installed on your system. If a hard disk's box is highlighted, then a desired partition can be created on that hard disk.
* Size (Megs): Enter the size (in megabytes) of the partition. Note, this field starts with 100 MB; unless changed, only a 100 MB partition will be created.
* Additional Size Options: Choose whether to keep this partition at a fixed size, to allow it to "grow" (fill up the available hard drive space) to a certain point, or to allow it to grow to fill any remaining hard drive space available.
* If you choose Fill all space up to (MB), you must give size constraints in the field to the right of this option. This allows you to keep a certain amount of space free on your hard drive for future use.
* Force to be a primary partition: Select whether the partition you are creating should be one of the first four partitions on the hard drive. If unselected, the partition created will be a logical partition.
* Check for bad blocks: Checking for bad blocks can help prevent data loss by locating the bad blocks on a drive and making a list of them to prevent using them in the future.
* Selecting Check for bad blocks may dramatically increase your total installation time.
* Ok: Select Ok once you are satisfied with the settings and wish to create the partition.


* Cancel: Select Cancel if you do not want to create the partition.

6.2.4. Boot Loader Configuration
* A boot loader is the first software program that runs when a computer starts.
* It is responsible for loading and transferring control to the operating system kernel software. The kernel, in turn, initializes the rest of the operating system. The installation program provides two boot loaders for you to choose from, GRUB and LILO.
* GRUB (Grand Unified Bootloader), which is installed by default, is a very powerful boot loader. GRUB can load a variety of free operating systems, as well as proprietary operating systems with chain-loading (the mechanism for loading unsupported operating systems, such as DOS or Windows, by loading another boot loader).
* LILO (Linux Loader) is a versatile boot loader for Linux. It does not depend on a specific file system, can boot Linux kernel images from floppy diskettes and hard disks, and can even boot other operating systems.
* If you do not want to install GRUB as your boot loader, click Change boot loader. You can then choose to install LILO or choose not to install a boot loader at all.
* If you already have a boot loader that can boot Linux and do not want to overwrite your current boot loader, or if you plan to boot the system using boot diskettes, choose "Do not install a boot loader" by clicking on the Change boot loader button.


* Boot loader Label: Every bootable partition is listed, including partitions used by other operating systems. The partition holding the system's root file system will have a Label of Red Hat Linux (for GRUB) or linux (for LILO). If you would like to add or change the boot label for other partitions that have been detected by the installation program, click once on the partition to select it. Once selected, you can change the boot label by clicking the Edit button.
* Default Boot Partition: Select Default beside the preferred boot partition to choose your default bootable OS. You will not be able to move forward in the installation unless you choose a default boot image.
* Boot Loader Password: If you choose to use a boot loader password to enhance your system security, be sure to select the checkbox labeled Use a boot loader password. Once selected, enter a password and confirm it.

6.2.4.1). Advanced Boot Loader Configuration
Now that you have chosen which boot loader to install, you can also determine where you want the boot loader to be installed. You may install the boot loader in one of two places:
* The master boot record (MBR)
o This is the recommended place to install a boot loader. The MBR is a special area on your hard drive that is automatically loaded by your computer's BIOS, and is the earliest point at which the boot loader can take control of the boot process.
o If you install it in the MBR, when your machine boots, GRUB (or LILO) will present a boot prompt. You can then boot Red Hat Linux or any other operating system that you have configured the boot loader to boot.
* The first sector of your boot partition


o This is recommended if you are already using another boot loader on your system. In this case, your other boot loader will take control first.
o You can then configure that boot loader to start GRUB (or LILO), which will then boot Red Hat Linux.
* If your system will use only Red Hat Linux, you should choose the MBR. For systems with Windows 95/98, you should also install the boot loader to the MBR so that it can boot both operating systems.
* The Force LBA32 (not normally required) option allows you to exceed the 1024 cylinder limit for the /boot partition. If you have a system which supports the LBA32 extension for booting operating systems above the 1024 cylinder limit, and you want to place your /boot partition above cylinder 1024, you should select this option.
* If you wish to add default options to the boot command, enter them into the Kernel parameters field. Any options you enter will be passed to the Linux kernel every time it boots.

6.2.5. Network Configuration
The installation program will automatically detect any network devices you have and display them in the Network Devices list.
* Once you have selected a network device, click Edit. From the Edit Interface pop-up screen, you can choose to configure the IP address and Netmask of the device and you can choose to activate the device at boot time. If you select Activate on boot, your network interface will be started when you boot.
* If you have a hostname (fully qualified domain name) for the network device, you can choose to have DHCP (Dynamic Host Configuration Protocol) automatically detect it, or you can manually enter the hostname in the field provided.


* Finally, if you entered the IP and Netmask information manually, you may also enter the Gateway address and the Primary, Secondary, and Tertiary DNS addresses.

6.2.6. Firewall Configuration
Red Hat Linux offers firewall protection for enhanced system security. A firewall exists between your computer and the network, and determines which resources on your computer remote users on the network can access. A properly configured firewall can greatly increase the security of your system.
* You can choose the appropriate security level for your system as high, medium or no firewall.
* Trusted Devices: Selecting any of the Trusted Devices allows access to your system for all traffic from that device; it is excluded from the firewall rules.
* Allow Incoming: Enabling these options allows the specified services to pass through the firewall. Note, during a workstation installation, the majority of these services are not installed on the system.
* Other ports: You can allow access to ports which are not listed here by listing them in the Other ports field. Use the following format: port:protocol. For example, if you want to allow IMAP access through your firewall, you can specify imap:tcp.

6.2.7. Language Support Selection
You must select a language to use as the default language. The default language will be used on the system once the installation is complete.

6.2.8. Time Zone Configuration


You can set your time zone by selecting your computer's physical location.

6.2.9. Set Root Password
Setting up a root account and password is one of the most important steps during your installation. The installation program will prompt you to set a root password for your system. You must enter a root password. The installation program will not let you proceed to the next section without entering a root password.

6.2.10. Authentication Configuration
You may skip this section if you will not be setting up network passwords.
* Enable MD5 passwords — allows a long password to be used (up to 256 characters), instead of the standard eight characters or less.
* Enable shadow passwords — provides a secure method for retaining passwords. The passwords are stored in /etc/shadow, which can only be read by root.
* Enable NIS — allows you to run a group of computers in the same Network Information Service domain with a common password and group file. You can choose from the following options:
* NIS Domain — allows you to specify the domain or group of computers your system belongs to.
* Use broadcast to find NIS server — allows you to broadcast a message to your local area network to find an available NIS server.
* NIS Server — causes your computer to use a specific NIS server, rather than broadcasting a message to the local area network asking for any available server to host your system.


Note: If you have selected a medium or high firewall to be set up during this installation, network authentication methods (NIS and LDAP) will not work.
* Enable LDAP — tells your computer to use LDAP for some or all authentication. LDAP consolidates certain types of information within your organization.
* Enable Kerberos — Kerberos is a secure system for providing network authentication services.
* Enable SMB Authentication — Sets up PAM to use an SMB server to authenticate users. You must supply two pieces of information here:
o SMB Server — Indicates which SMB server your workstation will connect to for authentication.
o SMB Workgroup — Indicates which workgroup the configured SMB servers are in.

6.2.11. Package Group Selection
Unless you choose a custom installation, the installation program will automatically choose most packages for you.
* To select packages individually, check the "Customize the set of packages to be installed" checkbox.
* You can select package groups like Desktop (X, GNOME, KDE), Editors (emacs, joe), Open Office, and applications like Apache, mysql, ftp etc.
* You can choose to view the individual packages in Tree View or Flat View. Tree View allows you to see the packages grouped by application type. Flat View allows you to see all of the packages in an alphabetical listing on the right of the screen.


* Unresolved Dependencies: If any package requires another package which you have not selected to install, the program presents a list of these unresolved dependencies and gives you the opportunity to resolve them. Under the list of missing packages, you can enable the option to Install packages to satisfy dependencies.
* You should now see a screen preparing you for the installation of Red Hat Linux, and the installation will continue to install the packages selected.

6.2.12. Boot Diskette Creation
To create a boot diskette, insert a blank, formatted diskette into your diskette drive and click Next. If you do not want to create a boot diskette, make sure to select the appropriate option before you click Next.

6.2.13. Hardware Configuration
* The installation program will now present a list of video cards for you to choose from. If you decided to install the X Window System packages, you now have the opportunity to configure an X server for your system.
* You can also select Skip X Configuration if you would rather configure X after the installation or not at all.
X Configuration — Monitor and Customization
* The installation program will present you with a list of monitors to select from. From this list, you can either use the monitor that is automatically detected for you, or choose another monitor.
* Choose the correct color depth and resolution for your X configuration. Also choose the login type as graphical or text. Personal desktop and workstation installations will automatically boot into a graphical environment.

6.2.14. Installation Complete


Congratulations! Your Red Hat Linux 9 installation is now complete! The installation program will prompt you to prepare your system for reboot. Remember to remove any installation media (diskette in the diskette drive or CD in the CD-ROM drive) if they are not ejected automatically upon reboot.
The first time you start your Red Hat Linux machine, you will be presented with the Setup Agent, which guides you through the Red Hat Linux configuration. Using this tool, you can set your system time and date, install software, register your machine with Red Hat Network, and more. The Setup Agent lets you configure your environment at the beginning, so that you can get started using your Red Hat Linux system quickly.

6.3. System Administration Commands

6.3.1. Process Management
* Linux is a multiprocessing operating system. Each process is a separate task with its own rights and responsibilities. If one process crashes, it will not cause another process in the system to crash.
* Each individual process runs in its own virtual address space and is not capable of interacting with another process except through secure, kernel-managed mechanisms.
* During the lifetime of a process it will use many system resources. It will use the CPUs in the system to run its instructions and the system's physical memory to hold it and its data.
* Linux must keep track of the process itself and of the system resources that it has so that it can manage it and the other processes in the system fairly.
* The most precious resource in the system is the CPU; usually there is only one. Linux is a multiprocessing operating system whose objective is to have a process running on each CPU in the system at all times, to maximize CPU utilization.


* Multiprocessing is a simple idea: a process is executed until it must wait, usually for some system resource; when it has this resource, it may run again. In a uniprocessing system, for example DOS, the CPU would simply sit idle and the waiting time would be wasted. In a multiprocessing system many processes are kept in memory at the same time.
* Whenever a process has to wait, the operating system takes the CPU away from that process and gives it to another, more deserving process. It is the scheduler which chooses the most appropriate process to run next, and Linux uses a number of scheduling strategies to ensure fairness.
* As well as the normal type of process, Linux supports real time processes. These processes have to react very quickly to external events (hence the term "real time") and they are treated differently from normal user processes by the scheduler.

6.3.1.1). Process task_struct data structure
* Each process is represented by a task_struct data structure (task and process are terms that Linux uses interchangeably). The task vector is an array of pointers to every task_struct data structure in the system.
* This means that the maximum number of processes in the system is limited by the size of the task vector; by default it has 512 entries.
* As processes are created, a new task_struct is allocated from system memory and added into the task vector. To make it easy to find, the current, running, process is pointed to by the current pointer.
* Although the task_struct data structure is quite large and complex, its fields can be divided into a number of functional areas:

1. Process States


As a process executes, it changes state according to its circumstances. Linux processes have the following states:
1. Runnable (process state code: R): The process is either running (it is the current process in the system) or it is ready to run (it is waiting to be assigned to one of the system's CPUs).
2. Waiting/Sleeping (process state code: D/S): The process is waiting for an event or for a resource. Linux differentiates between two types of waiting process: interruptible and uninterruptible.
* Interruptible waiting processes can be interrupted by signals (S).
* Uninterruptible waiting processes are waiting directly on hardware conditions and cannot be interrupted under any circumstances (D).
3. Stopped (T): The process has been stopped, usually by receiving a signal. A process that is being debugged can be in a stopped state.
4. Zombie/Defunct (Z): This is a halted process which, for some reason, still has a task_struct data structure in the task vector. It is what it sounds like, a dead process.

2. Scheduling Information
The scheduler needs this information in order to fairly decide which process in the system most deserves to run.
* Processes are always making system calls and so may often need to wait. Even so, if a process executes until it waits it still might use a disproportionate amount of CPU time, and so Linux uses pre-emptive scheduling.
* In this scheme, each process is allowed to run for a small amount of time, 200ms, and when this time has expired another process is selected to run and the original process is made to wait for a little while until it can run again. This small amount of time is known as a time-slice.


* It is the scheduler which must select the most deserving process to run out of all of the runnable processes in the system.
* Linux uses a reasonably simple priority-based scheduling algorithm to choose between the current processes in the system.
* When it has chosen a new process to run, it saves the state of the current process: the processor-specific registers and other context are saved in the process's task_struct data structure.
* For the scheduler to fairly allocate CPU time between the runnable processes in the system, it keeps information in the task_struct for each process.
* priority: This is the priority that the scheduler will give to this process. It is also the amount of time (in jiffies) that this process will run for when it is allowed to run. You can alter the priority of a process using system calls and the renice command.
* In an SMP (Symmetric Multi-Processing) Linux system, the kernel is capable of evenly balancing work between the many CPUs in the system. Nowhere is this balancing of work more apparent than in the scheduler.
* In an SMP system each process's task_struct contains the number of the processor that it is currently running on (processor) and the number of the last processor that it ran on (last_processor). There is no reason why a process should not run on a different CPU each time it is selected to run, but Linux can restrict a process to one or more processors in the system using the processor_mask.

3. Identifiers


* Every process in the system has a process identifier.
* Each process also has user and group identifiers; these are used to control this process's access to the files and devices in the system.

4. Inter-Process Communication
* Linux supports the IPC mechanisms of signals, pipes and semaphores, and also the System V IPC mechanisms of shared memory, semaphores and message queues.
* Signals are one of the oldest inter-process communication methods and are used to signal asynchronous events to one or more processes. A signal could be generated by a keyboard interrupt or an error condition such as the process attempting to access a non-existent location in its virtual memory. Signals are also used by the shells to signal job control commands to their child processes.
* Refer to the URL below for more details on Inter-Process Communication methods:
http://www.science.unitn.it/~fiorella/guidelinux/tlk/node52.html

5. Links
* In a Linux system no process is independent of any other process. Every process in the system, except the initial process, has a parent process.
* You can see the family relationship between the running processes in a Linux system using the pstree command:


init(1)-+-crond(98)
        |-emacs(387)
        |-gpm(146)
        |-inetd(110)
        |-kerneld(18)
        |-kflushd(2)
        |-klogd(87)
        |-kswapd(3)
        |-login(160)---bash(192)---emacs(225)
        |-lpd(121)
        |-mingetty(161)
        |-mingetty(162)
        |-mingetty(163)
        |-mingetty(164)
        |-login(403)---bash(404)---pstree(594)
        |-sendmail(134)
        |-syslogd(78)
        `-update(166)

6. Times and Timers
* The kernel keeps track of a process's creation time as well as the CPU time that it consumes during its lifetime.
* Each clock tick, the kernel updates the amount of time in jiffies that the current process has spent in system and in user mode.
* Linux also supports process-specific interval timers; processes can use system calls to set up timers to send signals to themselves when the timers expire. These timers can be single-shot or periodic timers.

7. File system


* Processes can open and close files as they wish, and the process's task_struct contains pointers to descriptors for each open file as well as pointers to two VFS inodes.
* Each VFS inode uniquely describes a file or directory within a file system and also provides a uniform interface to the underlying file systems.
* The first is to the root of the process (its home directory) and the second is to its current or pwd directory. These two VFS inodes have their count fields incremented to show that one or more processes are referencing them.
* This is why you cannot delete the directory that a process has set as its pwd directory, or for that matter one of its sub-directories.

8. Virtual memory
* Most processes have some virtual memory (kernel threads and daemons do not) and the Linux kernel must track how that virtual memory is mapped onto the system's physical memory.

9. Processor Specific Context and Context Switching
* A process could be thought of as the sum total of the system's current state.
* Whenever a process is running it is using the processor's registers, stacks and so on. This is the process's context and, when a process is suspended, all of that CPU-specific context must be saved in the task_struct for the process. When a process is restarted by the scheduler its context is restored from here.


* Context switching is the series of procedures used to switch control of the CPU from the current process to another process. During a context switch, the operating system saves the context of the current process and restores the context of the next process, which is chosen by the scheduler based on the information stored in the task_struct for that process.

Process monitoring is an important function of a Linux system administrator. To that end, ps and top are two of the most useful commands.

6.3.1.2). ps
The ps command provides a snapshot of the currently running processes. The simplest form of ps is:

$ ps
  PID TTY          TIME CMD
 3884 pts/1    00:00:00 bash
 3955 pts/2    00:00:00 more
 3956 pts/5    00:00:05 sqlplus

* The PID is the identification number for the process.
* TTY is the terminal console to which the process belongs.
* The TIME column is the total CPU time used by the process.
* The CMD column lists the command line being executed.

$ ps -ef | grep oracle
UID        PID  PPID  C STIME TTY      TIME     CMD
oracle    1633     1  0 13:58 ?        00:00:00 ora_pmon_ora1
oracle    1635     1  0 13:58 ?        00:00:00 ora_dbw0_ora1
oracle    1637     1  0 13:58 ?        00:00:01 ora_lgwr_ora1
oracle    1639     1  0 13:58 ?        00:00:02 ora_ckpt_ora1
oracle    1641     1  0 13:58 ?        00:00:02 ora_smon_ora1


* Although uid usually refers to a numeric identification, the username is specified under the first column, labeled UID.
* PPID is the identification number for the parent process. For the Oracle processes, this is 1, which is the id of the init process, the parent process of all processes, because Oracle is set up on this system to be started as a part of the login process.
* The column labeled C is a factor used by the CPU to compute execution priority.
* STIME refers to the start time of the process.
* The question marks indicate that these processes don't belong to any TTY because they were started by the system.

Here is another example of the ps command with some different options. Notice that many of the columns are the same as they were when ps was executed with -ef:

$ ps aux
USER      PID %CPU %MEM  VSZ  RSS TTY   STAT START TIME COMMAND
carma    4024  0.0  0.2 2240 1116 pts/1 S    20:59 0:00 su carma
carma    4025  0.0  0.3 2856 1668 pts/1 S    20:59 0:00 bash
carma    4051  0.0  0.2 2488 1504 pts/1 R    21:01 0:00 ps aux
carma    4052  0.0  0.1 1636  600 pts/1 S    21:01 0:00 grep carma

* The above ps option gives the username under which the process is running. It also gives the current status (STAT) of the process.
* Regular users can see all system processes, but they can only kill processes that they own. You can also ask ps for exactly the columns you want, as shown in the example below.
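As a side note (a minimal sketch, not one of this manual's own examples), ps can print just the columns you ask for with the -eo option; pid, ppid, stat, %cpu and comm are standard procps format specifiers, and the output shown here is purely illustrative:

$ ps -eo pid,ppid,stat,%cpu,comm
  PID  PPID STAT %CPU COMMAND
    1     0 S     0.0 init
 3884  3880 S     0.0 bash
 3956  3884 R     1.2 sqlplus

This is a convenient way to combine the process state codes described earlier (R, S, D, T, Z) with exactly the other fields you are interested in.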


To see if a particular process is running or not, you can use:
$ ps -aux | grep mysql

6.3.1.3). top
ps only gives you a snapshot of the current processes. For an ongoing look at the most active processes, use top.
* top provides process information in real time. It also has an interactive state that allows users to enter commands, such as n followed by a number such as 5 or 10. The result will be to instruct top to display the 5 or 10 most active processes. top runs until you press "q" to quit top.
* It can sort the tasks by CPU usage, memory usage and runtime.

$ top -c        ------- will display the processes sorted by the order of their CPU usage.

Here is a partial display of top:

$ top -c
15:10:31 up 2 days, 2:34, 5 users, load average: 0.00, 0.03, 0.15
Tasks: 78 total, 2 running, 76 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.7% us, 0.3% sy, 0.0% ni, 99.0% id, 0.0% wa, 0.0% hi, 0.0% si
Mem:   248980k total, 244496k used,   4484k free,  2196k buffers
Swap:  522072k total, 216056k used, 306016k free, 61872k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
 2014 root  15   0  226m  26m 143m S  0.3 11.0 176:49.13 X
 5030 root  15   0  190m  78m  35m S  0.3 32.5  25:19.13 mozilla-bin
 9499 carma 16   0  2612  904 1620 R  0.3  0.4   0:00.02 top
    1 root  16   0  2096  336 1316 S  0.0  0.1   0:04.86 init
    2 root  34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0
    3 root   5 -10     0    0    0 S  0.0  0.0   0:00.23 events/0
    4 root   5 -10     0    0    0 S  0.0  0.0   0:00.00 kblockd/0
    6 root   5 -10     0    0    0 S  0.0  0.0   0:00.01 khelper
    5 root  15   0     0    0    0 S  0.0  0.0   0:00.00 khubd
    7 root  15   0     0    0    0 S  0.0  0.0   0:00.08 pdflush
    8 root  15   0     0    0    0 S  0.0  0.0   0:00.16 pdflush
   10 root  14 -10     0    0    0 S  0.0  0.0   0:00.00 aio/0
    9 root  15   0     0    0    0 S  0.0  0.0   0:01.77 kswapd


* The display is updated every 5 seconds by default, but you can change that with the d command-line option.

Field Descriptions
* "uptime": The first line displays the time the system has been up, and the three load averages for the system.
* The load averages are the average number of processes ready to run during the last 1, 5 and 15 minutes. This line is just like the output of the uptime command.
* Tasks: the total number of processes running at the time of the last update. This is also broken down into the number of tasks which are running, sleeping, stopped, or undead. The processes and states display may be toggled by the 't' interactive command.
* Cpu(s): "CPU states" shows the percentage of CPU time in user mode, system mode, niced tasks, iowait and idle. (Niced tasks are only those whose nice value is positive.) Time spent in niced tasks will also be counted in system and user time, so the total will be more than 100.
* Mem: Statistics on memory usage, including total available memory, free memory, used memory, shared memory, and memory used for buffers. The display of memory information may be toggled by the m interactive command.
* Swap: Statistics on swap space, including total swap space, available swap space, and used swap space. This and Mem are just like the output of the free command.
* PID: The process ID of each task.


* PPID: The parent process ID of each task.
* UID: The user ID of the task's owner.
* USER: The user name of the task's owner.
* PRI: The priority of the task.
* NI: The nice value of the task, which decides the priority of the task with the scheduler. Negative nice values are higher priority.
* %CPU: The task's share of the CPU time since the last screen update, expressed as a percentage of total CPU time per processor.
* %MEM: The task's share of the physical memory.
* COMMAND: The task's command name, which will be truncated if it is too long to be displayed on one line. Tasks in memory will have a full command line, but swapped-out tasks will only have the name of the program in parentheses (for example, "(getty)").

6.3.1.4). pstree
pstree displays a tree of processes. The tree is rooted at either pid or init if pid is omitted. If a user name is specified, all process trees rooted at processes owned by that user are shown.
* pstree visually merges identical branches by putting them in square brackets and prefixing them with the repetition count, e.g.

init-+-getty
     |-getty
     |-getty
     `-getty

becomes

init---4*[getty]

$ pstree


* Some of the options you can use with it are -n (sort processes by PID), -p (show PIDs), etc.

6.3.1.5). kill
The command kill sends the specified signal to the specified process or process group.
* If no signal is specified, the TERM signal is sent. The TERM signal will kill processes which do not catch this signal.
* For other processes, it may be necessary to use the KILL (9) signal, since this signal cannot be caught.

$ kill [-s signal] PID
$ kill -9 PID

* -s signal: Specify the signal to send. The signal may be given as a signal name or number.
* You can get a list of all the system's signals using the kill -l command:

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGABRT      7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     17) SIGCHLD
18) SIGCONT     19) SIGSTOP     20) SIGTSTP     21) SIGTTIN
22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO
30) SIGPWR      31) SIGSYS      33) SIGRTMIN    34) SIGRTMIN+1
35) SIGRTMIN+2  36) SIGRTMIN+3  37) SIGRTMIN+4  38) SIGRTMIN+5
39) SIGRTMIN+6  40) SIGRTMIN+7  41) SIGRTMIN+8  42) SIGRTMIN+9
43) SIGRTMIN+10 44) SIGRTMIN+11 45) SIGRTMIN+12 46) SIGRTMIN+13
47) SIGRTMIN+14 48) SIGRTMIN+15 49) SIGRTMAX-15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7  58) SIGRTMAX-6
59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1


* PID can be a process id or a process name, but use the process id itself with the -9 option.
* $ kill 0 : will stop all processes except your shell.

6.3.1.6). killall
killall kills processes by name. killall sends a signal to all processes running any of the specified commands. If no signal name is specified, SIGTERM is sent.
* A killall process never kills itself (but may kill other killall processes).
* $ killall -l : will also list all known signal names.
* E.g. '$ killall mysql' will kill all mysql processes.

6.3.1.7). fuser
fuser displays the PIDs of processes using the specified files or file systems. In the default display mode, each file name is followed by a letter denoting the type of access.

$ fuser -a /var/log/messages

This will output the PID that is accessing the file at present. By default, only files that are accessed by at least one process are shown.
* The 'k' option can be used to kill processes accessing a file system:
$ fuser -km /home
* In the default display mode, each file name is followed by a letter denoting the type of access:


$ fuser -m /var/log/messages

c : current directory.
e : executable being run.
f : open file. f is omitted in default display mode.
r : root directory.
m : mmap'ed file or shared library.

6.3.1.8). pidof
This command finds the process ID of a running program.

$ pidof httpd

This will list all the process ids under which Apache runs.

6.3.1.9). skill
skill is similar to kill. The default signal for skill is TERM. Use -l or -L to list available signals. Particularly useful signals include HUP, INT, KILL, STOP, CONT, and 0. Alternate signals may be specified in three ways: -9, -SIGKILL, -KILL.
* $ skill [signal to send] [options] process selection criteria
* PROCESS SELECTION OPTIONS: Selection criteria can be: terminal, user, pid, command. The options below may be used to ensure correct interpretation.
-t The next argument is a terminal (tty or pty).
-u The next argument is a username.
-p The next argument is a process ID number.
-c The next argument is a command name.

$ skill -KILL pts/*          ========= Kill users on PTY devices
$ skill -STOP user1 user2    ========= Stop 2 users

6.3.1.10). Background Process - &
& at the end of the command makes it run in the background.


$ opera &

6.3.1.11). nice
* The nice command invokes a command with an altered scheduling priority.
* The general syntax is:
$ nice [-increment | -n increment ] command [argument ... ]
* The increment range goes from -20 (highest priority) to 19 (lowest).
* Command is the name of a command that is to be invoked. If no command or arguments are given, `nice' prints the current scheduling priority, which is inherited.
* Argument is any string to be supplied as an argument when invoking a command.

The command line below runs the pico command on myfile.txt with an increment of +13, i.e. the priority or niceness value of the pico command is reduced by 13.
$ nice +13 pico myfile.txt

6.3.1.12). snice
* The snice command is similar to the nice command but the default priority for snice is +4. (snice +4 ...)
* Priority numbers range from +20 (slowest) to -20 (fastest). Negative priority numbers are restricted to administrative users.
$ snice netscape crack +7    ----- Slow down netscape and crack
$ snice -17 root bash        ----- Give priority to root's shell

6.3.1.13). /proc/$PID directory
* /proc is a pseudo-filesystem which is used as an interface to kernel data structures.
* There is a numerical subdirectory for each running process under /proc; the subdirectory is named by the process ID. For example, if the subdirectory is 14534, the directory is /proc/14534.


Some of the pseudo-files and directories contained inside the /proc/$PID directory are detailed below:
o cmdline : This holds the complete command line for the process, unless the whole process has been swapped out, or unless the process is a zombie. In either of these latter cases there is nothing in this file: i.e. a read on this file will return 0 characters.
o cwd : This is a link to the current working directory of the process. To find out the cwd of process 20, for instance, you can do this:
$ cd /proc/20/cwd; /bin/pwd
o environ : This file contains the environment for the process. The entries are separated by null characters, and there may be a null character at the end. Thus, to print out the environment of process 1, you could do:
$ cat /proc/1/environ
o exe : exe is a symbolic link containing the actual path name of the executed command.
o fd : This is a subdirectory containing one entry for each file which the process has open, named by its file descriptor, and which is a symbolic link to the actual file. Thus, 0 is standard input, 1 standard output, 2 standard error, etc.
o stat : Status information about the process. This is used by ps and top.

6.3.2. System Startup and Shutdown

6.3.2.1). The Boot Process
1. The Bootstrap Process – First Stage (BIOS)
* The PC boot process is started on powerup. The processor will start execution of code contained in the Basic Input and Output System (BIOS). The BIOS is a program stored in Read Only Memory (ROM) and is the lowest level interface between the computer and peripherals.
* The BIOS then runs the Power On Self Test (POST) routine to find certain hardware and to test that the hardware is working at a basic level. It compares the hardware settings in the CMOS (Complementary Metal Oxide Semiconductor) to what is physically on the system. It then initializes the hardware devices.


* Once the POST is completed, the hardware jumps to a specific, predefined location in RAM. The instructions located here are relatively simple and basically tell the hardware to go look for a boot device. Depending on how your CMOS is configured, the hardware first checks your floppy and then your hard disk.
* When a boot device is found (let's assume that it's a hard disk), the hardware is told to go to the 0th (first) sector (cylinder 0, head 0, sector 0), then load and execute the instructions there. This is the master boot record, or MBR.
* The BIOS will first load the MBR into memory, which is only 512 bytes in size and points to the boot loader (LILO: Linux boot loader) or GRUB.
* Once the BIOS finds and loads the boot loader program into memory, it yields control of the boot process to it.

2. The Boot Loader – Stage 2
* LILO or GRUB allows the root user to set up the boot process as menu-driven or command-line, and permits the user to choose from amongst several boot options.
* It also allows for a default boot option after a configurable timeout, and current versions are designed to allow booting from broken Level 1 (mirrored) RAID arrays.
* It has the ability to create a highly configurable, "GUI-fied" boot menu, or a simple, text-only, command-line prompt.
* Depending on the kernel boot option chosen or set as default, LILO or GRUB will load that kernel.
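To make the menu-driven setup concrete, here is a minimal sketch of what a GRUB configuration file (/boot/grub/grub.conf on Red Hat systems) might contain. The kernel version, partition names and the Windows entry are illustrative assumptions, not values taken from this manual:

# /boot/grub/grub.conf (illustrative sketch)
default=0
timeout=10

# Default Linux entry; (hd0,0) is the /boot partition in GRUB notation
title Red Hat Linux (2.4.20-8)
        root (hd0,0)
        kernel /vmlinuz-2.4.20-8 ro root=/dev/hda2
        initrd /initrd-2.4.20-8.img

# Chain-loading entry that hands control to another boot loader
title Windows
        rootnoverify (hd0,1)
        chainloader +1

With a file like this, GRUB displays a menu with both entries, boots the first one automatically after 10 seconds, and starts Windows by chain-loading the boot loader found in the first sector of its partition.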


3. Kernel Loading – Stage 3
* When the kernel is loaded, it immediately initializes and configures the computer's memory and configures the various hardware attached to the system, including all processors, I/O subsystems, and storage devices.
* It then looks for the compressed initrd image in a predetermined location in memory, decompresses it, mounts it, and loads all necessary drivers.
* Next, it initializes virtual devices related to the file system, such as LVM or software RAID, before unmounting the initrd disk image and freeing up all the memory the disk image once occupied.
* The kernel then creates a root device, mounts the root partition read-only, and frees any unused memory.
* At this point, the kernel is loaded into memory and operational.

4. Final Stage – Init
* The first thing the kernel does after completing the boot process is to execute the init program.
* The /sbin/init program (also called init) coordinates the rest of the boot process and configures the environment for the user.


* Init is the root/parent of all processes executing on Linux and becomes process number 1.
* When the init command starts, it becomes the parent or grandparent of all of the processes that start up automatically on a Red Hat Linux system.
* Based on the appropriate run-level in the /etc/inittab file, scripts are executed to start various processes to run the system and make it functional.

6.3.2.2). The Init Program
* As seen in the previous section, the kernel will start a program called init or /sbin/init.
* The init process is the last step in the boot procedure and is identified by process id "1".
* The init command then runs the /etc/inittab script.
* The first thing init runs out of the inittab is the script /etc/rc.d/rc.sysinit, which sets the environment path, starts swap, checks the file systems, and takes care of everything the system needs to have done at system initialization.
* Next, init looks through /etc/inittab for the line with initdefault in the third field. The initdefault entry tells the system what run-level to enter initially.
id:5:initdefault:    (5 is the default runlevel)
* Depending on the run level, the init program starts all of the background processes by using scripts from the appropriate rc directory for the runlevel.
o The rc directories are numbered to correspond to the runlevel they represent.
o For instance, /etc/rc.d/rc5.d/ is the directory for runlevel 5.


o The scripts are found in the directory /etc/rc.d/rc#.d/ where the symbol # represents the run level.

# ls /etc/rc.d/rc5.d/
./            K70aep1000     S12syslog      S80antirelayd  S95cpanel
../           K70bcm5820     S17keytable    S80chkservd    S97rhnsd
K05saslauthd  K74nscd        S20random      S80exim        S98portsentry
K20nfs        S05kudzu       S25netfs       S85httpd       S99local@
K24irda       S08iptables    S28autofs      S85postgresql  S99nagios
K25squid      S09isdn        S40proftpd     S90crond
K35winbind    S10network     S55sshd        S90mysql
K45named      S11filelimits  S56rawdevices  S95anacron
K50tux        S11ipaliases   S56xinetd      S95bandmin

* Scripts beginning with S denote startup scripts while scripts beginning with K denote shutdown (kill) scripts.
* Numbers follow these letters to denote the order of execution (lowest to highest).
* Adding a script to the /etc/rc.d/rc#.d/ directory with either an S or K prefix adds the script to the boot or shutdown process (a sketch of such an entry follows below this list).
o Hence these scripts are executed to start all the system services which begin with S for run level 5 in the example above.
* One of the last things the init program executes is the /etc/rc.d/rc.local file. This file is useful for system customization.
* Adding commands to this script is an easy way to perform necessary tasks like starting special services or initializing devices without writing complex initialization scripts in the /etc/rc.d/init.d/ directory and creating symbolic links.
* Init typically will start multiple instances of "getty" which waits for console logins which spawn one's user shell process.
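The S and K entries shown in the listing above are normally symbolic links back to the real service scripts in /etc/rc.d/init.d/. Below is a minimal sketch; the S55sshd link and its listed date are typical Red Hat defaults used only for illustration, and "myservice" is a hypothetical script name, not something defined in this manual:

$ ls -l /etc/rc.d/rc5.d/S55sshd
lrwxrwxrwx  1 root root 14 Jan 10 10:00 /etc/rc.d/rc5.d/S55sshd -> ../init.d/sshd

# ln -s ../init.d/myservice /etc/rc.d/rc5.d/S85myservice   ----- start in runlevel 5
# ln -s ../init.d/myservice /etc/rc.d/rc0.d/K15myservice   ----- kill on shutdown (runlevel 0)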


* Upon system shutdown init controls the sequence and processes for shutdown. The init process is never shut down. It is a user process and not a kernel system process, although it does run as root.

The order in which the init program executes the initialization scripts is below:
1. /etc/inittab
2. /etc/rc.d/rc.sysinit
3. Scripts under /etc/rc.d/rc3.d/ - Note: we are running runlevel 3 here.
4. /etc/rc.d/rc.local

6.3.2.3). Runlevels
Linux utilizes what is called "runlevels". A runlevel is a software configuration of the system that allows only a selected group of processes to exist.
* Init can run the system in one of eight runlevels. These runlevels are 0-6 and S or s. The system runs in only one of these runlevels at a time. Typically these runlevels are used for different purposes.
* Runlevels 0, 1, and 6 are reserved. For Redhat Linux version 6 and above, the runlevels are:


Runlevels    State
0            Shutdown
1            Single User Mode
2            Multi user with no network services activated
3            Default text start. Full multi user. No GUI
4            Reserved for local use. With X-windows and multi user
5            XDM X-windows with network support. Full multi-user
6            Reboot
S or s       Single User/Maintenance mode

The inittab file
The "/etc/inittab" file tells init which runlevel to start the system at and describes the processes to be run at each runlevel.
An entry in the inittab file has the following format:
id:runlevels:action:process
* id - A unique sequence of 1-4 characters which identifies an entry in inittab.
* runlevels - Lists the runlevels for which the specified action should be taken. This field may contain multiple characters for different runlevels, allowing a particular process to run at multiple runlevels. For example, 123 specifies that the process should be started in runlevels 1, 2, and 3.


* process - Specifies the process to be executed.
* action - Describes which action should be taken. Some of the actions are listed below:
o respawn - The process will be restarted whenever it terminates.
o wait - The process will be started once when the specified runlevel is entered and init will wait for its termination.
o boot - The process will be executed during system boot. The runlevels field is ignored.
o off - This does nothing.
o initdefault - Specifies the runlevel which should be entered after system boot. If none exists, init will ask for a runlevel on the console. The process field is ignored.
o sysinit - The process will be executed during system boot. It will be executed before any boot or bootwait entries. The runlevels field is ignored.
o powerwait - The process will be executed when init receives the SIGPWR signal. Init will wait for the process to finish before continuing.
o powerfail - Same as powerwait but init does not wait for the process to complete.
o ctrlaltdel - This process is executed when init receives the SIGINT signal. This means someone on the system console has pressed the "CTRL-ALT-DEL" key combination.

6.3.2.4). System Processes
* The top 6 system processes with PIDs 1-6 are given below.

System Processes:


Process ID   Description
1            Init Process
2            kflushd (bdflush): Started by update - does a more imperfect sync more frequently
3            kupdate: Does a sync every 30 seconds
4            kpiod
5            kswapd
6            mdrecoveryd

* Processes 2, 3, 4, 5 and 6 are kernel daemons. The kernel daemons are started after init, so they get process numbers like normal processes do. But their code and data lives in the kernel's part of the memory. So what are these kernel daemons for?
* Kflushd and Kupdate
o Input and output is done via buffers in memory. This allows things to run faster and the data in the buffer are written to disk in larger, more efficient chunks.


o The daemons kflushd and kupdate handle this work.
o kupdate runs periodically (every 5 seconds) to check whether there are any dirty buffers. If there are, it gets kflushd to flush them to disk.
* Kswapd and Kpiod
o System memory can be better managed by shifting unused parts of running programs out to the swap partition(s) of the hard disk.
o Moving this data in and out of memory as needed is done by kpiod and kswapd.
o Every second or so, kswapd wakes up to check out the memory situation, and if something on the disk is needed in memory, or there is not enough free memory, kpiod is called in.
* Mdrecoveryd
o mdrecoveryd is part of the Multiple Devices package used for software RAID and combining multiple disks into one virtual disk. Basically it is part of the kernel.
o It can be removed from the kernel by deselecting it (CONFIG_BLK_DEV_MD) and recompiling the kernel.
* Some of the other system services are discussed below:


System Service   Description
anacron          Runs jobs which were scheduled for execution while the computer was turned off; catches up with system duties
arpwatch         Keeps track of IP address to MAC address pairings
autofs           Automounts file systems on demand
crond            Job scheduler for periodic tasks
gpm              Allows console terminal cut and paste (non X-window consoles)
httpd            Apache web server
iptables         Firewall rules interface to kernel
keytable         Loads selected keyboard map as set in /etc/sysconfig/keyboard
kudzu            New hardware probe/detection during system boot
lpd              Network printer services
mysqld           Database services
named            Name services (BIND)


network          Activates network services during system boot
nfs              Network file system
syslog           System log file facility
ypbind           NIS file sharing/authentication infrastructure service
ypserv           NIS file sharing/authentication infrastructure service
xfs              X-Windows font server

6.3.2.5). The Linux Login Process
After the system boots, at serial terminals or virtual terminals, the user will see a login prompt similar to:

machinename login:

* This prompt is being generated by a program, usually getty or mingetty, which is regenerated by the init process every time a user ends a session on the console.
* The getty program will call login, and login, if successful, will call the user's shell. The steps of the process are:
o The init process spawns the getty process.

6.3.2.5). The Linux Login Process
After the system boots, at serial terminals or virtual terminals the user will see a login prompt similar to:
machinename login:
* This prompt is generated by a program, usually getty or mingetty, which is regenerated by the init process every time a user ends a session on the console.
* The getty program will call login, and login, if successful, will call the user's shell. The steps of the process are:
  o The init process spawns the getty process.
  o The getty process invokes the login process when the user enters their name, and passes the user name to login.
  o The login process prompts the user for a password and checks it. On success, the user's shell is started; on failure the program displays an error message and exits, and init then respawns getty.
  o The user runs their session and eventually logs out. On logout, the shell program exits and we return to step 1.
  o Note: This is what happens for runlevel 3; runlevel 5 uses some different programs to perform similar functions. These X programs are called X clients.

6.3.2.6). Single-User Mode
* If your root password is not working, you can use single-user mode to reset it.
* If your system boots but does not allow you to log in when it has completed booting, try single-user mode. In single-user mode, your computer boots to runlevel 1. Your local filesystems will be mounted, but your network will not be activated. You will have a usable system maintenance shell.

Booting to single-user mode in GRUB
* If you are using GRUB, use the following steps to boot into single-user mode:
  o If you have a GRUB password configured, type p and enter the password.

  o Select Red Hat Linux with the version of the kernel that you wish to boot and type 'e' for edit. You will be presented with a list of items in the configuration file for the title you just selected.
  o Select the line that starts with kernel and type 'e' to edit the line.
  o Go to the end of the line and type single as a separate word (press the [Spacebar] and then type single). Press [Enter] to exit edit mode.
  o Back at the GRUB screen, type 'b' to boot into single-user mode.

Booting to single-user mode in LILO
* If you are using LILO, specify one of these options at the LILO boot prompt (if you are using the graphical LILO, you must press [Ctrl]-[x] to exit the graphical screen and go to the boot: prompt):
* boot: linux single
* boot: linux emergency
In emergency mode, you are booted into the most minimal environment possible. The root filesystem will be mounted read-only and almost nothing will be set up. The main advantage of emergency mode over linux single is that your init files are not loaded. If init is corrupted or not working, you can still mount filesystems to recover data that could be lost during a re-installation.

6.3.2.7). Shutting Down
To shut down Red Hat Linux, issue the shutdown command. The format of the command is
$ shutdown time warning-message
The time argument is the time to shut down the system (in the format hh:mm), and warning-message is a message displayed on all users' terminals before shutdown.
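For example, to halt the machine at 10:30 pm and broadcast a warning to logged-in users (the message text here is only illustrative):

$ /sbin/shutdown -h 22:30 "System going down for disk maintenance - please save your work"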

Alternately, you can specify the time as "now" to shut down immediately. The -r option may be given to shutdown to reboot the system after shutting down.
/sbin/shutdown -h now
/sbin/shutdown -r now
* You must run shutdown as root. After shutting everything down, the -h option will halt the machine, and the -r option will reboot it.
* Although the reboot and halt commands are now able to invoke shutdown if run while the system is in runlevels 1-5, it is a bad habit to get into, as not all Linux-like operating systems have this feature.
$ reboot
$ halt
* To shut down and reboot the system at 8:00 pm, use the command
$ shutdown -r 20:00

6.3.3. Memory Management and Performance Monitoring

6.3.3.1). Virtual Memory / Swap Space
* Linux supports virtual memory, that is, using a disk as an extension of RAM so that the effective size of usable memory grows correspondingly.
* The kernel will write the contents of a currently unused block of memory to the hard disk so that the memory can be used for another purpose. When the original contents are needed again, they are read back into memory.
* This is all made completely transparent to the user; programs running under Linux only see the larger amount of memory available and don't notice that parts of them reside on the disk from time to time.

The part of the hard disk that is used as virtual memory is called the swap space.
* For this purpose, a swap partition is created on the hard disk.
* You can see the swap space, as well as the memory currently available and in use, with the command 'free':
$ free

6.3.3.2). Swapping In and Swapping Out
* Memory page : One basic concept in the Linux implementation of virtual memory is the page. A page is a 4 kB area of memory and is the basic unit of memory with which both the kernel and the CPU deal. Although both can access individual bytes (or even bits), the amount of memory that is managed is usually in pages.
* When physical memory becomes scarce, the Linux memory management subsystem must attempt to free physical pages. This task falls to the kernel swap daemon (kswapd).
* The kernel swap daemon is a special type of process, a kernel thread. Kernel threads are processes that have no virtual memory; instead they run in kernel mode in the physical address space.
* Swapping in is the process in which a page in virtual memory is brought back into physical memory by the kswapd daemon.
* Swapping out is the process where a page is swapped out of physical memory into the system's swap files, thereby freeing physical memory on the system.

6.3.3.3). Commands which show the current memory usage
* free
$ free
$ free -m

* top
$ top
* Print the output of /proc/meminfo
$ cat /proc/meminfo (detailed output)

6.3.3.4). Creating a swap space

Criteria for a Swap file
* A swap file is an ordinary file; it is in no way special to the kernel. The only things that matter to the kernel are that it has no holes and that it is prepared for use with mkswap. It must reside on a local disk, however; for implementation reasons it can't reside in a filesystem that has been mounted over NFS.
* The bit about holes is important. The swap file reserves the disk space so that the kernel can quickly swap out a page without having to go through all the things that are necessary when allocating a disk sector to a file. The kernel merely uses any sectors that have already been allocated to the file. Because a hole in a file means that there are no disk sectors allocated (for that place in the file), it is not good for the kernel to try to use them.
* One good way to create a swap file without holes is with the 'dd' command:
$ dd if=/dev/zero of=/extra-swap bs=1024 count=1024
  o dd converts and copies a file.
  o bs= gives the block size in bytes and count= the number of blocks to copy, so bs=1024 count=1024 creates a 1 MB file.
  o of= writes to the named file instead of to standard output.

  o if= reads from the named file instead of from standard input; /dev/zero supplies an endless stream of zero bytes.
  o /extra-swap is the name of the swap file being created, and its size is determined by the count= argument.

Swap Partition
A swap partition can be created just like any other partition, but it has to be of type 82 (Linux swap).

Setting up Swap Space
* After you have created a swap file or a swap partition, you need to write a signature to its beginning; this contains some administrative information and is used by the kernel. The command to do this is mkswap, used like this:
$ mkswap /extra-swap 1024
Setting up swapspace, size = 1044480 bytes

6.3.3.5). Using a Swap Space
Note that the swap space which has just been set up is still not in use: it exists, but the kernel does not yet use it to provide virtual memory.
* An initialized swap space is taken into use with 'swapon'. This command tells the kernel that the swap space can be used. The path to the swap space is given as the argument, so to start swapping on a temporary swap file one might use the following command:
$ swapon /extra-swap
* Swap spaces can be used automatically by listing them in the /etc/fstab file (an example is shown below).
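Two details in the example above are worth spelling out. The file created by dd is 1024 x 1024 = 1048576 bytes, while mkswap reports 1044480 usable bytes; the difference corresponds to one 4096-byte page reserved at the start of the swap area for the swap signature. A sketch of the /etc/fstab lines that would activate a swap partition and the swap file automatically at boot (the device name and path simply follow the earlier examples) is:

/dev/hda1     swap   swap   defaults   0 0
/extra-swap   swap   swap   defaults   0 0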

* The startup scripts will run the command swapon -a, which starts swapping on all the swap spaces listed in /etc/fstab. Therefore, the swapon command is usually used only when extra swap is needed.
$ swapon -a
* You can get the swap info using free, 'cat /proc/meminfo' or top.
* A swap space can be removed from use with swapoff.
* All the swap spaces that are used automatically with swapon -a can be removed from use with swapoff -a; it looks at the file /etc/fstab to find what to remove.

6.3.3.6). Disk Buffering / Buffer Cache

Why Disk Buffering?
* Reading from a disk is very slow compared to accessing (real) memory. In addition, it is common to read the same part of a disk several times during relatively short periods of time. For example, one might first read an e-mail message, then read the letter into an editor when replying to it, then make the mail program read it again when copying it to a folder. Or, consider how often the command ls might be run on a system with many users.
* By reading the information from disk only once and then keeping it in memory until no longer needed, one can speed up all but the first read. This is called disk buffering, and the memory used for the purpose is called the buffer cache.
* Because the buffer cache holds data that has not yet been written to disk, you should never turn off the power without using a proper shutdown procedure.
* The cache does not actually buffer files, but blocks, which are the smallest units of disk I/O (under Linux, they are usually 1 kB).
* The sync command flushes the buffer, i.e., forces all unwritten data to be written to disk.

$ sync

Linux Daemon bdflush
* Linux has an additional daemon, bdflush, which does a more imperfect sync more frequently to avoid the sudden freezes due to heavy disk I/O that sync sometimes causes.
* Under Linux, bdflush is started by /sbin/update. There is usually no reason to worry about it, but if bdflush happens to die for some reason, the kernel will warn about this, and you should start it by hand (/sbin/update).

6.3.3.7). Direct Memory Access or DMA
* Direct memory access, or DMA, is the generic term for a transfer protocol where a peripheral device transfers information directly to or from memory, without the system processor being required to perform the transaction.
* Enabling DMA gives significant performance benefits by offloading the system processor.
* Today DMA is the only feasible way to transfer data from the hard drive to memory, as most of today's operating systems use multitasking and can better use the CPU for other tasks.
* To enable DMA, edit /etc/sysconfig/harddisks and uncomment USE_DMA=1. Setting this option will enable DMA on your hard disk.
* Another way to enable DMA is to use the command line tool hdparm:
$ hdparm -d1 /dev/hda : enable DMA
$ hdparm -d0 /dev/hda : disable DMA
* To check if DMA is enabled, use the command line below; it will report whether DMA is set to on or off:
$ hdparm /dev/hda
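hdparm can also serve as a quick before-and-after benchmark when toggling DMA (these are rough timing tests, not precise measurements):

$ hdparm -T /dev/hda : time reads from the kernel's buffer cache
$ hdparm -t /dev/hda : time buffered reads from the disk itself
$ hdparm -Tt /dev/hda : run both tests together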

* hdparm is used to get and set hard drive parameters such as DMA modes, transfer settings and various other settings that can help improve the speed of your hard disks and CD-ROM drives.
* hdparm provides a command line interface to various hard disk ioctls supported by the stock Linux ATA/IDE device driver subsystem. These settings are not enabled by default, so you will probably want to enable them.
* To get more information about your hda hard drive, use the option
$ hdparm -i /dev/hda
A good reference URL: http://www.yolinux.com/TUTORIALS/LinuxTutorialOptimization.html

6.3.3.8). Resource Monitoring Tools

1. free
The free command displays system memory utilization. Here is an example of its output:
$ free
             total       used       free     shared    buffers     cached
Mem:        255508     240268      15240          0       7592      86188
-/+ buffers/cache:      146488     109020
Swap:       530136      26268     503868

To get a continuous output of the free command, you may use
$ watch -n 1 -d free
The -n option controls the delay between updates and -d highlights any changes between updates.
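In the free output shown above, the "-/+ buffers/cache" line simply moves buffer and cache memory from the used column to the free column: 240268 - 7592 - 86188 = 146488 kB is what applications are really using, while 15240 + 7592 + 86188 = 109020 kB could be made available to them, because the kernel shrinks the buffer cache when programs need the memory.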

2. top
While free displays only memory-related information, the top command does a little bit of everything: CPU utilization, process statistics, memory utilization - top does it all.
$ top
$ top -c

3. vmstat
Using this resource monitor, it is possible to get an overview of process, memory, swap, I/O, system, and CPU activity in one line of numbers:
$ vmstat
   procs                  memory         swap          io      system        cpu
 r  b  w   swpd   free    buff   cache   si   so   bi   bo   in    cs   us sy id
 1  0  0      0  524684  155252 338068    0    0    1    6  111   114   10  3 87

The process-related fields are:
* r - The number of runnable processes waiting for access to the CPU
* b - The number of processes in an uninterruptible sleep state
* w - The number of processes swapped out, but runnable

The memory-related fields are:
* swpd - The amount of virtual memory used
* free - The amount of free memory
* buff - The amount of memory used for buffers
* cache - The amount of memory used as page cache

The swap-related fields are:
* si - The amount of memory swapped in from disk
* so - The amount of memory swapped out to disk

The I/O-related fields are:
* bi - Blocks received from a block device (reads)
* bo - Blocks sent to a block device (writes)

The system-related fields are:
* in - The number of interrupts per second
* cs - The number of context switches per second

The CPU-related fields are:
* us - The percentage of the time the CPU ran user-level code
* sy - The percentage of the time the CPU ran system-level code
* id - The percentage of the time the CPU was idle
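A single run of vmstat shows averages since boot. To watch activity as it happens, vmstat accepts a delay and a count; the first line is still the since-boot average and each subsequent line covers one interval:

$ vmstat 5 10 : print a new line of statistics every 5 seconds, 10 times in total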

4. ulimit
* ulimit controls the resources available to a process started by the shell, on systems that allow such control by the kernel.
* To improve performance, we can safely set the limit of processes for the super-user root to be unlimited.
* All processes started from the shell (bash in many cases) will have the same resource limits.
* The command "ulimit -a" reports the current limits set for the various parameters.
$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) 4
max memory size       (kbytes, -m) unlimited
open files                    (-n) 1024
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 10240
cpu time             (seconds, -t) unlimited
max user processes            (-u) 7168
virtual memory        (kbytes, -v) unlimited

* The options available with ulimit are given below:
-a  All current limits are reported.
-c  The maximum size of core files created.
-d  The maximum size of a process's data segment.
-f  The maximum size of files created by the shell.
-H  Change and report the hard limit associated with a resource.
-l  The maximum size that may be locked into memory.
-m  The maximum resident set size.

-n  The maximum number of open file descriptors.
-p  The pipe buffer size.
-s  The maximum stack size.
-S  Change and report the soft limit associated with a resource.
-t  The maximum amount of CPU time, in seconds.
-u  The maximum number of processes available to a single user.
-v  The maximum amount of virtual memory available to the process.

* To increase the limit on the maximum number of open file descriptors to 2048 for the root account, use the command line below from the root shell:
$ ulimit -n 2048
* To increase the maximum number of processes available to the root user to unlimited, use the command line below:
$ ulimit -u unlimited

6.3.4. Disk Management Tools

6.3.4.1). Listing a Disk's Free Space
* To see how much free space is left on a disk, use df. Without any options, df outputs a list of all mounted filesystems.
* Six columns are output, displaying information about each disk: the name of its device file in '/dev'; the total number of 1024-byte blocks on the filesystem; the number of blocks in use; the number of blocks available; the percentage of the device used; and the name of the directory tree the device is mounted on.
$ df
Filesystem   1024-blocks     Used  Available  Capacity  Mounted on
/dev/hda1         195167    43405     141684       23%  /
/dev/hda2        2783807   688916    1950949       26%  /usr
/dev/hdb1        2039559  1675652     258472       87%  /home/carma

* The '-h' option displays sizes in human-readable format, e.g. in KB, MB etc.
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2              37G   12G   23G  34% /
/dev/hda1              99M   18M   77M  19% /boot
/usr/tmpDSK           243M  4.1M  226M   2% /tmp

6.3.4.2). Listing a File's Disk Usage
Use du to list the amount of space on disk used by files. To specify a particular file name or directory tree, give it as an argument. With no arguments, du works on the current directory.
$ du
$ du -h /usr
$ du -h --max-depth=1 : prints the total disk space used by sub-directories just one level down the directory structure.
$ du -sh : calculates the total file space usage for a given directory.

6.3.4.3). Partitioning a Hard Drive
'fdisk' is the partition table manipulator for Linux and is a menu-driven program for the creation and manipulation of partition tables. It even understands DOS-type partition tables.

Creating Partitions using 'fdisk'
You may use fdisk to partition /dev/hdb using the steps given below:
$ fdisk /dev/hdb
Command (m for help): m (Enter the letter "m" to get a list of commands)

Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n (To add a new partition)
Command action
   e   extended
   p   primary partition (1-4)
Partition number (1-4): 1
First cylinder (1-2654, default 1): 1
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2654, default 2654):
Using default value 2654

Command (m for help): p

Disk /dev/hdb: 240 heads, 63 sectors, 2654 cylinders
Units = cylinders of 15120 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdb1             1      2654  20064208+   5  Extended

Command (m for help): w (Write and save the partition table)

Other options with fdisk
* List the current partition table:
$ fdisk -l
* Delete a partition: run fdisk, choose the 'd' option, and give the number of the partition to be deleted.
$ fdisk /dev/hda
* The sfdisk and cfdisk commands also perform the same task as fdisk.
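As a further illustration (the device and partition number here are only an assumption for the example), the 't' command inside fdisk is what marks a partition as type 82 (Linux swap), which a partition needs before it can be used as swap space:

$ fdisk /dev/hdb
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 82
Command (m for help): w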

6.3.5. File System Management

6.3.5.1). Creating a filesystem
mkfs is used to create a Linux filesystem on a device. The exit code returned by mkfs is 0 on success and 1 on failure. It can also be told to check for bad blocks before building the file system.
$ mkfs -t ext3 /dev/<drive>
* There are also some related commands that can be used with mkfs. Examples of mkfs commands are:

FileSystem        Command
EXT2 FS           mkfs.ext2, mke2fs
EXT3 FS           mkfs.ext3
Minix FS          mkfs.minix
DOS (FAT) FS      mkfs.msdos, mkdosfs
Virtual FAT FS    mkfs.vfat
XFS               mkfs.xfs

* mkfs.ext2 and mke2fs make an ext2-type file system.
$ mkfs.ext2 /dev/hda1
$ mkfs -t ext3 /dev/hda1

6.3.5.2). Mounting/Unmounting File Systems, fstab & mtab

Viewing the currently mounted file systems
* The command 'mount' displays all mounted devices, their mount point, filesystem type, and access mode.
$ mount
* cat /proc/mounts shows all mounted filesystems currently in use.
$ cat /proc/mounts
* cat /proc/filesystems displays all filesystem types supported by the running kernel.
$ cat /proc/filesystems
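To illustrate the first of these, each line of output from the mount command has the form "device on mount-point type filesystem (options)"; for example (the devices and mount points here are illustrative):

/dev/hda2 on / type ext3 (rw)
/dev/hdb1 on /home type ext3 (rw)
none on /proc type proc (rw)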

Mounting File Systems
* On Linux systems, disks are used by mounting them to a directory, which makes the contents of the disk available at that directory, the mount point.
* Disks can be mounted on any directory on the system, and any divisions between disks are transparent -- so a system which has, aside from the root filesystem disk mounted on '/', separate physical partitions for the '/home', '/usr', and '/usr/local' directory trees will look and feel no different from a system that has only one physical partition.
* The mount command is used to mount a file system on a partition. The syntax for it is given below:
$ mount -t ext3 /dev/hdb1 /home2
You need to make sure that you have first created the mount point. For example, in the command above, where /dev/hdb1 is being mounted on /home2, you have to create the directory /home2 first.
* To mount a CD-ROM or floppy, you may use the syntax below:
$ mount /mnt/cdrom
$ mount /mnt/floppy
* $ mount -a : causes all file systems mentioned in /etc/fstab to be mounted as indicated, except for those whose line contains the noauto keyword.

The fstab and mtab files
* fstab is a configuration file that contains information about all the partitions and storage devices in your computer. The file is located under /etc, so the full path to this file is /etc/fstab.
* /etc/fstab describes where your partitions and storage devices should be mounted and how. This file is used by the boot process to mount the file systems on your Linux machine.
* So, you can usually fix your mounting problems by editing your fstab file. /etc/fstab is just a plain text file, so you can open and edit it with any text editor you're familiar with.

Overview of the file
A sample /etc/fstab file is given below:

/dev/hda2   /               ext2   defaults              1 1
/dev/hdb1   /home           ext2   defaults              1 2
/dev/fd0    /media/floppy   auto   rw,noauto,user,sync   0 0
proc        /proc           proc   defaults              0 0
/dev/hda1   swap            swap   pri=42                0 0

* Note that every line (or row) contains the information for one device or partition.
* The 1st and 2nd columns give the device and its default mount point.
* The line '/dev/hda2 / ext2 defaults 1 1' means that /dev/hda2 will be mounted on /.
* The third column in /etc/fstab specifies the filesystem type of the device or partition. Like Ext3, ReiserFS is a journaled filesystem, but it is much more advanced than Ext3. Many Linux distros (including SuSE) have started using ReiserFS as their default filesystem for Linux partitions.
* The option "auto" simply means that the filesystem type is detected automatically.
* The fourth column in fstab lists all the mount options for the device or partition:
  o auto and noauto : With the auto option, the device will be mounted automatically; auto is the default. If you don't want the device to be mounted automatically, use the noauto option in /etc/fstab; with noauto, the device can be mounted only explicitly.
  o user and nouser : The user option allows normal users to mount the device, whereas nouser lets only root mount the device. nouser is the default.

  o exec and noexec : exec lets you execute binaries that are on that partition, whereas noexec does not let you do that. exec is the default option, which is a good thing.
  o ro : Mount the filesystem read-only.
  o rw : Mount the filesystem read-write.
  o sync and async : How the input and output to the filesystem should be done. sync means it is done synchronously; with the async option in /etc/fstab, input and output are done asynchronously. async is the default.
  o noquota : Do not set user quotas on this partition.
  o nosuid : Do not honor SUID/SGID bits on this partition.
  o nodev : Do not interpret character or block special devices on this partition.
  o defaults : Uses the default options, that is rw, suid, dev, exec, auto, nouser, and async.
* The 5th column in /etc/fstab is the dump option. dump checks it and uses the number to decide whether a filesystem should be backed up. If it is zero, dump will ignore that filesystem. If you take a look at the example fstab, you will notice that the 5th column is zero in most cases.
* The 6th column is an fsck option. fsck looks at the number in the 6th column to determine the order in which the filesystems should be checked. If it is zero, fsck will not check the filesystem.
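Putting several of these options together, an fstab entry that lets an ordinary user mount a removable disk by hand might look like this (the device name and mount point are only an assumption for illustration):

/dev/sda1   /mnt/usb   vfat   noauto,user,rw   0 0

With such a line in place, a normal user can run 'mount /mnt/usb' and 'umount /mnt/usb', and the noauto keyword keeps the device out of the automatic 'mount -a' performed at boot.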

The /etc/mtab file
The mtab file tracks mounted filesystems, and therefore its contents change from time to time. A sample /etc/mtab file is given below:
$ cat /etc/mtab
/dev/hda3 / ext3 rw 0 0
none /proc proc rw 0 0
none /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/hda2 /boot ext3 rw 0 0
none /dev/shm tmpfs rw 0 0
/dev/hda6 /windows vfat rw 0 0
/dev/hdc1 /backup ext3 rw 0 0

Unmounting file systems
The umount command detaches the file system(s) mentioned from the file system hierarchy. A file system can be specified by giving the directory where it has been mounted.
* To unmount the floppy that is mounted on '/floppy', type:
$ umount /floppy
* To unmount the disc in the CD-ROM drive mounted on '/cdrom', type:
$ umount /cdrom
* To unmount /home2, which is mounted from /dev/hdb1, you may give either
$ umount /home2 or $ umount /dev/hdb1

6.3.5.3). Checking File System Integrity
A filesystem's correctness and validity can be checked using the fsck command. It can be instructed to repair any minor problems it finds, and to alert the user if there are any unrepairable problems.

* Most systems are set up to run fsck automatically at boot time, so that any errors are detected (and hopefully corrected) before the system is used.
* The automatic checking only works for the filesystems that are mounted automatically at boot time.
* fsck must only be run on unmounted filesystems, never on mounted filesystems. This is because it accesses the raw disk, and can therefore modify the filesystem without the operating system realizing it.

Running fsck
* To run fsck on /dev/hda1, use the command line below:
$ fsck /dev/hda1
$ fsck -t type device
Eg: $ fsck -t ext2 /dev/hda3
* To check a Linux second extended (ext2) file system, as well as ext3, you may use fsck.ext2 or e2fsck.
$ e2fsck /dev/hda3
$ e2fsck -f /dev/hda3 : force checking even if the filesystem seems clean.
* To automatically repair the file system without asking any questions, give
$ e2fsck -p /dev/hda1
* e2fsck with the -c option runs the badblocks program to find any blocks which are bad on the filesystem, and then marks them as bad by adding them to the bad block inode.
$ e2fsck -c /dev/hda1

Other File System Check Commands

* badblocks : is used to check a filesystem for bad blocks. You can call it to scan for bad blocks and write a log of bad sectors by using the -o output-file option. When called from e2fsck by using the -c option, the bad blocks that are found will automatically be marked bad.
$ badblocks /dev/hda1 1440 > bad-blocks
The '-l' option is used to add the block numbers listed in the specified file to the list of bad blocks. The format of this file is the same as the one generated by the badblocks program.
$ fsck -t ext2 -l bad-blocks /dev/hda1
* tune2fs : is used to "tune" a filesystem. This is mostly used to set filesystem check options, such as the maximum mount count and the time between filesystem checks. The mount count is used to 'stagger' the mount counts of the different filesystems, which ensures that at reboot not all filesystems will be checked at the same time.
$ tune2fs -l /dev/hda1 : lists the contents of the filesystem superblock.
* dumpe2fs : prints the superblock and block group information for the filesystem present on a device.
$ dumpe2fs /dev/hda1
* stat : displays information about the file or file system status, such as the inode number, blocks, type of file etc.
$ stat /root/testfile

6.3.6. Disk Quota Management
* In addition to monitoring the disk space used on a system, disk space can be restricted by implementing disk quotas, so that the system administrator is alerted before a user consumes too much disk space or a partition becomes full.

* Disk quotas can be configured for individual users as well as user groups.
* In addition, quotas can be set not just to control the number of disk blocks consumed but also to control the number of inodes. Because inodes are used to contain file-related information, this allows control over the number of files that can be created.
* The quota RPM must be installed to implement disk quotas. The default Linux kernel which comes with Red Hat and Fedora Core has quota support compiled in.

6.3.6.1). Configuring and Implementing Disk Quotas on Partitions
To implement disk quotas, use the following steps:
1. Enable quotas per file system by modifying /etc/fstab
2. Remount the file system(s)
3. Create the quota files and generate the disk usage table
4. Assign quotas

1. Enabling Quotas
* Add the usrquota and/or grpquota options to the file systems that require quotas inside the /etc/fstab file.
* In the /etc/fstab entries below, only the /home file system has user and group quotas enabled.

LABEL=/        /          ext3     defaults                     1 1
LABEL=/boot    /boot      ext3     defaults                     1 2
none           /dev/pts   devpts   gid=5,mode=620               0 0
LABEL=/home    /home      ext3     defaults,usrquota,grpquota   1 2
none           /proc      proc     defaults                     0 0
none           /dev/shm   tmpfs    defaults                     0 0
/dev/hda2      swap       swap     defaults                     0 0

2. Remounting the File Systems
* After adding the usrquota and grpquota options, remount each file system whose fstab entry has been modified.
$ umount /home
$ mount -a
* If the file system is not in use by any process, use the umount command followed by mount to remount the file system.
* If the file system is currently in use, the easiest method for remounting the file system is to reboot the system.

3. Creating Quota Files
* After each quota-enabled file system is remounted, the system is capable of working with disk quotas.
* However, the file system itself is not yet ready to support quotas. The next step is to run the quotacheck command.
* The quotacheck command examines quota-enabled file systems and builds a table of the current disk usage per file system.
* The table is then used to update the operating system's copy of disk usage. In addition, the file system's disk quota files are updated.
* To create the quota files (aquota.user and aquota.group) on the file system, use the -c option of the quotacheck command.

* For example, if user and group quotas are enabled for the /home partition, create the quota files in the /home directory:
$ quotacheck -cug /home
  o -a : Check all quota-enabled, locally-mounted file systems in /etc/mtab.
  o -c : Create quota files for each file system with quotas enabled.
  o -u : Check user disk quota information.
  o -g : Check group disk quota information.
  o If neither the -u nor the -g option is specified, only the user quota file is created. If only -g is specified, only the group quota file is created.
* After the files are created, run the following command to generate the table of current disk usage per file system with quotas enabled:
$ quotacheck -avug
  o -v : Display verbose status information as the quota check proceeds.
* After quotacheck has finished running, the quota files corresponding to the enabled quotas (user or group) are populated with data for each quota-enabled file system, such as /home.

4. Assigning Quotas per User
* The last step is assigning the disk quotas with the edquota command. To configure the quota for a user, as root in a shell prompt, execute the command:
$ edquota username
* For example, if a quota is enabled in /etc/fstab for the /home partition (/dev/hda3) and the command edquota testuser is executed, the following is shown in the editor configured as the default for the system:

Disk quotas for user testuser (uid 501):
Filesystem       blocks   soft   hard   inodes   soft   hard
/dev/hda3        440436      0      0    37418      0      0

* The first column is the name of the file system that has a quota enabled for it.
* The second column shows how many blocks the user is currently using.
* The next two columns are used to set soft and hard block limits for the user on the file system.
* The inodes column shows how many inodes the user is currently using.
* The last two columns are used to set the soft and hard inode limits for the user on the file system.
* A hard limit is the absolute maximum amount of disk space that a user or group can use. Once this limit is reached, no further disk space can be used.
* The soft limit defines the maximum amount of disk space that can be used. However, unlike the hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace period. The grace period can be expressed in seconds, minutes, hours, days, weeks, or months.
* To verify or view the quota which has been set for the user, use the command:
$ quota testuser
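For instance, to give testuser a soft limit of 500000 blocks and a hard limit of 550000 blocks (the same values that appear in the repquota report later in this section), the block limit columns are edited before saving and quitting the editor; the inode limits are left at 0, which means unlimited:

Disk quotas for user testuser (uid 501):
Filesystem       blocks     soft     hard   inodes   soft   hard
/dev/hda3        440436   500000   550000    37418      0      0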

5. Assigning Quotas per Group
* Quotas can also be assigned on a per-group basis.
* For example, to set a group quota for the devel group, use the command below (the group must exist prior to setting the group quota):
$ edquota -g devel

6. Assigning Quotas per File System
* To assign quotas based on each file system enabled for quotas, use the command:
$ edquota -t
* Like the other edquota commands, this one opens the current quotas for the file system in the text editor; the block grace period or inode grace period can be changed here.

Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem         Block grace period     Inode grace period
  /dev/hda3                7days                  7days

6.3.6.2). Managing Disk Quotas

1. Reporting on Disk Quotas
* Creating a disk usage report entails running the repquota utility.
* For example, the command repquota /home produces this output:
$ repquota /home
*** Report for user quotas on device /dev/hda3
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      36       0       0              4     0     0
tfox      --     540       0       0            125     0     0
testuser  --  440400  500000  550000          37418     0     0

* To view the disk usage report for all quota-enabled file systems, use the command:
$ repquota -a

2. Enabling and Disabling Quotas
* It is possible to disable quotas without setting the limits to 0. To turn all user and group quotas off, use the following command:
$ quotaoff
* To enable user and group quotas for all file systems:
$ quotaon
* To enable quotas for a specific file system, such as /home:
$ quotaon -vug /home

6.3.7. RAID Setup
This is what you need for any of the RAID levels:
* Kernel support for RAID
* The "raidtools" package

Some of the terms you should be familiar with to understand the RAID configuration file /etc/raidtab are given below.

1. Chunk Size

* You can never write completely in parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk; actually, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that.
* Instead, we choose some chunk size, which we define as the smallest "atomic" amount of data that can be written to the devices.
* A write of 16 kB with a chunk size of 4 kB will cause the first and the third 4 kB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks.
* Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk size does not make any difference for linear mode.
* The argument to the chunk-size option in /etc/raidtab specifies the chunk size in kilobytes, so "4" means "4 kB".

2. Persistent Superblock
* When an array is initialized with the persistent-superblock option in the /etc/raidtab file, a special superblock is written at the beginning of all disks participating in the array.
* This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.
* This is essential if you want to boot from a RAID device.
* The persistent superblock is mandatory if you want auto-detection of your RAID devices upon system boot.

6.3.7.1). Linear RAID Setup
1. Create two or more partitions, not necessarily of the same size, which you want to append to each other.
2. Set up the RAID configuration file: set up the /etc/raidtab file to describe your setup. For two disks - /dev/hda6 and /dev/hdb5 - it can look like this:

raiddev /dev/md0
        raid-level              linear
        nr-raid-disks           2
        chunk-size              32
        persistent-superblock   1
        device                  /dev/hda6
        raid-disk               0
        device                  /dev/hdb5
        raid-disk               1

* To add another device to the RAID, increment the nr-raid-disks parameter and add another pair of device and raid-disk entries.
* The persistent-superblock option has to be switched on (set to 1) to enable the system to auto-detect the RAID device after a reboot.
* The chunk-size option is meaningless for a linear RAID configuration, so it can have any value.
3. Initialize the RAID device: create the RAID device using the command line below. This will initialize your array, write the persistent superblocks, and start the array.
$ mkraid /dev/md0
4. To check the status of the new RAID device, output the file /proc/mdstat. You should see that the array is running.
$ cat /proc/mdstat

Personalities : [linear]
read_ahead 1024 sectors
md0 : active linear hdb7[1] hda7[0]
      47664640 blocks 32k rounding
unused devices: <none>

5. Create a filesystem: a RAID device does not rely on having a particular type of filesystem. To create an ext3 filesystem on the new RAID device, use the mkfs command:
$ mkfs -t ext3 /dev/md0
6. Mount the RAID partition: mount the RAID device as follows (here /raid is the mount point, which must already exist):
$ mount -t ext3 /dev/md0 /raid
7. Add a new entry to /etc/fstab for the RAID device as follows, so that it automatically gets mounted on reboot:
/dev/md0   /raid   ext3   defaults   1 2
8. Once you have your RAID device running, you can always stop it or re-start it using the command lines below:
$ raidstop /dev/md0
or
$ raidstart /dev/md0

6.3.7.2). RAID-0 Setup
1. Create two devices of approximately the same size, so that you can combine their storage capacity and also combine their performance by accessing them in parallel.
2. Set up the RAID configuration file: set up the /etc/raidtab file to describe the configuration. An example raidtab looks like the one below:

raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        chunk-size              4
        persistent-superblock   1
        device                  /dev/hda6
        raid-disk               0
        device                  /dev/hdb5
        raid-disk               1

* RAID-0 has no redundancy, so when a disk dies, the array goes with it.
3. Repeat steps 3 through 7 of the linear RAID setup to initialize the RAID device and mount it.

6.3.7.3). RAID-1 Setup
1. Create two devices of approximately the same size, so that they can be mirrors of each other.
2. Set up the RAID configuration file: set up the /etc/raidtab file to describe the configuration. An example raidtab looks like the one below:

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/hda6
        raid-disk               0
        device                  /dev/hdb5
        raid-disk               1

* If you have more devices which you want to keep as stand-by spare disks, which will automatically become part of the mirror if one of the active devices breaks, remember to set the nr-spare-disks entry correspondingly.
* If you have spare disks, you can add them to the end of the device specification, like
        device                  /dev/hdc5
        spare-disk              0
3. Now we're all set to start initializing the RAID. Repeat steps 3 through 7 of the linear RAID setup to initialize the RAID device and mount it.

6.3.7.4). RAID-5 Setup
1. Create three or more devices of approximately the same size, so that they can be combined into a larger device while still maintaining a degree of redundancy for data safety. Optionally, you may have a number of devices to use as spare disks, which will not take part in the array until another device fails.
2. Set up the RAID configuration file: set up the /etc/raidtab file to describe the RAID-5 configuration. An example raidtab looks like the one below:

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/hda3
        raid-disk               0
        device                  /dev/hdb1
        raid-disk               1
        device                  /dev/hdc1
        raid-disk               2
        device                  /dev/hdd1
        raid-disk               3

3. Now we're all set to start initializing the RAID. Repeat steps 3 through 7 of the linear RAID setup to initialize the RAID device and mount it.

7. NETWORKING AND NETWORK SERVICES

7.1. Networking Overview

7.1.1. OSI Reference Model
The OSI Reference Model defines seven layers that describe how applications running upon network-aware devices may communicate with each other. The model is generic and applies to all network types, not just TCP/IP, and to all media types, not just Ethernet. OSI was a working group within the ISO, and the OSI model is therefore sometimes referred to as the ISO model.

OSI is a seven-layer model where, traditionally, layer diagrams are drawn with Layer 1 at the bottom and Layer 7 at the top.

Layer 1 of the 7-layer model is the Physical Layer and defines the physical and electrical characteristics of the network.
* The NIC cards in your PC and the interfaces on your routers all run at this level since, eventually, they have to pass strings of ones and zeros down the wire.

Layer 2 is known as the Data Link Layer. It defines the access strategy for sharing the physical medium, including data link and media access issues. Protocols such as PPP, SLIP and HDLC live here.

* Devices which operate at this level include bridges and switches, which learn which devices are on which segment by learning the MAC addresses of the devices attached to their various ports.
* This is how bridges are eventually able to segment off a large network, only forwarding packets between ports if two devices on separate segments need to communicate.
* Switches quickly learn a topology map of the network, and can thus switch packets between communicating devices very quickly. It is for this reason that migrating a device between different switch ports can cause the device to lose network connectivity for a while, until the switch, or bridge, re-learns its MAC address.

Layer 3 is the Network Layer, providing a means for communicating open systems to establish, maintain and terminate network connections. The IP protocol lives at this layer, and so do some routing protocols.
* All the routers in your network operate at this layer.

Layer 4 is the Transport Layer, and is where TCP lives. The standard says that "The Transport Layer relieves the Session Layer [Layer 5] of the burden of ensuring data reliability and integrity".
* It is for this reason that people are becoming very excited about the new Layer 4 switching technology. Before these devices became available, only software operated at this layer.
* Hopefully, you will now also understand why TCP/IP is uttered in one breath: TCP over IP, since Layer 4 is above (over) Layer 3.
* It is at this layer that, should a packet fail to arrive (perhaps due to misrouting, or because it was dropped by a busy router), it will be retransmitted when the sending party fails to receive an acknowledgement from the device with which it is communicating.
* The more powerful routing protocols also operate here. OSPF and BGP, for example, are implemented as protocols directly over IP.

Layer 5 is the Session Layer. It provides for two communicating presentation entities to exchange data with each other.

* The Session Layer is very important in the e-commerce field since, once a user starts buying items and filling their "shopping basket" on a web server, it is very important that they are not load-balanced across different servers in a server pool.
* This is why, clever as Layer 4 switching is, these devices still run software to look further up the layer model. They are required to understand when a session is taking place, and not to interfere with it.

Layer 6 is the Presentation Layer. This is where application data is either packed or unpacked, ready for use by the running application.
* Protocol conversions, encryption/decryption and graphics expansion all take place here.

Layer 7 is the Application Layer. This is where you find your end-user and end-application protocols, such as telnet, ftp, and mail (POP3 and SMTP).

7.1.2. TCP/IP Networks
* TCP/IP stands for Transmission Control Protocol / Internet Protocol.
* TCP/IP traces its origin to a research project funded by the United States DARPA (Defense Advanced Research Projects Agency) in 1969. This was an experimental network, the ARPANET, which was converted into an operational one in 1975, after it had proven to be a success.
* When ARPANET finally grew into the Internet, the use of TCP/IP had spread to networks beyond the Internet itself.
* In 1983, the new protocol suite TCP/IP was adopted as a standard, and all hosts on the network were required to use it.
* TCP/IP is the protocol used in remote logins, NFS etc.
* Because TCP/IP is so widely supported, it is ideal for uniting different hardware and software, even if you don't communicate over the Internet.
* A globally unique addressing scheme allows any TCP/IP device to address any other device in the entire network, even if the network is as large as the world-wide Internet.

* TCP/IP attempts to create a heterogeneous network with open protocols that are independent of operating system and architectural differences.

7.1.2.1). Layers in the TCP/IP Protocol Architecture
For more information about the protocol architecture, refer to the URL below:
http://www.citap.com/documents/tcp-ip/tcpip012.htm

7.1.3. LAN Network

7.1.3.1). Area Networks
For historical reasons, the industry refers to nearly every type of network as an "area network." The most commonly discussed categories of computer networks include the following:
* Local Area Network (LAN)
* Wide Area Network (WAN)
* Metropolitan Area Network (MAN)
* Storage Area Network (SAN)
* System Area Network (SAN)
* Server Area Network (SAN)
* Small Area Network (SAN)
* Personal Area Network (PAN)
* Desk Area Network (DAN)
* Controller Area Network (CAN)

* Cluster Area Network (CAN)

The concept of "area" made good sense at the time, because a key distinction between a LAN and a WAN involves the physical distance that the network spans. A third category, the MAN, also fits into this scheme, as it too is centered on a distance-based concept.

As technology improved, new types of networks appeared on the scene. These, too, became known as various types of "area networks" for consistency's sake, although distance no longer proved a useful differentiator.

7.1.3.2). LAN Basics
* A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs, and occasionally a LAN will span a group of nearby buildings.
* In IP networking, one can conceive of a LAN as a single IP subnet (though this is not necessarily true in practice).
* Besides operating in a limited space, LANs have several other distinctive features. LANs are typically owned, controlled, and managed by a single person or organization.
* They also use certain specific connectivity technologies, primarily Ethernet and Token Ring. The three most commonly used LAN implementations are Ethernet/IEEE 802.3, Token Ring/IEEE 802.5, and FDDI.

7.1.3.3). LAN Protocols and the OSI Reference Model
* LAN protocols function at the lowest two layers of the OSI reference model: the physical layer and the data link layer.
* The figure accompanying this section illustrates how several popular LAN protocols map to the OSI reference model.

7.1.3.4). LAN Media-Access Methods
* Media contention occurs when two or more network devices have data to send at the same time.
* Because multiple devices cannot talk on the network simultaneously, some method must be used to allow one device at a time access to the network media. This is done in two main ways: carrier sense multiple access with collision detection (CSMA/CD) and token passing.

Carrier Sense Multiple Access/Collision Detection (CSMA/CD) Networks
* In networks using CSMA/CD technology, such as Ethernet, network devices contend for the network media.
* When a device has data to send, it first listens to see if any other device is currently using the network. If not, it starts sending its data.
* After finishing its transmission, it listens again to see if a collision occurred.
* A collision occurs when two devices send data simultaneously. When a collision happens, each device waits a random length of time before resending its data. In most cases, a collision will not occur again between the two devices.
* Because of this type of network contention, the busier a network becomes, the more collisions occur. This is why the performance of Ethernet degrades rapidly as the number of devices on a single network increases.
* For CSMA/CD networks, switches segment the network into multiple collision domains. This reduces the number of devices per network segment that must contend for the media.
* By creating smaller collision domains, the performance of a network can be increased significantly without requiring addressing changes.

Token Passing Networks
* In token-passing networks such as Token Ring and FDDI, a special network frame called a token is passed around the network from device to device.
* When a device has data to send, it must wait until it has the token, and then it sends its data.
* When the data transmission is complete, the token is released so that other devices may use the network media.
* The main advantage of token-passing networks is that they are deterministic. In other words, it is easy to calculate the maximum time that will pass before a device has the opportunity to send data.
* This explains the popularity of token-passing networks in some real-time environments, such as factories, where machinery must be capable of communicating at a determinable interval.

Full Duplex and Half Duplex
* Normally CSMA/CD networks are half-duplex, meaning that while a device is sending information, it cannot receive at the same time. While that device is talking, it is incapable of also listening for other traffic.
* This is much like a walkie-talkie. When one person wants to talk, he presses the transmit button and begins speaking. While he is talking, no one else on the same frequency can talk.
* When the sending person is finished, he releases the transmit button and the frequency is available to others.
* When switches are introduced, full-duplex operation is possible. Full-duplex works much like a telephone: you can listen as well as talk at the same time.

* When a network device is attached directly to the port of a network switch, the two devices may be capable of operating in full-duplex mode.
* Full-duplex operation increases the throughput of most applications because the network media is no longer shared. Two devices on a full-duplex connection can send data as soon as it is ready.
* Token-passing networks such as Token Ring can also benefit from network switches. In large networks, the delay between turns to transmit may be significant because the token is passed around a larger network.

7.1.3.5). LAN Transmission Methods
* LAN data transmissions fall into three classifications: unicast, multicast, and broadcast.
* In each type of transmission, a single packet is sent to one or more nodes.
* In a unicast transmission, a single packet is sent from the source to a destination on a network. First, the source node addresses the packet by using the address of the destination node. The packet is then sent onto the network, and finally, the network passes the packet to its destination.
* A multicast transmission consists of a single data packet that is copied and sent to a specific subset of nodes on the network. First, the source node addresses the packet by using a multicast address. The packet is then sent into the network, which makes copies of the packet and sends a copy to each node that is part of the multicast address.
* A broadcast transmission consists of a single data packet that is copied and sent to all nodes on the network. In these types of transmissions, the source node addresses the packet by using the broadcast address. The packet is then sent onto the network, which makes copies of the packet and sends a copy to every node on the network.

7.1.3.6). LAN Topologies
* LAN topologies define the manner in which network devices are organized.
* Four common LAN topologies exist: bus, ring, star, and tree.
* These topologies are logical architectures; the actual devices need not be physically organized in these configurations.
* Logical bus and ring topologies, for example, are commonly organized physically as a star.
* A bus topology is a linear LAN architecture in which transmissions from network stations propagate the length of the medium and are received by all other stations.
  o Of the three most widely used LAN implementations, Ethernet/IEEE 802.3 networks (including 100BaseT) implement a bus topology.
* A ring topology is a LAN architecture that consists of a series of devices connected to one another by unidirectional transmission links to form a single closed loop.
  o Both Token Ring/IEEE 802.5 and FDDI networks implement a ring topology.
* A star topology is a LAN architecture in which the endpoints on a network are connected to a common central hub, or switch, by dedicated links. Logical bus and ring topologies are often implemented physically in a star topology.
* A tree topology is a LAN architecture that is identical to the bus topology, except that branches with multiple nodes are possible in this case.

7.1.3.7). LAN Devices
* Devices commonly used in LANs include repeaters, hubs, LAN extenders, bridges, LAN switches, and routers.

Repeater
* A repeater is a physical layer device used to interconnect the media segments of an extended network.
* A repeater essentially enables a series of cable segments to be treated as a single cable.
* Repeaters receive signals from one network segment and amplify, retime, and retransmit those signals to another network segment.
* These actions prevent signal deterioration caused by long cable lengths and large numbers of connected devices.
* Repeaters are incapable of performing complex filtering and other traffic processing.
* In addition, all electrical signals, including electrical disturbances and other errors, are repeated and amplified. The total number of repeaters and network segments that can be connected is limited due to timing and other issues.

Hub
* A hub is a physical layer device that connects multiple user stations, each via a dedicated cable. A typical hub is a multi-port repeater.
* Electrical interconnections are established inside the hub.

* Hubs are used to create a physical star network while maintaining the logical bus or ring configuration of the LAN. In some respects, a hub functions as a multiport repeater.
* Hubs and repeaters work at the first layer of the OSI model, also known as the Physical layer.

Bridges
* Bridges are devices which connect LANs at the MAC layer.
* The purpose of bridges is to allow hosts attached to different LANs to communicate as if they were located on the same LAN.
* In contrast to repeaters and hubs, which act at the physical layer and allow all traffic to cross LAN segments, bridges are more intelligent and limit the traffic to the section of the network on which it is relevant.
* Bridges work at the second layer of the OSI model, known as the Data Link layer.
* Since a bridge examines each packet to record the sender and look up the recipient, there is some overhead in sending a packet through a bridge.

Switches
* A switch is a device with multiple ports which forwards packets from one port to another. A switch is essentially a multi-port bridge.
* The behavior of a switch is exactly the same as that of a bridge: record the sender's port, look up the recipient, and forward based on the recipient's port.
* The difference is that most switches implement these functions in hardware, using a dedicated processor. This makes them much faster than traditional software-based bridges.

Router


    * The basic function of the router is to route traffic from one network to another network efficiently. It provides the intelligent redundancy and security required to select the optimum path. Usually routers are used for connecting remote networks.
    * A router works at the next layer, layer 3 (Network) of the OSI model.
    * The router uses network addresses (IP addresses) to determine how to forward a packet.
    * Routers also offer more advanced filtering options, along with features designed to improve redundancy.
LAN Extender
    * A LAN extender is a remote-access multilayer switch that connects to a host router.
    * LAN extenders forward traffic from all the standard network layer protocols and filter traffic based on the MAC address or network layer protocol type.
    * LAN extenders scale well because the host router filters out unwanted broadcasts and multicasts. However, LAN extenders are not capable of segmenting traffic or creating security firewalls.
    * Figure illustrates multiple LAN extenders connected to the host router through a WAN.
7.1.4. WAN Basics
    * As the term implies, a wide-area network spans a large physical distance. A WAN like the Internet spans most of the world!
    * A WAN is a geographically dispersed collection of LANs.
    * A network device called a router connects LANs to a WAN. In IP networking, the router maintains both a LAN address and a WAN address.
    * WANs differ from LANs in several important ways.


    * Like the Internet, most WANs are not owned by any one organization but rather exist under collective or distributed ownership and management.
    * WANs use technologies like ATM, Frame Relay and X.25 for connectivity.
    * WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.
7.1.4.1). WAN Networks
Point-to-Point Links
    * A point-to-point link provides a single, pre-established WAN communications path from the customer premises through a carrier network, such as a telephone company, to a remote network.
    * Point-to-point lines are usually leased from a carrier and thus are often called leased lines.
    * For a point-to-point line, the carrier allocates pairs of wire and facility hardware to your line only. These circuits are generally priced based on the bandwidth required and the distance between the two connected points.
    * Point-to-point links are generally more expensive than shared services such as Frame Relay.
Circuit Switching
    * Switched circuits allow data connections that can be initiated when needed and terminated when communication is complete.
    * This works much like a normal telephone line works for voice communication.
    * Integrated Services Digital Network (ISDN) is a good example of circuit switching.


    * When a router has data for a remote site, the switched circuit is initiated with the circuit number of the remote network. In the case of ISDN circuits, the device actually places a call to the telephone number of the remote ISDN circuit. When the two networks are connected and authenticated, they can transfer data. When the data transmission is complete, the call can be terminated.
    * A circuit-switched WAN undergoes a process similar to that used for a telephone call, as can be seen below:
Packet Switching
    * Packet switching is a WAN technology in which users share common carrier resources.
    * Because this allows the carrier to make more efficient use of its infrastructure, the cost to the customer is generally much better than with point-to-point lines.
    * In a packet switching setup, networks have connections into the carrier's network, and many customers share the carrier's network.
    * The carrier can then create virtual circuits between customers' sites, by which packets of data are delivered from one to the other through the network. The section of the carrier's network that is shared is often referred to as a cloud.
    * Some examples of packet-switching networks include Asynchronous Transfer Mode (ATM), Frame Relay, Switched Multimegabit Data Services (SMDS), and X.25.
    * Figure shows an example packet-switched circuit. The virtual connections between customer sites are often referred to as virtual circuits.
Packet Switching Transfers Packets Across a Carrier Network


7.1.4.2). WAN Virtual Circuits
    * A virtual circuit is a logical circuit created within a shared network between two network devices.
    * Two types of virtual circuits exist: switched virtual circuits (SVCs) and permanent virtual circuits (PVCs).
Switched Virtual Circuits
    * SVCs are virtual circuits that are dynamically established on demand and terminated when transmission is complete.
    * Communication over an SVC consists of three phases: circuit establishment, data transfer, and circuit termination.
    * The establishment phase involves creating the virtual circuit between the source and destination devices.
    * Data transfer involves transmitting data between the devices over the virtual circuit.
    * The circuit termination phase involves tearing down the virtual circuit between the source and destination devices.
    * SVCs are used in situations in which data transmission between devices is sporadic.
Permanent Virtual Circuits
    * A PVC is a permanently established virtual circuit that consists of one mode: data transfer.
    * PVCs are used in situations in which data transfer between devices is constant.


    * PVCs decrease the bandwidth use associated with the establishment and termination of virtual circuits, but they increase costs due to constant virtual circuit availability.
    * PVCs are generally configured by the service provider when an order is placed for service.
Internet Service Providers
    * Home networkers with cable modem or DSL service have already encountered LANs and WANs in practice, though they may not have noticed.
    * A cable/DSL router joins the home LAN to the WAN link maintained by one's ISP.
    * The ISP provides a WAN IP address used by the router, and all of the computers on the home network use private LAN addresses.
    * On a home network, as on many LANs, all computers can communicate directly with each other, but they must go through a central gateway location to reach devices outside of their local area.
7.1.4.3). WAN Devices
    * WANs use numerous types of devices that are specific to WAN environments.
    * WAN switches, access servers, modems, CSU/DSUs, and ISDN terminal adapters are discussed in the following sections.
    * Other devices found in WAN environments that are used in WAN implementations include routers, ATM switches, and multiplexers.
Access Server
    * An access server acts as a concentration point for dial-in and dial-out connections. Figure illustrates an access server concentrating dial-out connections into a WAN.


CSU/DSU
    * A channel service unit/digital service unit (CSU/DSU) is a digital-interface device used to connect a router to a digital circuit like a T1.
    * The CSU/DSU also provides signal timing for communication between these devices.
    * Figure below illustrates the placement of the CSU/DSU in a WAN implementation.
ISDN Terminal Adapter
    * An ISDN terminal adapter is a device used to connect ISDN Basic Rate Interface (BRI) connections to other interfaces, such as EIA/TIA-232 on a router.
    * A terminal adapter is essentially an ISDN modem, although it is called a terminal adapter because it does not actually convert analog to digital signals.
    * Figure below illustrates the placement of the terminal adapter in an ISDN environment.
WAN Switch
    * A WAN switch is a multiport internetworking device used in carrier networks.
    * These devices typically switch such traffic as Frame Relay, X.25, and SMDS, and operate at the data link layer of the OSI reference model.
    * Figure below illustrates two routers at remote ends of a WAN that are connected by WAN switches.


Modem
    * A modem is a device that interprets digital and analog signals, enabling data to be transmitted over voice-grade telephone lines.
    * At the source, digital signals are converted to a form suitable for transmission over analog communication facilities.
    * At the destination, these analog signals are returned to their digital form.
    * Figure below shows a simple modem-to-modem connection through a WAN.
7.1.4.4). Other Area Networks
After LANs and WANs, one will most commonly encounter the following three network designs:
    * A Metropolitan Area Network connects an area larger than a LAN but smaller than a WAN, such as a city, with dedicated or high-performance hardware.
    * A Storage Area Network connects servers to data storage devices through a technology like Fibre Channel.
    * A System Area Network connects high-performance computers with high-speed connections in a cluster configuration.
7.1.5. Ethernet and Networking Hardware
Ethernet is a frame-based computer networking technology for local area networks (LANs).
    * It defines wiring and signaling for the physical layer, and frame formats and protocols for the media access control (MAC)/data link layer of the OSI model.
    * The most commonly installed Ethernet systems are called 10BASE-T and provide transmission speeds up to 10 Mbps.


    * Ethernet is mostly standardized as IEEE's 802.3. It has become the most widespread LAN technology in use.
    * Ethernet follows a simple set of rules that govern its basic operation.
    * To better understand these rules, it is important to understand the basics of Ethernet terminology.
          o Medium - Ethernet devices attach to a common medium that provides a path along which the electronic signals will travel. Historically, this medium has been coaxial copper cable, but today it is more commonly twisted pair or fiber optic cabling.
          o Segment - We refer to a single shared medium as an Ethernet segment.
          o Node - Devices that attach to that segment are stations or nodes.
          o Frame - The nodes communicate in short messages called frames, which are variably sized chunks of information.
    * One interesting thing about Ethernet addressing is the implementation of a broadcast address. A frame with a destination address equal to the broadcast address is intended for every node on the network, and every node will both receive and process this type of frame.
7.1.5.1). Ethernet Network Medium
    * A Network Medium is the type of cabling used in a network.
    * There are many types of cables used in networks today, although only a few are commonly used.
    * The type of cabling can have an influence on the speed of the network.
1. Twisted-pair Cable


    * A Twisted-pair cable has a pair of wires twisted around each other to reduce interference.
    * There can be two, four, or even more sets of twisted pairs in a network cable.
    * Twisted-pair cables are usually attached to the network devices with a jack that looks like a telephone modular jack, but a little wider, supporting up to eight wires.
    * There are two types of Twisted-Pair cable in use:
          o An Unshielded Twisted-Pair (UTP) cable is one of the most commonly used network media because it is cheap and easy to work with.
          o A Shielded Twisted-Pair (STP) cable has the same basic construction as its unshielded cousin, but the entire cable is wrapped in a layer of insulation for protection from interference.
2. Coaxial Cable
    * A Coaxial cable is designed with two conductors, one in the centre surrounded by a layer of insulation, and the second a mesh or foil conductor surrounding the insulation.
    * Outside the mesh is a layer of outer insulation. Because of its reduced electrical impedance, coaxial cable is capable of faster transmission than twisted-pair cable.
    * Coax is also broadband, supporting several network channels on the same cable.
    * There are two types of coaxial cable in use:
          o Thick coax is a heavy cable that is used as a network backbone for a bus network. This cable is formally known as Ethernet PVC coax, but is usually called 10BASE5. Because thick coax is so heavy and stiff, it is difficult to work with and is quite expensive.
          o Thin coax is the most common type used in Ethernet networks. It goes by several names, including Thin Ethernet, 10BASE2, and cheapernet. Thin coax is the same as your television cable. Thin coax is quite flexible and has a low impedance, so it is capable of fast throughput rates. It is not difficult to lay out, as it is quite flexible, and it is easy to construct cables with the proper connectors, usually BNC connectors, at each end. Thin coax is broadband, although most local area networks use only a single channel of the cable.


3. Fibre-optic Cable
    * Fibre-optic cabling, used by FDDI (Fiber Distributed Data Interface), is becoming popular for very high-speed networks (500 Mbit/s). It is very expensive but capable of supporting many channels at tremendous speed.
          o Fibre-optic cable is almost never used in local area networks, although some large corporations do use it to connect many LANs together into a wide area network.
          o The supporting hardware to handle fibre-optic backbones is quite expensive and specialised.
          o It consists of a single cable with hosts being attached to it through connectors, taps or transceivers.
7.1.5.2). Ethernet Network Interface
    * To hide the diversity of equipment that may be used in a networking environment, TCP/IP defines an abstract interface through which the hardware is accessed, called the Ethernet interface or network interface.
    * This interface offers a set of operations which is the same for all types of hardware and basically deals with sending and receiving packets.
    * For each peripheral device you want to use for networking, a corresponding interface has to be present in the kernel.
    * For example, Ethernet interfaces are called eth0 and eth1, and these interface names are used for configuration purposes when you want to name a particular physical device to the kernel.
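    * As a quick check, the interfaces the kernel currently knows about can be listed from the shell; the interface names shown (eth0, lo, and so on) will vary from machine to machine:
      $ ifconfig -a          # list all interfaces, whether configured or not
      $ cat /proc/net/dev    # per-interface packet counters straight from the kernel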


7.1.6. Internet Protocol or IP Address
    * To extend your network beyond the Ethernet, regardless of the hardware you run or the sub-units it is made up of, you have the Internet Protocol (IP), which facilitates this. The current version of the Internet Protocol in use is IP Version 4 ("IPv4"), which is now nearly twenty years old.
    * To join two different physical networks, a dedicated host, a so-called gateway, handles incoming and outgoing packets by copying them between the two networks, for example between an Ethernet and a fibre-optic link.
    * This scheme of directing data to a remote host is called routing, and packets are often referred to as datagrams in this context. To facilitate things, datagram exchange is governed by a single protocol that is independent of the hardware used: IP, or the Internet Protocol.
    * The main benefit of IP is that it turns physically dissimilar networks into one apparently homogeneous network. This is called internetworking, and the resulting "meta-network" is called an internet.
    * IP also requires a hardware-independent addressing scheme. This is achieved by assigning each host a unique 32-bit number, called the IP address, according to the current version of the Internet Protocol, IPv4. An IP address is usually written as four 8-bit numbers called octets, separated by dots. This format is also called dotted quad notation.
    * To be usable for TCP/IP networking, an interface must be assigned an IP address which serves as its identification when communicating with the rest of the world.
7.1.6.1). IP Address Notation and Classes of Networks
    * IP addresses are split into a network number, which is contained in the leading octets, and a host number, which is the remainder.
    * When applying to the NIC for IP addresses, you are not assigned an address for each single host you plan to use. Instead, you are given a network number, and are allowed to assign all valid IP addresses within this range to hosts on your network according to your preferences.
    * Depending on the size of the network, the host part may need to be smaller or larger. To accommodate different needs, there are several classes of networks, defining different splits of IP addresses.


Class A
    * Class A comprises networks 1.0.0.0 through 127.0.0.0. The network number is contained in the first octet. This provides for a 24-bit host part, allowing roughly 16 million hosts per network.
Class B
    * Class B contains networks 128.0.0.0 through 191.255.0.0; the network number is in the first two octets. This allows for 16320 nets with 65024 hosts each.
Class C
    * Class C networks range from 192.0.0.0 through 223.255.255.0, with the network number being contained in the first three octets. This allows for nearly 2 million networks with up to 254 hosts.
Classes D, E, and F
    * Addresses falling into the range of 224.0.0.0 through 254.0.0.0 are either experimental or are reserved for future use and don't specify any network. As an example of the class scheme, if the IP address of a host is 149.76.12.4, it refers to host 12.4 on the class-B network 149.76.0.0.
    * You may have noticed that in the above list not all possible values were allowed for each octet in the host part.
    * This is because host numbers with octets all 0 or all 255 are reserved for special purposes.
    * An address where all host part bits are zero refers to the network itself, and one where all bits of the host part are 1 is called a broadcast address. This refers to all hosts on the specified network simultaneously.


    * Thus, 149.76.255.255 is not a valid host address, but refers to all hosts on network 149.76.0.0.
Reserved Network Addresses
    * There are also two network addresses that are reserved, 0.0.0.0 and 127.0.0.0. The first is called the default route, the latter the loopback address.
    * Network 127.0.0.0 is reserved for IP traffic local to your host. Usually, address 127.0.0.1 will be assigned to a special interface on your host, the so-called loopback interface, which acts like a closed circuit. Any IP packet handed to it from TCP or UDP will be returned as if it had just arrived from some network. This allows you to develop and test networking software without ever using a "real" network. Another useful application is when you want to use networking software on a standalone host.
7.1.7. Transmission Control Protocol
    * TCP, or the Transmission Control Protocol, builds a reliable service on top of IP. The essential property of TCP is that it uses IP to give you the illusion of a simple connection between a process on your host and a process on the remote machine, so that you don't have to care about how and along which route your data actually travels.
    * A TCP connection works essentially like a two-way pipe that both processes may write to and read from.
    * TCP identifies the end points of such a connection by the IP addresses of the two hosts involved, and the number of a so-called port on each host. Ports may be viewed as attachment points for network connections.
7.1.8. User Datagram Protocol
TCP isn't the only user protocol in TCP/IP networking. Although it is suitable for many applications, the overhead involved is quite high. Hence, many applications use a sibling protocol of TCP called UDP, or the User Datagram Protocol.


    * UDP also allows an application to contact a service on a certain port on the remote machine, but it doesn't establish a connection for this. Instead, you may use it to send single packets to the destination service.
7.1.9. Connection Ports
    * Ports may be viewed as attachment points for network connections. If an application wants to offer a certain service, it attaches itself to a port and waits for clients to connect to this port (this is also called listening on the port).
    * A client that wants to use this service allocates a port on its local host, and connects to the server's port on the remote host.
    * It is worth noting that although both TCP and UDP connections rely on ports, these numbers do not conflict. This means that TCP port 513, for example, is different from UDP port 513. In fact, these ports can serve as access points for two different services.
    * Some of the common ports you will come across are port 80 (used by httpd), 21 (used by ftp), 22 (used by sshd), etc.
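    * The mapping between service names and these well-known port numbers is kept in /etc/services, and can be looked up from the shell; the exact output varies slightly between distributions:
      $ grep -w 80/tcp /etc/services   # which service is registered for TCP port 80
      $ getent services ssh            # port and protocol registered for the ssh service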


7.1.10. Address Resolution
Address Resolution refers to mapping IP addresses onto Ethernet addresses. This is done by the Address Resolution Protocol, or ARP.
    * When ARP wants to find out the Ethernet address corresponding to a given IP address, it uses a feature of Ethernet known as "broadcasting", where a datagram is addressed to all stations on the network simultaneously.
    * The broadcast datagram sent by ARP contains a query for the IP address. Each receiving host compares this to its own IP address, and if it matches, returns an ARP reply to the inquiring host. The inquiring host can now extract the sender's Ethernet address from the reply.
7.1.11. IP Routing
    * When you write a letter to someone, you usually put a complete address on the envelope, specifying the country, state, zip code, etc. After you put it into the letter box, the postal service will deliver it to its destination: it will be sent to the country indicated, whose national service will dispatch it to the proper state and region, etc. The advantage of this hierarchical scheme is rather obvious.
    * IP networks are structured in a similar way. The whole Internet consists of a number of proper networks, called autonomous systems.
    * Each such system performs any routing between its member hosts internally, so that the task of delivering a datagram is reduced to finding a path to the destination host's network.
7.1.11.1). Subnetworks
    * IP addresses can be split into a host and a network part. By default, the destination network is derived from the network part of the IP address. Thus, hosts with identical IP network numbers should be found within the same network.
    * IP allows you to subdivide an IP network into several subnets or sub-networks.
    * It is worth noting that sub-netting (as the technique of generating subnets is called) is only an internal division of the network. Subnets are generated by the network owner (or the administrators) to reflect existing boundaries, be they physical (between two Ethernets) or administrative (between two departments). However, this structure affects only the network's internal behavior, and is completely invisible to the outside world.
How is sub-netting done?
In sub-netting, the network part is extended to include some bits from the host part. The number of bits that are interpreted as the subnet number is given by the so-called subnet mask, or netmask. This is a 32-bit number, too, which specifies the bit mask for the network part of the IP address.
For example: a sample network has a class-B network number of 149.76.0.0, and its netmask is therefore 255.255.0.0.
    * Internally, this network consists of several smaller networks, such as the LANs of various departments. So the range of IP addresses is broken up into 254 subnets, 149.76.1.0 through 149.76.254.0.
    * For example, Department1 has been assigned 149.76.12.0. Department2 is given a network in its own right, and is given 149.76.1.0.
    * These subnets share the same IP network number, while the third octet is used to distinguish between them. Thus they will use a subnet mask of 255.255.255.0.
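    * As a quick check of how an address splits into its network and host parts under a given netmask, Red Hat-style systems ship an ipcalc utility. The invocation below is only an illustration; option names and output format may differ on other distributions, but the result should look along these lines:
      $ ipcalc --network --broadcast 149.76.12.4 255.255.255.0
      NETWORK=149.76.12.0
      BROADCAST=149.76.12.255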


7.1.11.2). Gateways
    * A gateway is a host that is connected to two or more physical networks simultaneously and is configured to switch packets between them.
    * A gateway is assigned one IP address per network it is on. These addresses, along with the corresponding netmask, are tied to the interface the subnet is accessed through. Thus, the mapping of interfaces and addresses could look like this:

      +-------+-------------+---------------+
      | iface | address     | netmask       |
      +-------+-------------+---------------+
      | eth0  | 149.76.4.1  | 255.255.255.0 |
      | fddi0 | 149.76.1.4  | 255.255.255.0 |
      | lo    | 127.0.0.1   | 255.0.0.0     |
      +-------+-------------+---------------+

    * The last entry describes the loopback interface lo.
    * Hosts that are on two subnets at the same time are shown with both addresses.


7.1.11.3). Routing Table
The routing table, which is maintained by the kernel, is used while delivering datagrams to IP addresses on remote hosts.
    * The routing information IP uses for this is basically a table linking networks to the gateways that reach them.
    * A catch-all entry (the default route) must generally be supplied, too; this is the gateway associated with network 0.0.0.0. All packets to an unknown network are sent through the default route.
    * For larger networks, routing tables are built and adjusted at run-time by routing daemons; these run on central hosts of the network and exchange routing information to compute "optimal" routes between the member networks.
    * Depending on the size of the network, different routing protocols will be used. The most prominent one is RIP, the Routing Information Protocol, which is implemented by the BSD routed daemon.
    * Dynamic routing based on RIP chooses the best route to some destination host or network based on the number of "hops", that is, the gateways a datagram has to pass before reaching it. The shorter a route is, the better RIP rates it.
7.2. Linux Network Administration
7.2.1. Network Configuration Files
1. Resolver configuration file -- /etc/resolv.conf
This file specifies the IP addresses of DNS servers and the search domain. Unless configured to do otherwise, the network initialization scripts populate this file.
      search name-of-domain.com   - Name of your domain or your ISP's domain if using their name server
      nameserver XXX.XXX.XXX.XXX  - IP address of the primary name server
      nameserver XXX.XXX.XXX.XXX  - IP address of the secondary name server
    * This configures Linux so that it knows which DNS server will be resolving domain names into IP addresses. If using a static IP address, ask the ISP for the name server addresses or check another machine on your network.
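    * As an illustration, a filled-in /etc/resolv.conf might look like the following; the domain and name server addresses are placeholders, so substitute the values supplied by your ISP or network administrator:
      search carmatec.com
      nameserver 192.168.0.1
      nameserver 192.168.0.2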


2. /etc/hosts - Locally resolves node/host names to IP addresses.
The main purpose of this file is to resolve hostnames that cannot be resolved any other way. It can also be used to resolve hostnames on small networks with no DNS server.
    * Regardless of the type of network the computer is on, this file should contain a line specifying the IP address of the loopback device (127.0.0.1) as localhost.localdomain:
      127.0.0.1        localhost.localdomain localhost
      XXX.XXX.XXX.XXX  hostname hostname1
      192.168.0.2      srv1.carmatec.com
    * Note: when adding hosts to this file, place the fully qualified name first.
3. /etc/sysconfig/network : Red Hat network configuration file used by the system during the boot process. It specifies routing and host information for all network interfaces. The following values may be used inside it:
    * NETWORKING=<value>, where <value> is one of the following boolean values:
          o yes - Networking should be configured.
          o no - Networking should not be configured.
    * HOSTNAME=<value>, where <value> should be the Fully Qualified Domain Name (FQDN), such as hostname.example.com, but can be whatever hostname is necessary.
    * GATEWAY=<value>, where <value> is the IP address of the network's gateway.
    * GATEWAYDEV=<value>, where <value> is the gateway device, such as eth0.


    * NISDOMAIN=<value>, where <value> is the NIS domain name.
4. /etc/nsswitch.conf - System Databases and Name Service Switch configuration file.
The /etc/nsswitch.conf file is used to configure which services are to be used to determine information such as hostnames, password files, and group files.
      hosts: files dns nisplus nis
    * This example tells Linux to first resolve a host name by looking at the local hosts file (/etc/hosts); if the name is not found there, to query the DNS server defined by /etc/resolv.conf; and if it is not found there either, to look it up on your NIS server.
5. /etc/sysconfig/network-scripts/ifcfg-<interface-name>
For each network interface on a Red Hat Linux system, there is a corresponding interface configuration script. Each of these files provides information specific to a particular network interface.
    * /etc/sysconfig/network-scripts/ifcfg-eth0 is the interface configuration script for the eth0 interface.
    * It holds the configuration settings for your first ethernet port (0). Your second port is eth1.
7.2.2. Network Administration Commands
7.2.2.1). IP Address Assignment
The command ifconfig is used for this purpose. This command is used to configure network interfaces, or to display their current configuration. In addition to activating and deactivating interfaces with the up and down settings, this command is necessary for setting an interface's address information.
Determining your IP address assignment


    * You can determine the IP address of a Linux machine, and which device it is assigned to, using the ifconfig command.
      $ ifconfig
Setting up the main IP
    * An IP interface needs to be told both its own address and the network mask and broadcast address of its subnet.
    * To configure the IP 192.168.10.12 on the interface eth0, you can use:
      $ ifconfig eth0 192.168.10.12 netmask 255.255.255.0 broadcast 192.168.10.255 up
    * 255.255.255.0 is the subnet mask.
    * After this, to make the change permanent so that this IP is activated after every system reboot, a file called /etc/sysconfig/network-scripts/ifcfg-eth0 has to be created, with contents like the following for a static IP address configuration:
      DEVICE=eth0
      BOOTPROTO=static
      BROADCAST=192.168.10.255
      IPADDR=192.168.10.12
      NETMASK=255.255.255.0
      NETWORK=192.168.10.0
      ONBOOT=yes
    * You can also use the command line above to change the main IP address of a machine.
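    * For comparison, if the address is to be obtained automatically from a DHCP server rather than set statically, the same file can be reduced to a minimal sketch like the one below (this assumes a DHCP server is reachable on the LAN):
      DEVICE=eth0
      BOOTPROTO=dhcp
      ONBOOT=yes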


Adding more IP addresses to a machine
    * Using ifconfig, you can add more IPs (aliases) to a machine using the command line below:
      $ ifconfig eth0:0 192.168.10.13 netmask 255.255.255.0 broadcast 192.168.10.255 up
    * In this case, the file that needs to be created is /etc/sysconfig/network-scripts/ifcfg-eth0:0, so that this IP is activated after system boot up. A sample file is given below.
      DEVICE=eth0:0
      BOOTPROTO=static
      BROADCAST=192.168.10.255
      IPADDR=192.168.10.13
      NETMASK=255.255.255.0
      NETWORK=192.168.10.0
      ONBOOT=yes
    * If you are adding yet another IP, the file will be ifcfg-eth0:1 and the command line will be:
      $ ifconfig eth0:1 192.168.10.14 netmask 255.255.255.0 broadcast 192.168.10.255 up
    * Note: after making these changes, you need to restart the network service using
      $ /etc/rc.d/init.d/network restart
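    * To confirm that the extra addresses are active, the alias interfaces can be listed alongside the physical one; the interface names here simply follow the example above:
      $ ifconfig eth0:0          # show the first alias on its own
      $ ifconfig -a | grep eth0  # list eth0 together with all of its aliases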


    * The command 'usernetctl' can be used to activate or de-activate a network interface:
      $ usernetctl eth0 up
      $ usernetctl eth0:1 up
      $ usernetctl eth0 down
7.2.2.2). Setting up Routing
Routing
A routing table is a simple set of rules that tells what will be done with network packets.
    * The destination address of every outgoing packet is checked against every line of the routing table maintained by the kernel; if a matching line is found then the packet is sent out through the interface listed on that line of the table; if no match is found the system returns the error "Unreachable host".
    * The route command is the tool used to display or modify the routing table.
    * If you type "route" or "route -n" on a machine having the IP 192.168.2.2 on eth0, a routing table like the one below will be displayed:
      $ route
      Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
      192.168.2.2   *             255.255.255.255  UH     0       0    0    eth0
      192.168.2.0   *             255.255.255.0    U      0       0    0    eth0
      127.0.0.0     *             255.0.0.0        U      0       0    0    lo
      default       192.168.0.2   0.0.0.0          UG     0       0    0    eth0
    * The last line, which has the Genmask 0.0.0.0, is the default route, and the default gateway is set to 192.168.0.2. All packets to an unknown network are sent through the default route.


    * The routing table looks like a set of instructions, very similar to a case statement with a "default" at its end, and can be described as below for the above routing table setup:
      if (address = me) then send to me;
      elseif (address = my network) then send to my network;
      elseif (address = my local) then send to my local interface;
      else send to my gateway 192.168.0.2;
    * Iface : the interface to which packets for this route will be sent.
Setting Up Routing
    * The default gateway can be set with the route command using the command line below:
      $ route add -net default gw 192.168.2.0 dev eth0   (for a network)
      OR
      $ route add default gw 192.168.2.0 eth0            (for a machine)
    * To set up routing for more than two network interfaces, i.e. if you have both eth0 as well as eth1, you may use the command lines below:
      $ route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.0.2 dev eth0
      $ route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.0.3 dev eth1
    * Note that in the above example the network 192.168.2.0 uses the gateway 192.168.0.2 and 192.168.1.0 is configured to use the gateway 192.168.0.3.
    * The flags above mean the following:
      U - Route is up
      H - Only a single host can be reached through the route. For example, this is the case for the loopback entry 127.0.0.1.
      G - Use gateway
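    * Routes to individual hosts can be added in the same way; the addresses below are purely illustrative and follow the example network above:
      $ route add -host 192.168.1.25 gw 192.168.0.3 dev eth1   # reach a single host via a specific gateway
      $ route -n                                               # verify; the new entry appears with flags UGH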


Deleting a Route
    * A route to a network can be removed using the command line below:
      $ route del -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.0.2 eth0
    * For a standalone machine, it can be removed as:
      $ route del default gw 192.168.2.0 eth0
7.2.2.3). Network Monitoring/Analysis Tools
1. Netstat :
    * Displays information about the system's currently active network connections, including port connections, routing tables, and more.
    * To display the routing table, use the options
      $ netstat -nr   (or netstat -r)
    * The -n flag shows numerical addresses instead of symbolic hostnames.
    * To get the list of programs or services listening on all the open ports on the system, along with their process id or program name, use
      $ netstat -lpn
    * To display all connected sockets and the foreign IPs the connections are coming from, use
      $ netstat -an
    * Using the -a flag by itself will display all sockets from all families.


    * To see all connections from outside to the httpd port 80, you may use
      $ netstat -an | grep 80
    * To display statistics for the network interfaces currently configured:
      $ netstat -i
2. Traceroute:
    * Used to determine the network route from your computer to some other computer on your network or the internet. It can be used with a hostname or an IP address.
      $ traceroute 216.239.39.99
      OR
      $ traceroute google.com
    * Traceroute will list the series of hosts/gateways through which your packets travel on their way to a given destination.
3. Ping
    * The IP protocol includes control messages called ICMP (Internet Control Message Protocol) packets.
    * One type of ICMP packet is called an "echo request", and the IP rules require its recipient to send back an "echo reply".
    * These are incredibly useful because you can determine:
          o whether the remote host is up and talking to the network,
          o the time required for a packet to make a round trip to the host,


          o by sending a few dozen echo requests, what fraction of the packets sent between the hosts get lost somewhere along the way.
    * The ping command sends echo requests to the host you specify on the command line, and lists the responses received along with their round trip time.
    * When you terminate ping (probably by hitting Control-C) it summarizes the results, giving the average round trip time and the percentage of packet loss.
    * This command is used constantly to determine whether there is a problem with the network connection between two hosts.
    * The ping command can be called with a hostname or an IP address:
      $ ping google.com
      $ ping 216.239.39.99
4. arp
    * The ARP (Address Resolution Protocol) table normally uses an automatic mechanism to find which physical addresses go with which IP addresses. The arp command displays this table, and can be used to modify it, though this necessity is rare.
    * The command line to display the ARP table, and a sample output, is given below:
      $ arp -a
      IP address      HW type            HW address
      172.16.1.3      10Mbps Ethernet    00:00:C0:5A:42:C1
      172.16.1.2      10Mbps Ethernet    00:00:C0:90:B3:42
      172.16.2.4      10Mbps Ethernet    00:00:C0:04:69:AA
    * The arp -s command can be used to manually add a static entry to the table, associating an IP address with a hardware address. The syntax is:
      $ arp -s ip_address ethernet_address
      $ arp -s 220.0.0.182 00-40-af-36-0c-38


    * The column HW address is the Ethernet, or MAC, address. A typical Ethernet address (also known as a MAC address - Media Access Control) looks like this: aa-bb-cc-dd-ee-ff, where aa-bb-cc is a number unique to the manufacturer and dd-ee-ff is a serial number.
5. tcpdump
    * Tcpdump is a command-line tool for monitoring network traffic.
    * Tcpdump can capture and display the packet headers on a particular network interface or on all interfaces. Tcpdump can display all of the packet headers, or just the ones that match particular criteria.
      $ tcpdump
    * To print all packets arriving at or departing from the host educarma.com:
      $ tcpdump host educarma.com
7.2.2.4). Changing the System Hostname
    * Use the command hostname. Hostname is the program that is used to either set or display the current host, domain or node name of the system. These names are used by many of the networking programs to identify the machine.
      $ hostname
    * To change the hostname, use either of the options below:
      $ echo "<new hostname>" > /proc/sys/kernel/hostname
      $ sysctl -w kernel.hostname=educarma.com
    * Sysctl is used to change kernel parameters at runtime. The parameters available are those listed under /proc/sys/.
    * To make the change in hostname permanent, the new hostname has to be added to the file /etc/sysconfig/network using the entry below:
      HOSTNAME=<new hostname>
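    * A quick way to confirm that the change has taken effect, and that it will survive a reboot, is to read the value back from each place it is stored:
      $ hostname                               # value currently in use
      $ cat /proc/sys/kernel/hostname          # the same value, read from the kernel
      $ grep HOSTNAME /etc/sysconfig/network   # the persistent setting applied at boot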


7.2.2.5). Networking terms
ARP - Address Resolution Protocol. Used to translate between IP addresses and hardware (Ethernet) addresses. Uses broadcast messages for resolution.
BOOTP - A protocol used to allow client computers to get their IP address from a BOOTP server. DHCP supersedes, though does not replace, this protocol.
DHCP - Dynamic Host Configuration Protocol, allows clients to get their IP addresses from a DHCP server. This system "leases" IP addresses to clients for limited periods of time. If the client has not used its IP address within the lease time, the IP address is free for re-assignment.
ICMP - Internet Control Message Protocol. Part of the IP layer. Communicates error messages and other messages that require attention.
IGMP - Internet Group Management Protocol. Protocol used to manage multicasting through routers.
IP - Three kinds of IP addresses are unicast, broadcast and multicast.
MBONE - Used to refer to a network that supports multicasting.
NIS - Network Information Service, a name service created by Sun Microsystems.
NFS - Network File System, allows two Unix-style computers to mount and access part or all of a file system on a remote computer.
OSPF - Open Shortest Path First, a dynamic routing protocol intended as a replacement for RIP.
PPP - Point-to-Point Protocol, a serial protocol commonly used to connect to the internet using a modem.
RARP - Reverse ARP, used by clients to determine their IP addresses.
RIP - Routing Information Protocol, used by almost all TCP/IP implementations to perform dynamic routing.
RPC - Remote Procedure Call, a set of function calls used by a client program to call functions in a remote server program.
SLIP - Serial Line Internet Protocol.
SMTP - Simple Mail Transport Protocol, commonly used as the mail message transport protocol.
SNMP - Simple Network Management Protocol.
UDP - User Datagram Protocol, a transport layer protocol.
UUCP - Unix to Unix Copy, a protocol that allows Unix computers to exchange files.


7.2.3. Packet Filtering Using Iptables
    * Iptables is a tool for packet filtering - the process of controlling network packets as they enter, move through and exit the network stack within the kernel.
    * Iptables is part of the kernel-space netfilter project; pre-2.4 kernels relied on ipchains instead.
    * Using Linux and iptables/ipchains, one can configure a gateway which will allow all computers on a private network to connect to the internet via the gateway and one external IP address, using a technology called "Network Address Translation" (NAT) or masquerading.
    * Iptables/ipchains can also be configured so that the Linux computer acts as a firewall, providing protection to the internal network.
7.2.3.1). Network Address Translation (NAT)
    * An individual on a computer on the private network may point their web browser to a site on the internet. This request is recognized to be beyond the local network, so it is routed to the Linux gateway using the private network address.
    * The request for the web page is sent to the web site using the external internet IP address of the gateway.
    * The reply is returned to the gateway, which then translates the IP address back to the computer on the private network which made the request. This is often called IP masquerading.
    * The software interface which enables one to configure the kernel for masquerading is iptables (Linux kernel 2.4) or ipchains (Linux kernel 2.2). A worked masquerading example is given with the other iptables examples later in this section.
7.2.3.2). Packet filtering tables
The Linux kernel has the built-in ability to filter packets, allowing some of them into the system while stopping others. The 2.4 kernel's netfilter has three built-in tables or rules lists:
1. Filter - The default table for handling network packets.


2. Nat - Used to alter packets that create a new connection. Used for Network Address Translation.
3. Mangle - Used for specific types of packet alteration.
    * Each of these tables in turn has a group of built-in chains which correspond to the actions performed on the packet by netfilter. The built-in chains of the different tables are as shown below.
7.2.3.3). Built-In Chains for the different tables
Chains available in the Filter table
    INPUT - Applies to network packets that are targeted for the host.
    OUTPUT - Applies to locally-generated network packets.
    FORWARD - Applies to network packets routed through the host.
Chains available in the NAT table
    PREROUTING - Alters network packets when they arrive.
    OUTPUT - Alters locally-generated network packets before they are sent out.
    POSTROUTING - Alters network packets before they are sent out.
Chains available in the Mangle table
    INPUT - Alters network packets targeted for the host.
    OUTPUT - Alters locally-generated network packets before they are sent out.
    FORWARD - Alters network packets routed through the host.
    PREROUTING - Alters incoming network packets before they are routed.
    POSTROUTING - Alters network packets before they are sent out.
    * Every packet sent or received by a Linux machine is subject to at least one table. Once an incoming packet is found to match a rule in a chain, a target, or action, is performed on it.
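    * The tables and their chains can be inspected at any time (run as root; the rule listings will of course depend on what has been configured):
      $ iptables -L -n          # rules in every chain of the default filter table
      $ iptables -t nat -L -n   # chains and rules of the nat table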


7.2.3.4). Types of Targets
A target is the action or policy to be taken with the matching packet. The types of targets which are available are:
    * ACCEPT - The packet skips the rest of the rule checks and is allowed to continue to its destination.
    * REJECT - If a rule specifies the optional REJECT target, the packet is dropped, but an error packet is sent back to the packet's originator.
    * DROP - The packet is refused access to the system and nothing is sent back to the host that sent the packet.
    * QUEUE - The packet is passed to user space, where it can be manipulated by user programs.
    * RETURN - Stops traversing the current chain; checking resumes in the calling chain, or the chain's default policy applies.
    * MARK - Sets a netfilter mark value on the packet, which later rules or routing decisions can act upon.
    * MASQUERADE - Used with the nat table; like source NAT, but intended for connections going out of an interface with a dynamically assigned IP address.
    * LOG - Logs matching packets to the kernel log, optionally with a prefix message.
Every chain has a default policy of ACCEPT, DROP, REJECT, or QUEUE. If none of the rules in the chain apply to the packet, then the packet is dealt with in accordance with the default policy.
7.2.3.5). The Iptables Commandline
    * Rules that allow packets to be filtered by the kernel are put in place by running the iptables command.
Command structure of Iptables
      $ iptables [-t <table-name>] <command> <chain-name> <parameter-1> <option-1> ... <parameter-n> <option-n>


    * <table-name> - Lets the user select the table, i.e. Filter, NAT or Mangle.
    * <command> - Commands tell iptables to perform a specific action on the chosen table, like appending, checking, deleting, renaming or flushing.
Commonly used Iptables commands
    * -A : Appends the iptables rule to the end of the specified chain. This is the command used to simply add a rule when rule order in the chain does not matter.
    * -C : Checks a particular rule before adding it to the user-specified chain. This command can help you construct complicated iptables rules by prompting you for additional parameters and options.
    * -D : Deletes a rule in a particular chain by number (such as 5 for the fifth rule in a chain). You can also type the entire rule, and iptables will delete the rule in the chain that matches it.
    * -E : Renames a user-defined chain. This does not affect the structure of the table.
    * -F : Flushes the selected chain, which effectively deletes every rule in the chain. If no chain is specified, this command flushes every rule from every chain.
    * -h : Provides a list of command structures, as well as a quick summary of command parameters and options.
    * -I : Inserts a rule in a chain at a point specified by a user-defined integer value. If no number is specified, iptables will place the rule at the top of the chain.
    * -L : Lists all of the rules in the chain specified after the command. To list all rules in all chains in the default filter table, do not specify a chain or table. Otherwise, the following syntax should be used to list the rules in a specific chain of a particular table:
      $ iptables -L <chain-name> -t <table-name>
      $ iptables -L


    * -N : Creates a new chain with a user-specified name.
    * -P : Sets the default policy for a particular chain, so that when packets traverse an entire chain without matching a rule, they will be sent on to a particular target, such as ACCEPT or DROP.
    * -R : Replaces a rule in a particular chain. The rule's number must be specified after the chain's name. The first rule in a chain corresponds to rule number one.
    * -X : Deletes a user-specified chain. Deleting a built-in chain for any table is not allowed.
    * -Z : Zeros the byte and packet counters in all chains for a particular table.
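    * As a brief illustration of these commands working together, a common way to begin a restrictive firewall is to flush any existing rules and set default policies before appending specific ACCEPT rules. This is only a sketch; setting the INPUT policy to DROP on a remote machine without first allowing SSH will cut off access:
      $ iptables -F                # flush every rule from every chain of the filter table
      $ iptables -P INPUT DROP     # drop anything not explicitly accepted
      $ iptables -P FORWARD DROP
      $ iptables -P OUTPUT ACCEPT  # allow locally generated traffic out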


    * <chain-name> - The name of the chain on which the command operates; this can be one of the built-in chains or a user-defined chain.
    * <parameter-n> - Once certain iptables commands are specified, including those used to add, append, delete, insert, or replace rules within a particular chain, parameters are required to construct a packet filtering rule. For example:
    * -c : Resets the counters for a particular rule. This parameter accepts the PKTS and BYTES options to specify which counter to reset.
    * -d : Sets the destination hostname, IP address, or network of a packet that will match the rule. When matching a network, the following IP address/netmask formats are supported:
          o N.N.N.N/M.M.M.M - where N.N.N.N is the IP address range and M.M.M.M is the netmask.
          o N.N.N.N/M - where N.N.N.N is the IP address range and M is the netmask (prefix length).
    * -f : Applies the rule only to fragmented packets.
          o By using the ! option after this parameter, only unfragmented packets will be matched.
    * -i : Sets the incoming network interface, such as eth0 or ppp0. This optional parameter may only be used with the INPUT and FORWARD chains of the filter table and the PREROUTING chain of the nat and mangle tables. This parameter also supports the following special options:
          o ! - Tells this parameter not to match, meaning that any specified interfaces are specifically excluded from this rule. For example, -i ! eth0 would match all incoming interfaces except eth0.
          o + - A wildcard character used to match all interfaces which match a particular string. For example, the parameter -i eth+ would apply this rule to all Ethernet interfaces but exclude other interfaces, such as ppp0.
          o If the -i parameter is used but no interface is specified, then every interface is affected by the rule.
    * -j : Tells iptables to jump to a particular target when a packet matches a particular rule. Valid targets to be used after the -j option include the standard options ACCEPT, DROP, QUEUE, and RETURN, as well as extended options that are available through modules loaded by default with the Red Hat Linux iptables RPM package, such as LOG, MARK, and REJECT, among others. You may also direct a packet matching this rule to a user-defined chain outside of the current chain so that other rules can be applied to the packet. If no target is specified, the packet moves past the rule with no action taken; however, the counter for this rule is still increased by one, as the packet matched the specified rule.
    * -o : Sets the outgoing network interface for a rule and may only be used with the OUTPUT and FORWARD chains of the filter table, and the POSTROUTING chain of the nat and mangle tables. This parameter's options are the same as those of the incoming network interface parameter (-i).
    * -p : Sets the IP protocol for the rule, which can be icmp, tcp, udp, or all (to match every supported protocol). In addition, any protocol listed in /etc/protocols may also be used. If this option is omitted when creating a rule, the all option is the default.
    * -s : Sets the source for a particular packet, using the same syntax as the destination (-d) parameter. The match can also be inverted with an !: a match in the form of --source ! 192.168.0.0/24 would match all packets with a source address not coming from within the 192.168.0.x range.


Match Options
    * Different network protocols provide specialized matching options which may be set in specific ways to match a particular packet using that protocol.
    * The protocol must first be specified in the iptables command, using -p <protocol-name> (where <protocol-name> is the target protocol), to make the options for that protocol available.
    * TCP Protocol - The TCP protocol is specified using the option -p tcp, and the match options available for tcp are shown below.
    * --dport : Sets the destination port for the packet. Use either a network service name (such as www or smtp), a port number, or a range of port numbers to configure this option. The --destination-port match option is synonymous with --dport.
          o To specify a range of port numbers, separate the two numbers with a colon (:), such as -p tcp --dport 3000:3200. The largest acceptable valid range is 0:65535.
          o Use an exclamation point character (!) after the --dport option to tell iptables to match all packets which do not use that network service or port, such as -p tcp --dport ! 80.
    * --sport : Sets the source port of the packet, using the same options as --dport. The --source-port match option is synonymous with --sport.
    * --syn : Applies to all TCP packets designed to initiate communication, commonly called SYN packets. Any packets that carry a data payload are not touched. Placing an exclamation point character (!) before the --syn option causes all non-SYN packets to be matched instead. E.g.: iptables -p tcp ! --syn
    * --tcp-flags : Allows TCP packets with specific bits, or flags, set to be matched by a rule. The --tcp-flags match option accepts two parameters. The first parameter is the mask, which sets the flags to be examined in the packet. The second parameter refers to the flags that must be set in order to match. The possible flags are: ACK, FIN, PSH, RST, SYN, URG, ALL, NONE.
          o For example, an iptables rule which contains -p tcp --tcp-flags ACK,FIN,SYN SYN will only match TCP packets that have the SYN flag set and the ACK and FIN flags unset.
          o Using the exclamation point character (!) after --tcp-flags reverses the effect of the match option, for example: iptables -p tcp --tcp-flags ! SYN,FIN,ACK SYN
    * --tcp-option : Attempts to match TCP-specific options that can be set within a particular packet. This match option can also be reversed with the exclamation point character (!). A TCP option is a specific part of the TCP header.
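    * For instance, a pair of rules using these TCP match options might accept incoming SSH connections while dropping malformed "null scan" probes; the rules below are illustrative only:
      $ iptables -A INPUT -p tcp --dport 22 -j ACCEPT          # accept SSH connections
      $ iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP  # drop packets with no TCP flags set at all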


Target Options
    * Once a packet has matched a particular rule, the rule can direct the packet to a number of different targets that decide its fate and, possibly, take additional actions.
    * Each chain has a default target, which is used if none of the rules on that chain match a packet, or if none of the rules which match the packet specify a target. The following are the standard targets:
    * <user-defined-chain> : Replace <user-defined-chain> with the name of a user-defined chain within the table. This target passes the packet to the target chain.
    * ACCEPT - Allows the packet to successfully move on to its destination or another chain.
    * DROP - Drops the packet without responding to the requester. The system that sent the packet is not notified of the failure.


    * QUEUE - The packet is queued for handling by a user-space application.
    * RETURN - Stops checking the packet against rules in the current chain. If the packet with a RETURN target matches a rule in a chain called from another chain, the packet is returned to the first chain to resume rule checking where it left off. If the RETURN rule is used on a built-in chain and the packet cannot move up to its previous chain, the default target for the current chain decides what action to take.
Rules created with the iptables command are stored in memory. If the system is restarted after setting up iptables rules, they will be lost. In order for netfilter rules to persist through a system reboot, they need to be saved. To do this, log in as root and type:
      $ /sbin/service iptables save
    * The next time the system boots, the iptables init script will reapply the rules saved in /etc/sysconfig/iptables by using the /sbin/iptables-restore command.
    * The rules in the tables can be seen by using
      $ iptables -L
    * To flush all the rules in the filter or nat tables, use
      $ iptables --flush
      $ iptables --table nat --flush
    * To stop/start/restart iptables:
      $ /etc/rc.d/init.d/iptables stop/start/restart
    * To delete all chains that are not in the default filter and nat tables:
      $ iptables --delete-chain
      $ iptables --table nat --delete-chain


    * To deny all connections from a specific host:
      $ iptables -I INPUT -s XXX.XXX.XXX.XXX -j DROP
    * For debugging and logging, add the lines below to iptables and you can see the messages in /var/log/messages:
      $ iptables -A INPUT -j LOG --log-prefix "INPUT_DROP: "
      $ iptables -A OUTPUT -j LOG --log-prefix "OUTPUT_DROP: "
    * To disallow access to port 80 from the IP address 212.160.2.4, you can use
      $ iptables -A INPUT -p tcp --dport 80 -s 212.160.2.4 -j DROP
      Here you are adding a rule to the INPUT chain which drops all packets to port 80 on your machine from the IP address 212.160.2.4.
    * To disallow access to the smtp server from the network 212.160.2.0, you can use
      $ iptables -A INPUT -p tcp --dport 25 -s 212.160.2.0/24 -j DROP
    * To disallow access to the smtp server from the network 212.160.0.0, you can use
      $ iptables -A INPUT -p tcp --dport 25 -s 212.160.0.0/16 -j DROP
    * A destination of -d 0.0.0.0/0 refers to all, or any, destination address of a packet.


    * To view the rules along with their rule numbers, so that it is easier to delete a rule from a chain:
      $ iptables -L --line-numbers
    * To delete rule number 2 from the INPUT chain of the default filter table:
      $ iptables -D INPUT 2
    * To set up NAT so that all packets from the network 192.168.10.0/24 are altered to appear to come from the public IP 202.15.20.198, use the command line below to add a rule to the POSTROUTING chain of the nat table:
      $ iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -j SNAT --to-source 202.15.20.198
    * SNAT, or Source NAT, rewrites the source address of outgoing packets; the kernel records each translation in its NAT table so that to-and-fro traffic is routed back to the correct IP on the internal network.
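    * When the gateway's public address is assigned dynamically (for example by the ISP), the MASQUERADE target described earlier can be used instead of SNAT. A minimal sketch, assuming eth0 is the internet-facing interface and 192.168.10.0/24 is the private LAN:
      $ echo 1 > /proc/sys/net/ipv4/ip_forward       # allow the gateway to forward packets
      $ iptables -t nat -A POSTROUTING -o eth0 -s 192.168.10.0/24 -j MASQUERADE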
For example, if your password entry is recorded in the NIS password database, you will be able to log in on all machines on the net which have the NIS client programs running.

7.3.1. NIS Maps
* NIS client programs query the NIS servers for data which is stored in their databases, which are known as maps.
* NIS maps are stored in DBM format, which is a binary format based on simple ASCII files.
* ASCII to DBM conversion can be done by using the makedbm command.

7.3.2. NIS Domain
An NIS domain refers to a group of systems in a network or subnet which use the same NIS map.

7.3.2.1). NIS Topologies used
1. A single domain with a master server and one or more clients.
2. A single domain with one master server, one or more slave NIS servers and one or more clients.
3. Multiple domains, each with its own master server, no slave servers and one or more clients.
4. Multiple domains, each with its own master server, its own slave servers and one or more clients.
7.3.3. NIS Server Installation and Configuration
7.3.3.1). Installing the NIS Server utility
* There are two NIS packages and the portmap server that need to be installed for the NIS server to work on a machine.
        o ypserv
        o yp-tools
        o portmap (if not already installed).
* The NIS utilities - ypserv and yp-tools - can be found at,
        Site                Directory                       File Name
        ftp.kernel.org      /pub/linux/utils/net/NIS        ypserv-2.9.tar.gz
        ftp.kernel.org      /pub/linux/utils/net/NIS        yp-tools-2.9.tar.gz
* Compile the NIS software (ypserv and yp-tools) to generate the ypserv and makedbm programs. The makedbm program converts the ASCII format database files into DBM format.
* NIS server configuration involves the following steps,
1. Setting up the NIS domain name.
2. Configuring and starting the NIS server daemon ypserv
3. Initializing the NIS maps
4. Starting the NIS password daemon
5. Starting the NIS transfer daemon (if you are using slave servers)
6. Modifying the startup process to start the NIS daemons when the system reboots.

7.3.3.2). Setting up the NIS domain name
* To set up the NIS domain name, give the entry below at the shell prompt.
$ nisdomainname <domainname>
eg: $ nisdomainname carmatrain.com
* Next reissue the nisdomainname command to confirm that the NIS domain is set. This is a temporary arrangement. To make it permanent, add the entry NISDOMAIN=nisdomainname in the /etc/sysconfig/network file.

7.3.3.3). Configuring and starting the daemon ypserv
* With the NIS domain name set you can start the NIS server daemon. The key configuration files are,
1. /var/yp/securenets
It contains the netmask and network number pairs that define the list of hosts permitted to access the NIS server.
        255.255.255.0 192.168.0.0
2. /etc/ypserv.conf (Configuration for the primary NIS server daemon and the NIS transfer daemon ypxfrd). It contains runtime configuration options, called option lines, for ypserv, and host access information, called access rules. The default values in /etc/ypserv.conf are sufficient for most NIS server configurations.
        dns: no
        *:shadow.byname:port:yes
        *:passwd.adjunct.byname:port:yes
* Entries in the file appear one per line. Each line is made up of colon separated fields defining either an option line or an access rule. An option line has the format,
        option:[yes/no]
* Options can be either dns or xfr_check_port.
* dns controls whether or not the NIS server performs a DNS lookup for hosts not listed in the host maps. The default is no.
* xfr_check_port controls whether ypserv runs on a port numbered less than 1024, a so-called privileged port. The default is yes.
* Access rules have a slightly more complicated format.
        host:map:security:mangle[:field]
* Host - the IP address to match. Wildcards are also allowed.
* Map - the name of a map to match, or * for all maps.
* Security - the type of security to use. Can be one of none, port, deny or des.
        o none always enables access to hosts; the passwd field is mangled if so configured, the default is not to mangle.
        o port enables access if the connection is coming from a privileged port (<1024). If mangle is set to yes, access is enabled, but the password field is mangled. If mangle is set to no, access is denied.
        o deny denies the matching host access to this map.
        o des requires DES authentication.
* Mangle - possible values are "yes" or "no". If "yes", the field entry will be mangled. Mangling means that the field is replaced by 'x' if the port check reveals that the request originated from an unprivileged port. If set to no, the field is not mangled if the requesting port is unprivileged.
* Field - the field number in the map to mangle. The default value if the field is not specified is 2, which corresponds to the password field in /etc/group, /etc/shadow, and /etc/passwd.
* Access rules are tried in order, and all rules are evaluated. If no rule matches a connecting host, access to the corresponding map is enabled.
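For illustration, an /etc/ypserv.conf fragment combining an option line with access rules in the host:map:security:mangle format described above could look like the sketch below; the 192.168.0. network is only an example and should match your own securenets settings.
        dns: no
        # hosts on 192.168.0.* may read all maps without restriction
        192.168.0.:*:none:no
        # everyone else must come from a privileged port to read passwd.byname,
        # otherwise the password field is mangled
        *:passwd.byname:port:yes
        # no other host may read the shadow map at all
        *:shadow.byname:deny:no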
* For NIS to work, the port mapper should be running. Portmap translates RPC program numbers into TCP/IP port numbers. You can check the status of portmap by running the command,
$ /sbin/service portmap status
which should show an output like,
        portmap (pid 559) is running...
If it is not running you can start it by issuing the command
$ /sbin/service portmap start
Once portmap is started you can start the NIS server by issuing the command,
$ /sbin/service ypserv start
Once the ypserv daemon is started, the command
$ rpcinfo -u localhost ypserv
should give an output like the one below
        program 100004 version 1 ready and waiting
        program 100004 version 2 ready and waiting

7.3.3.4). Initializing the NIS Maps
* Now you need to generate the password database using ypinit, which generates the complete set of NIS maps and places them in a directory under /var/yp named after the NIS domain. To generate the NIS database issue the command,
$ /usr/lib/yp/ypinit -m
The -m option is used to indicate that it is creating maps for the master server.
If you are using a slave server for redundancy, then make sure that ypwhich -m works from each of them. This means that your slaves must also be configured as NIS clients.
To create a slave server using the databases from the master server named masterhost, use
$ /usr/lib/yp/ypinit -s masterhost

7.3.3.5). Starting the NIS Password Daemon
When new users are added or deleted, the NIS clients and slaves should be notified of this change. The daemon that handles this change is yppasswdd.
* yppasswdd handles password changes and updates other NIS information that depends on user passwords.
* This daemon runs only on the NIS master server. To start it,
$ /sbin/service yppasswdd start

7.3.3.6). Starting the Server Transfer Daemon
ypxfrd is used to speed up the transfer of large maps from the NIS master to the slave servers.
$ /sbin/service ypxfrd start

7.3.3.7). Modifying the startup process to start NIS at Boot
* Firstly, to permanently save the NIS domain name, add the line below to /etc/sysconfig/network.
        NISDOMAIN=carmatec.com
* Run the GUI tool "serviceconf", which is the Red Hat service configuration tool, to configure the NIS daemons to start at boot time. After starting serviceconf, go to Main Menu --> System Settings --> Server Settings --> Services. Enable the checkbox for the ypserv and yppasswdd services.

7.3.4). Installing and Configuring the NIS Client
7.3.4.1). Installing the ypbind utility
The NIS client requires the ypbind package to be installed on it, as well as the portmapper server running.
* The ypbind daemon binds NIS clients to an NIS domain. ypbind must be running on any machine running NIS client programs.
* The ypbind software is also available from http://ftp.kernel.org/pub/linux/utils/net/NIS/
* Compile and install the software as per the instructions inside.
* Install the portmapper package also if it is not already installed on the server.
* After this, the NIS client needs to be configured, the steps for which are given below:
1. Set up the NIS domain name.
2. Configure and start the NIS client daemon.
3. Test the client daemon.
4. Configure the client startup files to use NIS.
5. Reboot the client.

7.3.4.2). Setting up the NIS domain name
Add the entry in the /etc/sysconfig/network file as
        NISDOMAIN=<nisdomainname>
For example, to set the NIS domain as carmatec.com, you may give
        NISDOMAIN=carmatec.com

7.3.4.3). Configure and start the NIS client daemon
* The NIS client daemon ypbind uses the configuration file /etc/yp.conf, which specifies which NIS servers the client should use and how to locate them.
        ypserver <nisserverip>
* Valid entries are
        + domain NISDOMAIN server HOSTNAME : Use server HOSTNAME for the domain NISDOMAIN.
        + domain NISDOMAIN broadcast : Use broadcast on the local net for domain NISDOMAIN.
        + ypserver HOSTNAME : Use server HOSTNAME for the local domain. The IP address of the server must be listed in /etc/hosts.
* A sample entry can be
        ypserver 192.168.0.2
OR
        domain educarma.com server 192.168.0.2
* The same thing above can also be done using a GUI tool called authconfig.
Now start the NIS client by issuing the command,
$ /sbin/service ypbind start

7.3.4.4). Test the Client daemon
* The command line below using rpcinfo will let you confirm that ypbind was able to register its service with the portmapper.
$ rpcinfo -u 192.168.0.2 ypbind
* The command line below can be used to check if the portmapper is running
$ rpcinfo -p 192.168.0.2
* Now edit the /etc/host.conf file to use NIS for hostname lookups, ie change the order to the entry below
        order hosts,nis,bind
* The configuration above means that the nameservice lookups will first query /etc/hosts, then NIS and then use BIND, the nameserver.
* Lastly, edit /etc/nsswitch.conf and add the entries shown below if not already present.
        passwd: files nis
        shadow: files nis
        group:  files nis
        hosts:  files nis

7.3.4.5). Configuring the NIS Client startup files
* After configuring the NIS client, you need to make sure that the client daemon ypbind starts and stops when the system starts and stops.
* This can be done by checking the daemon 'ypbind' in the Service Configuration Tool, which can be opened using the command "serviceconf"
$ serviceconf
* Save the changes after checking ypbind, and the NIS client services will be up and running after a system reboot.
* Reboot the server to make sure the NIS client daemon starts.

7.3.4.6). NIS Configuration Files/Commands
NIS File/Command        Description/Usage
ypwhich                 Displays the name of the master NIS server
                        $ ypwhich
ypcat                   Prints the entries in an NIS database
                        $ ypcat -x (to check options)
                        $ ypcat passwd (to see entries from the map "passwd.byname")
yppasswd                Changes user passwords and info on the NIS server
                        $ yppasswd carma
yppoll                  Displays the server and version number of an NIS map
                        $ yppoll -h 192.168.0.2 passwd.byname
ypmatch                 Prints the value of one or more entries in an NIS map
/etc/yp.conf            Configures the NIS client bindings
/etc/nsswitch.conf      Configures the system name database lookup
/etc/host.conf          Configures host name resolution

7.3.5. More about NIS
* Within a network which has NIS set up, there must be at least one machine acting as an NIS server.
* You can have multiple NIS servers, each serving different NIS "domains" - or you can have cooperating NIS servers, where one is the master NIS server and all the others are so-called slave NIS servers (for a certain NIS "domain", that is!) - or you can have a mix of them.
* To have NIS work you need to run the program portmap, which is available at /sbin/portmap.
* Portmap is a program which converts RPC program numbers to TCP/IP port numbers. To make RPC calls you need to have portmap running; it is a prerequisite for the NIS clients and servers to work, as they rely on the RPC method of communication.
* When an RPC server is started, it will tell portmap what port number it is listening to, and what RPC program numbers it is prepared to serve.
* When a client wishes to make an RPC call to a given program number, it will first contact portmap on the server machine to determine the port number where RPC packets should be sent.

7.4. Network File Systems (NFS)
The Network File System (NFS) was developed to allow machines to mount a disk partition on a remote machine as if it were on a local hard drive.
* This allows for fast, seamless sharing of files across a network.
* There are three main configuration files you will need to edit to set up an NFS server:
1. /etc/exports
2. /etc/hosts.allow
3. /etc/hosts.deny

7.4.1. Main Configuration Files
7.4.1.1). /etc/exports file
The /etc/exports file contains a list of entries; each entry indicates a volume that is shared and how it is shared. An entry in /etc/exports will typically look like this:
        directory machine1(option11,option12) [machine2(option21,option22)]
where
directory
        the directory that you want to share. It may be an entire volume though it need not be. If you share a directory, then all directories under it within the same file system will be shared as well.
machine1 and machine2
        client machines that will have access to the directory. The machines may be listed by their DNS address or their IP address (e.g., machine.company.com or 192.168.0.8). Using IP addresses is more reliable and more secure.
optionxx
        the option listing for each machine will describe what kind of access that machine will have. Important options are:
* ro: The directory is shared read only; the client machine will not be able to write to it. This is the default.
* rw: The client machine will have read and write access to the directory.
* no_root_squash: By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server.
* Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.
* If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server.
* This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.
* no_subtree_check: If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
* sync: By default, all but the most recent version (version 1.11) of the exportfs command will use async behavior, telling a client machine that a file write is complete - that is, it has been written to stable storage - when NFS has finished handing the write over to the file system. This behavior may cause data corruption if the server reboots, and the sync option prevents this.
Eg entry:
        /var/tmp 192.168.0.3(async,rw)

7.4.1.2). /etc/hosts.allow and /etc/hosts.deny
These two files specify which computers on the network can use services on your machine. Each line of the file contains a single entry listing a service and a set of machines. When the server gets a request from a machine, it does the following:
* It first checks /etc/hosts.allow to see if the machine matches a description listed in there. If it does, then the machine is allowed access.
* If the machine does not match an entry in hosts.allow, the server then checks hosts.deny to see if the client matches a listing in there. If it does then the machine is denied access.
* If the client matches no listings in either file, then it is allowed access.

Configuring /etc/hosts.allow and /etc/hosts.deny for NFS security
* In addition to controlling access to services handled by inetd (such as telnet and FTP), these files can also control access to NFS by restricting connections to the daemons that provide NFS services.
Restrictions are done on a per-service basis.
* The first daemon to restrict access to is the portmapper. This daemon essentially just tells requesting clients how to find all the NFS services on the system.
* Restricting access to the portmapper is the best defense against someone breaking into your system through NFS, because completely unauthorized clients won't know where to find the NFS daemons.
* However, there are two things to watch out for. First, restricting portmapper isn't enough if the intruder already knows for some reason how to find those daemons. And second, if you are running NIS, restricting portmapper will also restrict requests to NIS. In general it is a good idea with NFS (as with most internet services) to explicitly deny access to IP addresses that you don't need to allow access to.
* The first step in doing this is to add the following entry to /etc/hosts.deny:
        portmap:ALL
* If you have a newer version of nfs-utils, add entries for each of the NFS daemons in hosts.deny:
        lockd:ALL
        mountd:ALL
        rquotad:ALL
        statd:ALL
* Some sysadmins choose to put the entry ALL:ALL in the file /etc/hosts.deny, which causes any service that looks at these files to deny access to all hosts unless it is explicitly allowed.
* Next, we need to add an entry to hosts.allow to give access to any hosts that we want to have access. (If we just leave the above lines in hosts.deny then nobody will have access to NFS.) Entries in hosts.allow follow the format
        service: host [or network/netmask], host [or network/netmask]
* Here, host is the IP address of a potential client; it may be possible in some versions to use the DNS name of the host, but it is strongly discouraged.
* Suppose we have the setup above and we just want to allow access to 192.168.0.1 and 192.168.0.2. We could add the following entry to /etc/hosts.allow:
        portmap: 192.168.0.1 , 192.168.0.2
* For recent nfs-utils versions, we would also add the following (again, these entries are harmless even if they are not supported):
        lockd: 192.168.0.1 , 192.168.0.2
        rquotad: 192.168.0.1 , 192.168.0.2
        mountd: 192.168.0.1 , 192.168.0.2
        statd: 192.168.0.1 , 192.168.0.2
* If you intend to run NFS on a large number of machines in a local network, /etc/hosts.allow also allows for network/netmask style entries in the same manner as /etc/exports above.

7.4.2. NFS Server Setup
7.4.2.1). Pre-requisites
To configure the NFS server, you will first need to have the appropriate packages installed. This consists mainly of a kernel which supports NFS and the nfs-utils package.
* NFS depends on the portmapper daemon, either called portmap or rpc.portmap. It will need to be started using
$ /sbin/service portmap start
* Most recent Linux distributions start this daemon in the boot scripts, but it is worth making sure that it is running before you begin working with NFS, using
$ /sbin/service portmap status

7.4.2.2). The NFS Daemons and starting them
Providing NFS services requires the services of six daemons.
1. portmap : Enables NFS clients to discover the NFS services available on a given NFS server.
2. nfsd : Provides all NFS services except file locking and quota management.
3. lockd : Starts the kernel's NFS lock manager.
4. statd : Implements NFS lock recovery when an NFS server system crashes.
5. rquotad : Handles user file quotas on exported volumes for NFS clients.
6. mountd : Processes NFS client mount requests.
* The daemons are all part of the nfs-utils package, and may be either in the /sbin directory or the /usr/sbin directory.
* If your distribution does not include them in the startup scripts, then you should add them and configure them to start in the following order:
1. portmap
2. nfsd
3. mountd
4. statd
5. rquotad (if necessary)
* lockd is started by nfsd on an as-needed basis, so there is no need to invoke it manually.
* The nfs-utils package has a sample startup script for Red Hat, and the script will take care of starting all the NFS server daemons for you except the portmapper.
$ /etc/rc.d/init.d/nfs start/stop/status/restart
* Hence if you need to restart nfs manually, the order to do so is
$ /etc/rc.d/init.d/portmap start
$ /etc/rc.d/init.d/nfs start
$ /etc/rc.d/init.d/nfslock start

7.4.2.3). Verifying that NFS is running
To do this, query the portmapper with the command rpcinfo -p to find out what services it is providing. You should get something like this:
$ rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    749  rquotad
    100011    2   udp    749  rquotad
    100005    1   udp    759  mountd
    100005    1   tcp    761  mountd
    100005    2   udp    764  mountd
    100005    2   tcp    766  mountd
    100005    3   udp    769  mountd
    100005    3   tcp    771  mountd
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    300019    1   tcp    830  amd
    300019    1   udp    831  amd
    100024    1   udp    944  status
    100024    1   tcp    946  status
    100021    1   udp   1042  nlockmgr
    100021    3   udp   1042  nlockmgr
    100021    4   udp   1042  nlockmgr
    100021    1   tcp   1629  nlockmgr
    100021    3   tcp   1629  nlockmgr
    100021    4   tcp   1629  nlockmgr
* This output shows NFS versions 2 and 3, status (rpc.statd) version 1, and nlockmgr (the service name for lockd) versions 1, 3, and 4. There are also different service listings depending on whether NFS is travelling over TCP or UDP.
* If you do not at least see a line that says portmapper, a line that says nfs, and a line that says mountd, then you will need to backtrack and try again to start up the server.
* If you do see these services listed, then you should be ready to set up NFS clients to access files from your server.

7.4.2.4). Making changes to /etc/exports later on
* If you come back and change your /etc/exports file, the changes you make may not take effect immediately.
* You should therefore run the command exportfs -ra to force nfsd to re-read the /etc/exports file. If you can't find the exportfs command, then you can kill nfsd and restart it.
* The exportfs command will also let you manipulate the list of available exports or list the currently exported file systems
$ exportfs -v                         # List currently exported file systems
$ exportfs -v -u 192.168.0.4:/home    # Un-export a file system

7.4.3. Setting up an NFS Client
7.4.3.1). Mounting remote directories
* Firstly, the kernel on the client machine needs to be compiled with NFS support.
* The portmapper should be running on the client machine, and to use NFS file locking, you also need statd and lockd running on both the client and the server.
* With portmap, lockd, and statd running, you should now be able to mount the remote directory from your server just the way you mount a local hard drive, with the mount command.
* Suppose our NFS server is called master.carma.com, and we want to mount its /home directory on slave.carma.com. Use the command line below for mounting on slave.carma.com.
$ mount -t nfs master.carma.com:/home /home1
OR
$ mount -t nfs 192.168.0.2:/home /home1 -o rw,soft
* And the directory /home on master will appear as the directory /home1 on slave.carma.com. Note that this assumes we have created the directory /home1 as an empty mount point beforehand on slave.carma.com.
* You can get rid of a file system mounted via NFS just like you would a local file system.
$ umount /home1

7.4.3.2). Getting NFS File Systems to Be Mounted at Boot Time
* NFS file systems can be added to your /etc/fstab file the same way local file systems can, so that they mount when your system starts up.
* The only difference is that the file system type will be set to nfs and the dump and fsck order (the last two entries) will have to be set to zero. So for our example above, the entry in /etc/fstab would look like:
        device                    mountpoint   fs-type   options   dump   fsckorder
        master.carma.com:/home    /home1       nfs       rw        0      0

7.4.3.3). Options for Mounting

Soft vs. Hard Mounting
There are some options which govern the way the NFS client handles a server crash or network outage. One of the cool things about NFS is that it can handle this gracefully if you set up the clients right. There are two distinct failure modes:
* soft - If a file request fails, the NFS client will report an error to the process on the client machine requesting the file access.
* hard - The program accessing a file on an NFS mounted file system will hang when the server crashes. The process cannot be interrupted or killed (except by a "sure kill") unless you also specify intr. When the NFS server is back online the program will continue undisturbed from where it was.
We recommend using hard,intr on all NFS mounted file systems.
Picking up from the previous example, the fstab entry would now look like:
        device                    mountpoint   fs-type   options        dump   fsckorder
        ...
        master.carma.com:/home    /home1       nfs       rw,hard,intr   0      0
        ...

Setting Block Size to Optimize Transfer Speeds
* The rsize and wsize mount options specify the size of the chunks of data that the client and server pass back and forth to each other.
* rsize=n will set the NFS read buffer size to n bytes (default is 4096).
* wsize=n will set the NFS write buffer size to n bytes (default is 4096).
* While mounting manually, the mount options can be specified as below
$ mount -t nfs 192.168.0.2:/home /home1 -o rsize=8192,wsize=8192,hard,intr,nolock
* intr will allow signals such as Ctrl-C to interrupt a failed NFS file operation if the file system is mounted with the hard option, and hence it is used with the hard option.
* nolock disables NFS locking (so the statd and lockd daemons are not used); lock will enable it.

7.4.4. Using Automount services (Autofs)
* The easiest way for client systems to mount NFS exports is to use autofs, which automatically mounts file systems not already mounted when the file system is first accessed.
* Autofs uses the automount daemon to mount and unmount file systems that automount has been configured to control.
* The automount daemon automatically mounts filesystems and unmounts them after a period of inactivity, thereby saving a lot of resources.
* For autofs to work, you need kernel support for autofs and the autofs package installed on the system.

7.4.4.1). Autofs Setup
* Autofs uses a set of map files to control automounting, and a master map file called /etc/auto.master which associates mount points with secondary map files that control the file systems mounted under the corresponding mount points.
* For example, consider the following /etc/auto.master config file:
        /home   /etc/auto.home
        /var    /etc/auto.var   --timeout 600
* This file associates the secondary map file /etc/auto.home with the mount point /home and the map file /etc/auto.var with the /var mount point.
* Thus, auto.home defines filesystems mounted under /home and auto.var defines file systems mounted under /var.
* Hence each entry in the master map file has 3 fields: the mount point, the full path to the secondary map file, and options that control the behaviour of the automount daemon, which are optional.
* Here, --timeout 600 means that after every 600 secs/10 mins of inactivity, the /var mount point will be unmounted automatically.

The Secondary Map Files
The secondary map file has the general syntax below:
        localdir [-options] remotefs
* localdir refers to the directory beneath the mount point where the NFS mount will be mounted.
* remotefs is the host and pathname of the NFS mount.
* options can be anything like rw,ro,soft,hard,intr,rsize,wsize etc.
Consider a sample auto.home file which is used to mount /home/carma from the host 192.168.0.2
        carma   -rw,hard,intr   192.168.0.2:/home/carma
* If /home/carma exists on the local system, it will be temporarily replaced by the contents of the NFS mount.
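As a quick check (a sketch only; the paths follow the auto.home example above), simply accessing the directory should trigger the mount:
$ ls /home/carma        # first access makes automount mount 192.168.0.2:/home/carma
$ mount | grep carma    # the NFS mount should now appear in the mount list
After the configured timeout of inactivity, automount unmounts it again.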
If the entire /home directory needs to be mounted from the NFS server, it can be done using a wild card character as below.
        *       -rw,hard,intr   192.168.0.2:/home/&
        o The above line states that any directory a user tries to access under the local /home directory (due to the asterisk character) should result in an NFS mount on the 192.168.0.2 system within its exported /home filesystem.

7.4.4.2). Starting and Stopping Autofs
* The autofs service can be started by the root user using
$ /sbin/service autofs start
* To check the status of autofs, use
$ /sbin/service autofs status
* After changing a map file, the configuration can be reloaded using
$ /sbin/service autofs reload

7.5. TCP Wrappers and Xinetd Services
TCP wrappers provide access control to a variety of services. Most modern network services, such as SSH, Telnet, and FTP, make use of TCP wrappers, which stand guard between an incoming request and the requested service.
The benefits offered by TCP wrappers are enhanced when used in conjunction with xinetd, a super service that provides
* additional access
* logging
* binding
* redirection, and
* resource utilization control.

7.5.1. TCP Wrappers
* The TCP wrappers package (tcp_wrappers) is installed by default under Red Hat Linux and provides host-based access control to network services.
* The most important component within the package is the /usr/lib/libwrap.a library. In general terms, a TCP wrapped service is one that has been compiled against the libwrap.a library.
* When a connection attempt is made to a TCP wrapped service, the service first references the hosts access files (/etc/hosts.allow and /etc/hosts.deny) to determine whether or not the client host is allowed to connect.
* It then uses the syslog daemon (syslogd) to write the name of the requesting host and the requested service to /var/log/messages.
* If a client host is allowed to connect, TCP wrappers release control of the connection to the requested service and do not interfere further with communication between the client host and the server.
* In addition to access control and logging, TCP wrappers can activate commands to interact with the client before denying or releasing control of the connection to the requested network service.
* Because TCP wrappers are a valuable addition to any server administrator's arsenal of security tools, most network services within Red Hat Linux are linked against the libwrap.a library.
* Some such applications include /usr/sbin/sshd, /usr/sbin/sendmail, and /usr/sbin/xinetd.

7.5.1.1). Advantages of TCP Wrappers
TCP wrappers provide the following advantages over other network service control techniques:
1. Transparency to both the client host and the wrapped network service — Both the connecting client and the wrapped network service are unaware that TCP wrappers are in use. Legitimate users are logged and connected to the requested service while connections from banned clients fail.
2. Centralized management of multiple protocols — TCP wrappers operate separately from the network services they protect, allowing many server applications to share a common set of configuration files for simpler management.

7.5.1.2). TCP Wrappers Configuration Files
To determine if a client machine is allowed to connect to a service, TCP wrappers reference the following two files, which are commonly referred to as hosts access files:
1. /etc/hosts.allow
2. /etc/hosts.deny
When a client request is received by a TCP wrapped service, it takes the following basic steps:
* The service references /etc/hosts.allow — The TCP wrapped service sequentially parses the /etc/hosts.allow file and applies the first rule specified for that service.
If it finds a matching rule, it allows the connection. If not, it moves on to step 2.
* The service references /etc/hosts.deny — The TCP wrapped service sequentially parses the /etc/hosts.deny file. If it finds a matching rule, it denies the connection. If not, access to the service is granted.
The following are important points to consider when using TCP wrappers to protect network services:
* Because access rules in hosts.allow are applied first, they take precedence over rules specified in hosts.deny.
* Therefore, if access to a service is allowed in hosts.allow, a rule denying access to that same service in hosts.deny is ignored.
* Since the rules in each file are read from the top down and the first matching rule for a given service is the only one applied, the order of the rules is extremely important.
* If no rules for the service are found in either file, or if neither file exists, access to the service is granted.
* TCP wrapped services do not cache the rules from the hosts access files, so any changes to hosts.allow or hosts.deny take effect immediately without restarting network services.

Formatting Access Rules
* The format for both /etc/hosts.allow and /etc/hosts.deny is identical.
* Any blank lines or lines that start with a hash mark (#) are ignored, and each rule must be on its own line.
* Each rule uses the following basic format to control access to network services:
        <daemon list>: <client list> [: <option>: <option>: ...]
* A sample rule is given below which instructs TCP wrappers to watch for connections to the FTP daemon (vsftpd) from any host in the example.com domain.
* If this rule appears in hosts.allow, the connection will be accepted. If this rule appears in hosts.deny, the connection will be rejected.
        vsftpd : .example.com
* Placing a period at the beginning of a hostname matches all hosts sharing the listed components of the name.
* The next sample hosts access rule is more complex and uses two option fields:
        sshd : .example.com \
            : spawn /bin/echo `/bin/date` access denied>>/var/log/sshd.log \
            : deny
* This sample rule states that if a connection to the SSH daemon (sshd) is attempted from a host in the example.com domain, execute the echo command (which will log the attempt to a special file), and deny the connection.
* Because the optional deny directive is used, this line will deny access even if it appears in the hosts.allow file.
* Note that in this example each option field is preceded by the backslash (\). Use of the backslash prevents failure of the rule due to length.
* ALL — Matches everything. It can be used for both the daemon list and the client list.
* Placing a period at the end of an IP address matches all hosts sharing the initial numeric groups of an IP address. The following example would apply to any host within the 192.168.x.x network:
        ALL : 192.168.
OR
        ALL : 192.168.0.0/255.255.254.0
* The following two rules allow SSH connections from client-1.example.com, but deny connections from client-2.example.com:
        sshd : client-1.example.com : allow
        sshd : client-2.example.com : deny

7.5.2. Xinetd
* The xinetd daemon is a TCP wrapped super service which controls access to a subset of popular network services including FTP, IMAP, and Telnet.
* It also provides service-specific configuration options for access control, enhanced logging, binding, redirection, and resource utilization control.
* When a client host attempts to connect to a network service controlled by xinetd, the super service receives the request and checks for any TCP wrappers access control rules.
* If access is allowed, xinetd verifies that the connection is allowed under its own access rules for that service and that the service is not consuming more than its allotted amount of resources or in breach of any defined rules.
* It then starts an instance of the requested service and passes control of the connection to it. Once the connection is established, xinetd does not interfere further with communication between the client host and the server.
The configuration files for xinetd are as follows:
1. /etc/xinetd.conf — The global xinetd configuration file.
2. /etc/xinetd.d/ directory — The directory containing all service-specific files.

7.5.2.1). /etc/xinetd.conf
* The /etc/xinetd.conf file contains general configuration settings which affect every service under xinetd's control.
* It is read once, when the xinetd service is started, so in order for configuration changes to take effect, the administrator must restart the xinetd service. Below is a sample /etc/xinetd.conf file:
        defaults
        {
                instances       = 60
                log_type        = SYSLOG authpriv
                log_on_success  = HOST PID
                log_on_failure  = HOST
                cps             = 25 30
        }
        includedir /etc/xinetd.d
These lines control various aspects of xinetd as below:
* instances — Sets the maximum number of requests xinetd can handle at once.
* log_type — Configures xinetd to use the authpriv log facility, which writes log entries to the /var/log/secure file. Adding a directive such as FILE /var/log/xinetdlog here would create a custom log file called xinetdlog in the /var/log/ directory.
* log_on_success — Configures xinetd to log if the connection is successful. By default, the remote host's IP address and the process ID of the server processing the request are recorded.
* log_on_failure — Configures xinetd to log if there is a connection failure or if the connection is not allowed.
* cps — Configures xinetd to allow no more than 25 connections per second to any given service. If this limit is reached, the service is retired for 30 seconds.
* includedir /etc/xinetd.d/ — Includes options declared in the service-specific configuration files located in the /etc/xinetd.d/ directory.

7.5.2.2). The /etc/xinetd.d/ Directory
* The files in the /etc/xinetd.d/ directory contain the configuration for each service managed by xinetd, and the names of the files correlate to the services.
* As with xinetd.conf, these files are read only when the xinetd service is started. In order for any changes to take effect, the administrator must restart the xinetd service.
* The files in the /etc/xinetd.d/ directory use the same conventions as /etc/xinetd.conf.
* The primary reason the configuration for each service is stored in a separate file is to make customization easier and less likely to affect other services.
* To get an idea of how these files are structured, consider the /etc/xinetd.d/telnet file for the telnet service:
        service telnet
        {
                flags           = REUSE
                socket_type     = stream
                wait            = no
                user            = root
                server          = /usr/sbin/in.telnetd
                log_on_failure  += USERID
                disable         = yes
        }
These lines control various aspects of the telnet service:
* service — Defines the service name, usually to match a service listed in the /etc/services file.
* flags — Sets any of a number of attributes for the connection. REUSE instructs xinetd to reuse the socket for a Telnet connection.
* socket_type — Sets the network socket type to stream.
* wait — Defines whether the service is single-threaded (yes) or multi-threaded (no).
* user — Defines what user ID the process will run under.
* server — Defines the binary executable to be launched.
* log_on_failure — Defines logging parameters for log_on_failure in addition to those already defined in xinetd.conf.
* disable — Defines whether or not the service is active.

7.5.2.3). Access Control Options
* Users of xinetd services can choose to use the TCP wrappers hosts access rules, provide access control via the xinetd configuration files, or a mixture of both.
* The xinetd hosts access control differs from the method used by TCP wrappers. While TCP wrappers place all of the access configuration within two files, /etc/hosts.allow and /etc/hosts.deny, each service's file in /etc/xinetd.d can contain its own access control rules.
* The following hosts access options are supported by xinetd:
        o only_from — Allows only the specified hosts to use the service.
        o no_access — Blocks listed hosts from using the service.
        o access_times — Specifies the time range when a particular service may be used. The time range must be stated in 24-hour format notation, HH:MM-HH:MM.
* The only_from and no_access options can use a list of IP addresses or host names, or can specify an entire network.
* Like TCP wrappers, combining xinetd access control with the enhanced logging configuration can enhance security by blocking requests from banned hosts while verbosely recording each connection attempt.
* For example, the following /etc/xinetd.d/telnet file can be used to block telnet access from a particular network group and restrict the overall time range that even allowed users can log in:
        service telnet
        {
                disable         = no
                flags           = REUSE
                socket_type     = stream
                wait            = no
                user            = root
                server          = /usr/sbin/in.telnetd
                log_on_failure  += USERID
                no_access       = 10.0.1.0/24
                log_on_success  += PID HOST EXIT
                access_times    = 09:45-16:15
        }
* In this example, when a client system from the 10.0.1.0/24 network, such as 10.0.1.2, tries to access the Telnet service, it will receive the following message:
        Connection closed by foreign host.
* In addition, its login attempt is logged in /var/log/secure as follows:
        May 15 17:38:49 boo xinetd[16252]: START: telnet pid=16256 from=10.0.1.2
        May 15 17:38:49 boo xinetd[16256]: FAIL: telnet address from=10.0.1.2
        May 15 17:38:49 boo xinetd[16252]: EXIT: telnet status=0 pid=16256
* When using TCP wrappers in conjunction with xinetd access controls, it is important to understand the relationship between the two access control mechanisms.
* The following is the order of operations followed by xinetd when a client requests a connection:
1. The xinetd daemon accesses the TCP wrappers hosts access rules through a libwrap.a library call. If a deny rule matches the client host, the connection is dropped. If an allow rule matches the client host, the connection is passed on to xinetd.
2. The xinetd daemon checks its own access control rules both for the xinetd service and the requested service. If a deny rule matches the client host, the connection is dropped. Otherwise, xinetd starts an instance of the requested service and passes control of the connection to it.

7.5.2.4). Logging Options
The following logging options are available for both /etc/xinetd.conf and the service-specific configuration files in the /etc/xinetd.d/ directory.
Below is a list of some of the more commonly used logging options:
* ATTEMPT — Logs the fact that a failed attempt was made (log_on_failure).
* DURATION — Logs the length of time the service is used by a remote system (log_on_success).
* EXIT — Logs the exit status or termination signal of the service (log_on_success).
* HOST — Logs the remote host's IP address (log_on_failure and log_on_success).
* PID — Logs the process ID of the server receiving the request (log_on_success).
* RECORD — Records information about the remote system in the case the service cannot be started. Only particular services, such as login and finger, may use this option (log_on_failure).
* USERID — Logs the remote user using the method defined in RFC 1413 for all multi-threaded stream services (log_on_failure and log_on_success).

7.5.2.5). Binding and Redirection Options
* The service configuration files for xinetd support binding the service to an IP address and redirecting incoming requests for that service to another IP address, hostname, or port.
* Binding is controlled with the bind option in the service-specific configuration files and links the service to one IP address on the system.
* Once configured, the bind option only allows requests for the proper IP address to access the service. This way different services can be bound to different network interfaces based on need.
* This is particularly useful for systems with multiple network adapters or with multiple IP addresses configured. On such a system, insecure services, like
Telnet, can be configured to listen only on the interface connected to a private network and not to the interface connected to the Internet.
* The redirect option accepts an IP address or hostname followed by a port number.
* It configures the service to redirect any requests for this service to the specified host and port number.
* This feature can be used to
        o point to another port number on the same system,
        o redirect the request to a different IP address on the same machine,
        o shift the request to a totally different system and port number, or any combination of these options.
        o In this way, a user connecting to a certain service on a system may be rerouted to another system with no disruption.
* The xinetd daemon is able to accomplish this redirection by spawning a process that stays alive for the duration of the connection between the requesting client machine and the host actually providing the service, transferring data between the two systems.
* But the advantages of the bind and redirect options are most clearly evident when they are used together. By binding a service to a particular IP address on a system and then redirecting requests for this service to a second machine that only the first machine can see, an internal system can be used to provide services for a totally different network.
* For example, consider a system that is used as a firewall with this setting for its Telnet service:
        service telnet
        {
                socket_type     = stream
                wait            = no
                server          = /usr/sbin/in.telnetd
                log_on_success  += DURATION USERID
                log_on_failure  += USERID
                bind            = 123.123.123.123
                redirect        = 10.0.1.13 23
        }
* The bind and redirect options in this file ensure that the Telnet service on the machine is bound to the external IP address (123.123.123.123), the one facing the Internet.
* In addition, any requests for the Telnet service sent to 123.123.123.123 are redirected via a second network adapter to an internal IP address (10.0.1.13) that only the firewall and internal systems can access.
* The firewall then sends the communication between the two systems, and the connecting system thinks it is connected to 123.123.123.123 when it is actually connected to a different machine.
* This feature is particularly useful for users with broadband connections and only one fixed IP address.
* When using Network Address Translation (NAT), the systems behind the gateway machine, which are using internal-only IP addresses, are not available from outside the gateway system.
* However, when certain services controlled by xinetd are configured with the bind and redirect options, the gateway machine can act as a type of proxy between outside systems and a particular internal machine configured to provide the service.
* In addition, the various xinetd access control and logging options are also available for additional protection, such as limiting the number of simultaneous connections for the redirected service.

8. SHELL SCRIPTING

A shell script is a series of commands written in a plain text file. Some of its uses are:
* A shell script can take input from the user or a file and output it on the screen.
* Useful to create your own commands.
* Saves lots of time.
* To automate day-to-day tasks, for eg: jobs scheduled through the cron daemon.
* System administration tasks can also be automated using shell scripts.

8.1. Shell Scripting Basics
* The shell that is normally used is the Bash shell.
* After writing a shell script, set execute permission for your script as follows
$ chmod +x your-script-name
$ chmod 755 your-script-name
* Execute your script using any of the options below:
$ bash your-script-name
$ sh your-script-name
$ ./your-script-name
* Use any editor, like vi, to write the shell script.
* For the shell script file try to give a file extension such as .sh, so it can be easily identified by you as a shell script.
* A sample script is given below which will print user information such as who is currently logged in, the current date & time etc.
$ vi userinfo
        #
        # Script to print user information like who is
        # currently logged in, current date & time
        #
        clear
        echo "Hello $USER"
        echo "Today is \c ";date
        echo "Number of user login : \c" ; who | wc -l
        echo "Calendar"
        cal
        exit 0

8.1.1. Variables in Shell
In the Linux shell, there are two types of variables:
1. System variables - Created and maintained by Linux itself. This type of variable is defined in CAPITAL LETTERS.
2. User defined variables (UDV) - Created and maintained by the user. This type of variable is defined in lower case letters.
* Some of the important system variables and their meanings are given below:
        System Variable                         Meaning
        BASH=/bin/bash                          Your shell name
        BASH_VERSION=1.14.7                     Your shell version
        COLUMNS=80                              No. of columns for your screen
        HOME=/home/carma                        Your home directory
        LINES=25                                No. of rows for your screen
        LOGNAME=root                            The username you logged in with
        OSTYPE=Linux                            The OS type
        PATH=/usr/bin:/sbin:/bin:/usr/sbin      Your path settings
        PWD=/home/carma                         The current working directory
        SHELL=/bin/bash                         Your shell name
        USERNAME=carma                          User who is currently logged in to this machine
* The above settings can be printed using the echo command
$ echo $USERNAME
* echo will echo the string(s) specified to the standard output.
* $ is used to specify the shell variable.
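For instance, several of these system variables can be combined in one echo statement; the exact output will of course differ from system to system:
$ echo "You are $LOGNAME, working in $PWD with the shell $SHELL"
You are root, working in /home/carma with the shell /bin/bash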
8.1.1.1). Defining User-defined variables
* To define a UDV, use the following syntax:
$ variablename=value
* 'value' is assigned to the given 'variablename'.
Example:
* To define a variable called 'no' having the value 10
$ no=10
* To define a variable called 'vehicle' having the value Bus
$ vehicle=Bus
* To define a variable called n having the value 10
$ n=10
* To print the contents of the variable 'vehicle' type
$ echo $vehicle
* To print the contents of the variable n
$ echo $n

8.1.1.2). Rules for naming variables
1. A variable name must begin with a letter or the underscore character (_), followed by one or more alphanumeric characters. For eg: HOME, System_Version.
2. Don't put spaces on either side of the equal sign when assigning a value to a variable. Eg: $ no=10 is fine, but there will be problems with any of the following variable declarations:
$ no =10
$ no= 10
$ no = 10
3. Variables are case-sensitive, just like filenames in Linux.
4. You can define a NULL variable as follows (a NULL variable is a variable which has no value at the time of definition). For e.g:
$ vech=
$ vech=""
5. Do not use ?, * etc. inside your variable names.

8.1.1.3). The "echo" command
* Use the echo command to display text or the value of a variable.
* Some of the options which can be used with echo are given below.
        o '-n' : Do not output the trailing new line.
        o '-e' : Enable interpretation of the following backslash escape characters in the strings:
                \b backspace
                \c suppress trailing new line
                \n new line
                \r carriage return
                \t horizontal tab
                \\ backslash
* Eg: $ echo -e "An apple a day keeps away \t\t doctor\n"

8.1.2. Shell arithmetic
* Used to perform arithmetic operations.
* Syntax:
$ expr op1 math-operator op2
* Examples:
$ expr 1 + 3          : Addition
$ expr 2 - 1          : Subtraction
$ expr 10 / 2         : Division
$ expr 20 % 3         : Remainder
$ expr 10 \* 3        : Multiplication
$ echo `expr 6 + 3`   : echo the result of an arithmetic expression
Note:
* expr 20 % 3 - Remainder. Read as 20 mod 3; the remainder is 2.
* expr 10 \* 3 - For multiplication use \* and not *, since * is a wild card.
* For the last statement note the following points:
        o Firstly, before the expr keyword we used the ` (back quote) sign and not the single quote (') sign.
        o The back quote is generally found on the key under the tilde (~) on a PC keyboard, or above the TAB key.
        o Second, expr also ends with a ` i.e. back quote.
        o Here expr 6 + 3 is evaluated to 9, and the echo command prints 9 as the sum.
        o If you give echo "expr 6 + 3" or echo 'expr 6 + 3', it will print expr 6 + 3.
        o A sample script which performs an arithmetic expression is given below.
$ vi arith.sh
        #!/bin/sh
        # Perform some arithmetic
        x=24
        y=4
        Result=`expr $x \* $y`
        echo "$x times $y is $Result"

8.1.3. Understanding Quotes inside the Shell
        Quote   Name            Meaning
        "       Double Quotes   Anything enclosed in double quotes removes the meaning of those characters (except \, ` and $).
        '       Single Quotes   Anything enclosed in single quotes remains unchanged.
        `       Back Quote      To execute a command.
Eg: Some examples to understand the meaning of the quotes and their output are given below.
$ echo "Today is `date`"
Today is Thu Mar 10 15:13:49 IST 2005
$ echo "$USERNAME"
root
$ echo '$USERNAME'
$USERNAME

8.1.4. Finding the Exit Status of a Command Execution
By default in Linux, when a particular command/shell script is executed, it returns a value (zero or nonzero) which is used to see whether the command or shell script executed successfully or not.
* If the return value is zero (0), the command is successful.
* If the return value is nonzero, the command is not successful or some sort of error occurred while executing the command/shell script.
* This value is known as the Exit Status. To determine this exit status you can use $?, which is a special variable of the shell.
* For e.g: This example assumes that unknownfile does not exist on your hard drive.
$ rm unknownfile
It will show an error as follows
rm: cannot remove `unknownfile': No such file or directory
and after that if you give the command
$ echo $?
it will print a nonzero value to indicate the error. Now give the command
$ ls
$ echo $?
It will print 0 to indicate that the command is successful.

8.1.5. Reading input from the Standard Input
* The read statement is used to get input (data from the user) from the standard input and store the data in a variable.
* Here's a sample script which does this.
$ vi sayhello.sh
        #
        # Script to read your name from the key-board
        #
        echo "Your first name please:"
        read fname
        echo "Hello $fname, Lets be friend!"
Run it as follows:
$ chmod 755 sayhello.sh
$ ./sayhello.sh

8.1.6. Command Line Arguments
* When you run the command $ ls file file1 file2, ls is the command and file, file1, file2 are the command line arguments passed to it.
* Hence the command above has 3 command line arguments.
* $# holds the number of arguments specified on the command line. And $* or $@ refer to all arguments passed to the script.
        o $* expands to a single variable containing all the command line parameters separated by spaces.
        o $@ expands to a list of separate words, each containing one of the command line parameters.
        o $# is the number of parameters, excluding $0.
* Hence $1, $2, $3 refer to file, file1 and file2.
* If you are running a shell script using the command line below
$ myshell.sh file1 dir1
* The shell script name myshell.sh is referred to as $0, file1 is $1 and dir1 is $2.
* These command line arguments to the shell script are known as "positional parameters".

8.1.7. Structured Language Constructs
8.1.7.1). Decision Making
Any type of comparison in the Linux shell gives only two answers: one is YES and the other is NO.

In the Linux shell
        Value               Meaning     Example
        Zero Value (0)      Yes/True    0
        NON-ZERO Value      No/False    -1, 32, 55 - anything but not zero

* The test command or [ expr ] is used to see if an expression is true; if it is true it returns zero (0), otherwise it returns nonzero for false.
* Syntax: test expression OR [ expression ]
* For mathematical comparisons, use the following operators in a shell script, either with the test statement or with [ expr ] together with the if command:
        Operator   Meaning                    Normal statement   With test          With [ expr ]
        -eq        Equal to                   5==6               if test 5 -eq 6    if [ 5 -eq 6 ]
        -ne        Not equal to               5!=6               if test 5 -ne 6    if [ 5 -ne 6 ]
        -lt        Less than                  5<6                if test 5 -lt 6    if [ 5 -lt 6 ]
        -le        Less than or equal to      5<=6               if test 5 -le 6    if [ 5 -le 6 ]
        -gt        Greater than               5>6                if test 5 -gt 6    if [ 5 -gt 6 ]
        -ge        Greater than or equal to   5>=6               if test 5 -ge 6    if [ 5 -ge 6 ]

* For string comparisons use:
        Operator               Returns True if
        string1 = string2      string1 is equal to string2
        string1 != string2     string1 is NOT equal to string2
        string1                string1 is NOT NULL or not defined
        -n string1             string1 is NOT NULL and does exist
        -z string1             string1 is NULL and does exist (has length 0)

* The shell also tests for file and directory types:
        Operator    Returns True if
        -s file     File exists and is a non-empty file
        -f file     File exists and is a normal file and not a directory
        -d dir      File exists and is a directory
        -w file     File is writeable. You have write permission on the file.
        -r file     File is readable.


    * Logical operators are used to combine two or more conditions at a time:
Operator                       Meaning
! expression                   Logical NOT
expression1 -a expression2     Logical AND
expression1 -o expression2     Logical OR
8.1.7.2). Flow Control
if...else...fi
Syntax:
if condition
then
        if the condition is zero (true - 0),
        execute all commands up to the else statement
else


        if the condition is not true,
        execute all commands up to fi
fi
Example:
if test $1 -gt 0
then
        echo "$1 number is positive"
else
        echo "$1 number is negative"
fi
Nested if-else-fi
Syntax:
if condition
then
        if condition
        then
                ..... .. do this
        else
                .... .. do this
        fi
else
        ... .....


        do this
fi
Multilevel if-then-else
Syntax:
if condition
then
        condition is zero (true - 0);
        execute all commands up to the elif statement
elif condition1
then
        condition1 is zero (true - 0);
        execute all commands up to the elif statement
elif condition2
then
        condition2 is zero (true - 0);
        execute all commands up to the elif statement
else
        none of condition, condition1, condition2 is true
        (i.e. all of the above are nonzero or false);
        execute all commands up to fi
fi
Example:
#!/bin/sh


# Script to test if..elif...else
if [ $1 -gt 0 ]; then
        echo "$1 is positive"
elif [ $1 -lt 0 ]
then
        echo "$1 is negative"
elif [ $1 -eq 0 ]
then
        echo "$1 is zero"
else
        echo "Oops! $1 is not a number, give a number"
fi
8.1.7.3). Loop Constructs
FOR Loop
Syntax:
for { variable name } in { list }
do
        execute one pass for each item in the list, until the list is finished
        (repeating all statements between do and done)
done
Example: Before trying to understand the syntax above, try the following script:
$ cat testfor
for i in 1 2 3 4 5
do
        echo "Welcome $i times"


done
    * The for loop first creates the variable i and assigns it a number from the list of numbers 1 to 5. The shell then executes the echo statement for each assignment of i (this is usually known as iteration).
    * This process continues until all the items in the list are finished, so it repeats the echo statement 5 times.
Nesting of For Loops
    * To understand the nesting of for loops, see the following shell script.
$ vi nestedfor.sh
for (( i = 1; i <= 5; i++ ))      ### Outer for loop ###
do
        for (( j = 1; j <= 5; j++ ))      ### Inner for loop ###
        do
                echo -n "$i "
        done
        echo ""       #### print the new line ###
done
Run the above script as follows:
$ chmod +x nestedfor.sh
$ ./nestedfor.sh
1 1 1 1 1


2 2 2 2 2
3 3 3 3 3
4 4 4 4 4
5 5 5 5 5
    * Here, for each value of i the inner loop is cycled through 5 times, with the variable j taking values from 1 to 5.
    * The inner for loop terminates when the value of j exceeds 5, and the outer loop terminates when the value of i exceeds 5.
The while Loop
Syntax:
while [ condition ]
do
        command1
        command2
        command3
        .. ....
done
    * The loop is executed as long as the given condition is true.
    * The example below shows a shell script to sum the integers between 1 and 100:
#!/bin/bash
# Simple script to demonstrate while and arithmetic


count=0
sum=0
while [ $count -lt 101 ]; do
        sum=$(( $sum + $count ))
        count=$(( $count + 1 ))
done
echo "Sum = $sum"
The Case Statement
    * The case statement is a good alternative to the multilevel if-then-else-fi statement. It enables you to match several values against one variable, and it is easier to read and write.
Syntax:
case $variable-name in
        pattern1)
                command
                ...
                command;;
        pattern2)
                command
                ...
                command;;
        patternN)
                command
                ...
                command;;
        *)
                command
                ...


                command;;
esac
    * The $variable-name is compared against the patterns until a match is found. The shell then executes all the statements up to the two semicolons that are next to each other. The default is *), which is executed if no match is found.
For e.g. write a script as follows:
Example:
rental=$1
case $rental in
        "car") echo "For $rental Rs.20 per k/m";;
        "van") echo "For $rental Rs.10 per k/m";;
        "jeep") echo "For $rental Rs.5 per k/m";;
        "bicycle") echo "For $rental 20 paisa per k/m";;
        *) echo "Sorry, I cannot get a $rental for you";;
esac
8.1.7.4). Debugging a Shell Script
    * Use the '-x' or '-v' option to show debug results.
    * -x shows the exact statements being executed, with variable values substituted.
    * Use the -v option to debug a complex shell script; it prints each line as it is read.
$ sh -x sample.sh
$ sh -v sample.sh
A short illustration of the -x trace output is given below.
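As a minimal sketch (sample.sh and its contents are illustrative, and the exact trace format can vary slightly between shells), a two-line script run with -x prints each command, prefixed with +, after variable substitution:
$ cat sample.sh
name="carma"
echo "Hello $name"

$ sh -x sample.sh
+ name=carma
+ echo 'Hello carma'
Hello carma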


8.2. Advanced Shell Scripting
8.2.1. /dev/null
    * /dev/null is used to discard unwanted output of a program.
    * This is a special Linux file to which any unwanted output from a program/command can be sent.
Syntax: command > /dev/null
Example:
$ ls > /dev/null
The output of the above command is not shown on screen; it is sent to this special file.
$ cat /dev/null > /var/log/messages
This will empty the /var/log/messages file.
8.2.2. Conditional Execution using && and ||
    * The control operators used for conditional execution are && (read as AND) and || (read as OR).
Syntax for AND:
command1 && command2
    * command2 is executed if, and only if, command1 returns an exit status of zero.
Syntax for OR:
command1 || command2
    * command2 is executed if, and only if, command1 returns a non-zero exit status.
You can use a combination of both as follows:


Syntax:
command1 && command2 (run if exit status is zero) || command3 (run if exit status is non-zero)
    * If command1 executes successfully then the shell runs command2, and if command1 is not successful then command3 is executed.
Example:
$ rm myf && echo "File is removed successfully" || echo "File is not removed"
8.2.3. I/O Redirection and File Descriptors
Standard File   File Descriptor Number   Use               Example
stdin           0                        Standard input    Keyboard
stdout          1                        Standard output   Screen
stderr          2                        Standard error    Screen
Eg:
$ echo "There is an Error" 1>&2
    * The 1>&2 at the end of the echo statement directs the standard output (stdout) to the standard error (stderr) device.
A couple of further redirection examples are sketched below.
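As a brief sketch (the file names out.log, err.log and all.log are illustrative), the same descriptor numbers can be used to redirect output and errors to files:
# Send standard output to out.log and standard error to err.log
$ ls /etc /nosuchdir > out.log 2> err.log

# Send both standard output and standard error to the same file
$ ls /etc /nosuchdir > all.log 2>&1

# Discard the error messages only
$ ls /etc /nosuchdir 2> /dev/null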


8.2.4. Essential Utilities
8.2.4.1). cut
    * cut is used for selecting portions (columns or bytes) of a file.
    * Syntax: cut -f{field number or byte number} {file-name}
    * $ cut -f2 testfile      ---- prints the contents of the second column
    * $ cut -f2,3 testfile    ---- prints the contents of the second and third columns
    * For example:
$ cat testfile
Sr.No Name
11 Vivek
12 Renuka
13 Prakash
14 Ashish
15 Rani


$ cut -f2 testfile
Vivek
Renuka
Prakash
Ashish
Rani
    * $ cut -b2,3 testfile      ----- will print the 2nd and 3rd bytes
    * $ cut -b1,2-10 testfile   ----- will print the 1st byte, and the 2nd to 10th bytes
$ cut -b1,2 testfile
Sr
11
12
13
14
15
$ cut -b10-30 testfile
Name
Vivek
Renuka
Prakash
Ashish
Rani
The byte, character or field numbers can be specified in the various formats below:


    * N      the Nth byte, character or field, counted from 1
    * N-     from the Nth byte, character or field, to the end of the line
    * N-M    from the Nth to the Mth (inclusive) byte, character or field
    * -M     from the first to the Mth (inclusive) byte, character or field
A note on delimiter-based cutting is given below.
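As a side note (standard cut behaviour, not an example from the original text), the -d option changes the field delimiter from the default tab, which is handy for files such as /etc/passwd that use ':' as a separator:
# Print the first field (user name) of each line in /etc/passwd
$ cut -d':' -f1 /etc/passwd
root
bin
daemon
...

# Print the user name and login shell, keeping ':' as the output delimiter
$ cut -d':' -f1,7 /etc/passwd
root:/bin/bash
...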


8.2.4.2). paste
    * The paste utility is useful to put together textual information located in various files.
    * Syntax: paste {file1} {file2}
    * Example:
$ cat /file1
Vivek
Renuka
Prakash
Ashish
Rani
$ cat /file2
67
55
96
36
67
$ paste /file1 /file2
Vivek 67
Renuka 55
Prakash 96
Ashish 36
Rani 67
    * paste therefore reads the contents of the files line by line and concatenates the first lines, the second lines, and so on till the nth line of both files.
8.2.4.3). join
    * The join utility joins lines from separate files.
    * Syntax: join {file1} {file2}
    * Example:
$ cat /file1
Sr.No Name
11 Vivek
12 Renuka
13 Prakash
14 Ashish
15 Rani
$ cat /file2
Sr.No Mark
11 67
12 55
13 96
14 36


15 67
$ join /file1 /file2
11 Vivek 67
12 Renuka 55
13 Prakash 96
14 Ashish 36
15 Rani 67
    * join will only work if there is a common field in both files and its values are identical in both.
8.2.4.4). tr
    * tr translates one range of characters (e.g. lower-case a to z) into another (e.g. capital A to Z).
    * General Syntax: tr {pattern-1} {pattern-2}
    * $ tr "[a-z]" "[A-Z]"
    * After executing the command above, type text in lower case and it will be converted to upper case. CTRL + C will terminate. A non-interactive variant is sketched below.
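As a small sketch (the file name names.txt is illustrative), tr can also take its input from a redirected file or a pipe instead of the keyboard:
$ cat names.txt
vivek
renuka

# Convert the file contents to upper case
$ tr "[a-z]" "[A-Z]" < names.txt
VIVEK
RENUKA

# The same thing using a pipe
$ cat names.txt | tr "[a-z]" "[A-Z]"
VIVEK
RENUKA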


8.2.4.5). uniq
    * uniq is used to remove duplicate lines from a file.
    * Syntax: uniq {file-name}
    * The uniq utility compares only adjacent lines, so duplicate lines must be next to each other in the file.
    * Otherwise you can sort the file and pipe it to uniq. Eg:
$ sort file1 | uniq
8.2.5. Awk Utility
The awk utility is a powerful data manipulation/scripting programming language.
    * General syntax of awk: awk -f {awk program file} filename
    * awk reads the input from the given file (or from stdin) one line at a time; each line in the file is then compared with the specified pattern.
    * If the pattern matches a line, the given action is taken. Patterns can be regular expressions.
8.2.5.1). Understanding Awk - Basic Examples
The examples below are based on the text file 'testfile1', the contents of which are listed here:
SrNo Product Qty Unit Price
1 Pen 5 20.00
2 Rubber 10 2.00
3 Pencil 3 3.50
4 Clock 2 45.50
Now give the following command line:
$ awk '{ print $1 "." $2 "--> Rs." $3 * $4 }' testfile1
SrNo.Product--> Rs.0
1.Pen--> Rs.100
2.Rubber--> Rs.20


3.Pencil--> Rs.10.5
4.Clock--> Rs.91
    * The print command is used to print the contents of variables or text enclosed in " ".
    * Here $1, $2, $3, $4 are all special variables containing the values of fields or columns. Therefore $1 is the value of the first field for each line in the file.
    * Finally we are directly doing a calculation using $3 * $4, i.e. multiplication of the third and fourth fields in the text file.
    * Note that "--> Rs." is a string which is printed as it is.
$ awk '{ print $2 }' testfile1
Product
Pen
Rubber
Pencil
Clock
$ awk '{ print $0 }' testfile1
SrNo Product Qty Unit Price
1 Pen 5 20.00
2 Rubber 10 2.00
3 Pencil 3 3.50
4 Clock 2 45.50
    * $0 is a special variable for awk which refers to the entire record, i.e. the entire line.


    * The '-f' option instructs awk to read its commands from a given awk file.
    * Awk also uses some predefined variables like NR and NF, which mean Number of the input Record and Number of Fields in the input record respectively.
    * An example which uses both of these is given below. First, create an awk file called def_var with the contents below.
$ cat def_var
{ print "Printing Rec. #" NR "(" $0 "),And # of fields for this record is " NF }
Then run it as follows; the result is printed below.
$ awk -f def_var testfile1
Printing Rec. #1(1 Pen 5 20.00),And # of fields for this record is 4
Printing Rec. #2(2 Rubber 10 2.00),And # of fields for this record is 4
Printing Rec. #3(3 Pencil 3 3.50),And # of fields for this record is 4
Printing Rec. #4(4 Clock 2 45.50),And # of fields for this record is 4
    * Some of the other awk predefined variables are:
Awk Variable   Meaning
FILENAME       Name of the current input file


RS             Input record separator character (default is newline)
OFS            Output field separator string (blank is the default)
ORS            Output record separator string (default is newline)
NF             Number of fields/columns in the input record
NR             Number of the input record (1 for the 1st record, 2 for the 2nd record, etc.)
OFMT           Output format for numbers
FS             Field separator character (blank and tab is the default); it can also be set from the command line, as sketched below.
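As a short aside (standard awk behaviour, not an example from the original text), the field separator can be set on the command line with the -F option, which is equivalent to setting FS; this is useful for colon-separated files such as /etc/passwd:
# Print the user name (field 1) and login shell (field 7) from /etc/passwd
$ awk -F: '{ print $1 " uses " $7 }' /etc/passwd
root uses /bin/bash
bin uses /sbin/nologin
...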


8.2.5.2). Doing Arithmetic and User-Defined Variables with awk
You can easily do arithmetic with awk as follows:
$ vi math
{
print $1 " + " $2 " = " $1 + $2
print $1 " - " $2 " = " $1 - $2
print $1 " / " $2 " = " $1 / $2
print $1 " x " $2 " = " $1 * $2
print $1 " mod " $2 " = " $1 % $2
}
Run it and type the two numbers on standard input:
$ awk -f math
20 3
20 + 3 = 23
20 - 3 = 17
20 / 3 = 6.66667
20 x 3 = 60
20 mod 3 = 2
(Press CTRL + D to terminate)
You can also define your own variables in an awk program, as follows:
$ cat math1
{
no1 = $1
no2 = $2
ans = $1 + $2
print no1 " + " no2 " = " ans
}
Run the program as follows:
$ awk -f math1
1 5
1 + 5 = 6
8.2.6. The sed Utility
    * sed is a stream editor.
    * A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline).


    * sed works by making only one pass over the input(s), and is consequently more efficient.
    * But it is sed's ability to filter text in a pipeline which particularly distinguishes it from other types of editors.
    * General syntax of sed:
$ sed -option 'general expression' [data-file]
$ sed -option sed-script-file [data-file]
    * sed means start the sed command.
    * The options that can be specified are given below:
Option   Meaning                                                            Example
-e       Read the sed command from the command line                         $ sed -e 'sed-commands' data-file-name
-f       Read the sed commands from a sed script file                       $ sed -f sed-script-file data-file-name
-n       Suppress the output of the sed command. When -n is used you        $ sed -n '/^\*..$/p' demofile2
         must use the p command or print flag.
    * The most basic and commonly used operators in the sed toolkit are printing (to stdout), deletion, and substitution. Their specifications are listed below.
Operator                               Name         Implication
[address-range]/p                      print        Print [specified address range]
[address-range]/d                      delete       Delete [specified address range]
s/pattern1/pattern2/                   substitute   Substitute pattern2 for the first instance of pattern1 in a line
[address-range]/s/pattern1/pattern2/   substitute   Substitute pattern2 for the first instance of pattern1 in a line, over address-range
[address-range]/y/pattern1/pattern2/   transform    Replace any character in pattern1 with the corresponding character in pattern2, over address-range (equivalent of tr)
g                                      global       Operate on every pattern match within each matched line of input
    * Examples of sed operators:
Notation             Meaning
8d                   Delete the 8th line of input.
/^$/d                Delete all blank lines.
1,/^$/d              Delete from the beginning of the input up to, and including, the first blank line.
/Jones/p             Print only lines containing "Jones" (with the -n option).
s/Windows/Linux/     Substitute "Linux" for the first instance of "Windows" found in each input line.
s/BSOD/stability/g   Substitute "stability" for every instance of "BSOD" found in each input line.
s/ *$//              Delete all spaces at the end of every line.
s/00*/0/g            Compress all consecutive sequences of zeroes into a single zero.
/GUI/d               Delete all lines containing "GUI".
s/GUI//g             Delete all instances of "GUI", leaving the remainder of each line intact.
8.2.6.1). Sample sed Commands/Scripts
    * You can redirect the output of a sed command to a file as follows:
$ sed 's/Linux/UNIX/' file1 > file.out
    * Deleting blank lines from a file: using sed you can delete all blank lines from a file as follows:
$ sed '/^$/d' demofile1
    * The following sed command takes its input from the who command and checks whether a particular user is logged in. Here the -n option suppresses the output of the sed command, /carma/ is the pattern that we are looking for, and if the pattern is found the line is printed using the p command of sed.
$ who | sed -n '/carma/p'
Sample Script1
To remove all blank lines and convert multiple spaces into a single space, use the sed script 'sedscript' below.


$ cat sedscript
/^$/d
s/  */ /g
And run it on a demo file as below:
$ sed -f sedscript demofile
    * /^$/d : finds all blank lines and deletes them using the d command.
    * s/  */ /g : finds two or more consecutive blank spaces and replaces them with a single blank space.
Sample Script2
The command below will search for every instance of 1001 in the demo file:
$ sed -n '/10\{2\}1/p' demofile2
    * \{n\} : exactly n occurrences will be matched. So 0\{2\} looks for 2 occurrences of zero, and /10\{2\}1/ matches a 1 followed by two zeroes and another 1.
    * \{n,\} : matches at least n occurrences. So 0\{2,\} looks for at least 2 occurrences of zero.
    * \{n,m\} : matches any number of occurrences between n and m. Therefore the command below
$ sed -n '/10\{2,4\}1/p' demofile2
    * will match "1001", "10001" and "100001" but not "101" or "10000001", as 0\{2,4\} matches 2 to 4 occurrences of zero.


Sample Script3
The command below will match only lines which consist of exactly 3 asterisks (*):
$ sed -n '/^\*\*\*$/p' demofile2
    * \* searches for a literal *, and \*\*\* matches 3 *'s; because the pattern is ^\*\*\*$, it will only match lines which contain exactly three asterisks.
    * /p prints the matching lines.
    * The same thing can be done using the command line below:
$ sed -n '/^\*\{3\}$/p' demofile2
9. INSTALLING LINUX SOFTWARE/KERNEL
9.1. RPM Installations
    * RPM is a widely used tool for delivering software for Linux. Users can easily install an RPM-packaged product.
    * RPM (Red Hat Package Manager) is the most common software package manager used for Linux distributions. Because it allows you to distribute software already compiled, a user can install the software with a single command.
9.1.1. Getting the RPM Source
The commonly used sources for RPMs are:
    1. Your RedHat/Fedora installation CDs (but they may not be updated to the latest version).
    2. Download from the redhat.com site using a browser or ftp program. The RedHat site will have only their approved software. A good general-purpose source for additional software is www.rpmfind.net.
    3. The command wget can be used for downloading via the http or ftp protocol. The command lines that can be used are:


$ wget http://redhat.com/download/pub/fedora/linux/core/i386/RPMS/openssh-3.6.1p2-34.i386.rpm
OR
$ wget ftp://ftp.redhat.com/download/pub/fedora/linux/core/i386/RPMS/openssh-3.6.1p2-34.i386.rpm
    4. Using yum, if it is installed. On Fedora systems it is installed by default.
9.1.2. Manually Installing RPMs
    * Download the RPMs (which usually have a file extension ending with .rpm) using any of the methods above into a temporary directory, such as /opt.
    * The next step is to issue the rpm -ivh or rpm -Uvh command to install the package.
          o The -i qualifier installs an RPM package.
          o The -U qualifier upgrades an RPM to the latest version.
          o The -h qualifier prints a row of hash (#) characters during the installation.
          o The -v qualifier prints verbose status messages while the command is being executed.
    * Here is an example of a typical RPM installation command to install the MySQL server package:
$ rpm -ivh mysql-server-3.23.58-9.i386.rpm
Preparing...      ####################### [100%]
1:mysql-server    ####################### [100%]
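For completeness, a sketch of an upgrade using the -U qualifier described above (the package file name and output are illustrative):
# Upgrade an already installed package to a newer version;
# if the package is not yet installed, -U behaves like -i and installs it
$ rpm -Uvh openssh-3.6.1p2-34.i386.rpm
Preparing...      ####################### [100%]
1:openssh         ####################### [100%]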


9.1.3. RPM Installation Errors
    * Sometimes the installation of RPM software doesn't go according to plan and you need to take corrective action. This section shows you how to recover from some of the most common errors you'll encounter.
    * Failed Dependencies: Sometimes RPM installations will fail with "Failed dependencies" errors, which actually mean that a prerequisite RPM needs to be installed.
    * For example, in the example below, the rpm installation of the MySQL database server application fails because the mysql client RPM, on which it depends, needs to be installed beforehand.
$ rpm -ivh mysql-server-3.23.58-9.i386.rpm
error: Failed dependencies:
libmysqlclient.so.10 is needed by mysql-server-3.23.58-9
    * To get around this problem you can run the rpm command with the --nodeps option to disable dependency checks:
$ rpm -ivh --nodeps mysql-server-3.23.58-9.i386.rpm
Preparing...      ####################### [100%]
1:mysql-server    ####################### [100%]
    * You may also use the --force option to force the rpm installation, overriding conflicts with packages or files that are already installed:
$ rpm -ivh --nodeps --force mysql-server-3.23.58-9.i386.rpm
    * RPM packaged files contain certain digests and signatures which ensure the integrity and origin of the package. Digital signatures cannot be verified without a public encryption key, which can be imported using the command line below:
$ rpm --import /usr/share/rhn/RPM-GPG-KEY


9.1.4. Installing Source RPMs
Sometimes the packages you want to install need to be compiled in order to match your kernel version. This requires you to use source RPM files:
    * Download the source RPMs, which usually have a file extension ending with .src.rpm.
    * Run the following commands as root. Compiling and installing source RPMs can be done simply with the rpmbuild command.
    * rpmbuild is used to build both binary and source software packages.
    * Packages come in two varieties: binary packages, used to encapsulate software to be installed, and source packages, containing the source code and recipe necessary to produce binary packages.
$ rpmbuild --rebuild filename.src.rpm
    * Here is an example in which we build the tac_plus package.
$ rpmbuild --rebuild tac_plus-4.0.3-2.src.rpm
Installing tac_plus-4.0.3-2.src.rpm
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.61594
+ umask 022
+ cd /usr/src/redhat/BUILD
+ cd /usr/src/redhat/BUILD
+ rm -rf tac_plus-4.0.3
+ /usr/bin/gzip -dc /usr/src/redhat/SOURCES/tac_plus-4.0.3.tgz
+ tar -xvvf -
...
...
+ umask 022
+ cd /usr/src/redhat/BUILD
+ rm -rf tac_plus-4.0.3


+ exit 0
    * The compiled RPM files can now be found in one of the architecture subdirectories under the /usr/src/redhat/RPMS directory.
    * For example, if you compiled an i386 architecture version of the RPM, it will be placed in the i386 subdirectory (/usr/src/redhat/RPMS/i386).
    * You will then have to install the compiled RPMs found in their respective subdirectories as you normally would.
9.1.5. Listing Installed RPMs
    * The rpm -qa command will list all the packages installed on your system.
$ rpm -qa
    * You can use the rpm -q package-name command to find an installed package if you know the package name.
$ rpm -q openssh
    * If you are not sure of the package name, the command line that can be used is:
$ rpm -qa | grep ssh
9.1.6. Listing Files Associated with RPMs
    * Sometimes you'll find yourself installing software that terminates with an error requesting the presence of a particular file. In many cases the installation program doesn't state the RPM package in which the file can be found. It is therefore important to be able to determine the origin of certain files, by listing the contents of the RPMs in which you suspect the files might reside.
9.1.6.1). Listing Files for Already Installed RPMs


    * You can use the -ql qualifier to list all the files associated with an installed RPM.
    * In this example we first test that the openssh package is installed, using the -q qualifier.
$ rpm -q openssh
    * And then we use the -ql qualifier to get the file listing.
$ rpm -ql openssh
/etc/ssh
/etc/ssh/moduli
/usr/bin/ssh-keygen
/usr/libexec/openssh
/usr/libexec/openssh/ssh-keysign
/usr/share/doc/openssh-3.5p1
/usr/share/doc/openssh-3.5p1/CREDITS
/usr/share/doc/openssh-3.5p1/ChangeLog
/usr/share/doc/openssh-3.5p1/INSTALL
/usr/share/doc/openssh-3.5p1/LICENCE
/usr/share/doc/openssh-3.5p1/OVERVIEW
/usr/share/doc/openssh-3.5p1/README
/usr/share/man/man8/ssh-keysign.8.gz
9.1.6.2). Listing Files in RPM Files
Suppose you have downloaded an rpm and you want to see all the files inside the RPM archive; you can do this using the -qpl qualifier as below:
$ rpm -qpl dhcp-3.0pl1-23.i386.rpm
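A related query worth knowing (standard rpm behaviour, not covered in the text above) is -qi, which prints summary information about a package; with -qip it works on a package file that has not been installed yet:
# Show name, version, summary and description of an installed package
$ rpm -qi openssh

# The same, but for a downloaded .rpm file that is not yet installed
$ rpm -qip dhcp-3.0pl1-23.i386.rpm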


9.1.6.3). Listing the RPM to Which a File Belongs
    * You might need to know the RPM that was used to install a particular file. This is useful when you have a suspicion about the function of a file but are not entirely sure.
    * For example, the MySQL RPM uses the /etc/my.cnf file as its configuration file, not a file named /etc/mysql.conf as you'd normally expect. You can check the rpm to which this particular file belongs using the command line below.
$ rpm -qf /etc/my.cnf
mysql-3.23.58-9
    * Note that this will work only if the rpm package you are querying is already installed on the machine.
9.1.7. Uninstalling RPMs
    * The rpm -e command will erase an installed package. The package name given must match that listed by the rpm -qa command, because the version of the package is important.
    * Example:
$ rpm -e package-name
$ rpm -e mysql-3.23.58-9
9.2. Software Installations from Source using Tarballs
    * The tar file installation process usually requires you first to uncompress and extract the contents of the archive into a local subdirectory, which frequently has the same name as the tar file.
    * The subdirectory will usually contain a file called README or INSTALL, which outlines all the customized steps to install the software.


9.2.1. The GCC Compiler
    * The gcc C and C++ compilers are used to compile software on your system, most importantly the kernel. In case they are not present, you need to install or upgrade them.
    * The newest version of gcc is found on the Linux FTP sites. On sunsite.unc.edu it is found in the directory /pub/Linux/GCC (along with the libraries), eg: ftp://sunsite.unc.edu/pub/Linux/GCC. There should be a release file for the gcc distribution detailing what files you need to download and how to install them.
9.2.2. Steps for Installing from a Tarball
    1. Download the tarball using wget to /opt or some temporary directory. For example,
$ wget http://prdownloads.sourceforge.net/gaim/gaim-1.1.3.tar.gz
    2. The tarball format will generally be .tgz or .tar.gz, or sometimes .tbz or .tar.bz2. The tar.gz file has to be uncompressed and unarchived using the command line below.
$ tar -xvzf gaim-1.1.3.tar.gz
          * This creates a directory called gaim-1.1.3 within the current directory and unpacks all the files into that new directory.
    3. Once this is complete, the installation instructions will ask you to execute the 3 basic commands: configure, make and make install. First go to the gaim-1.1.3 directory to run these commands.
$ cd gaim-1.1.3
$ ./configure


          * The above command makes the shell run the script named 'configure' which exists in the current directory. The configure script checks for lots of dependencies on your system.
          * If any of the major requirements are missing on your system, the configure script will exit and you cannot proceed with the installation until you provide those required things.
          * The main job of the configure script is to create a 'Makefile'. Depending on the results of the tests (checks) that the configure script performed, it writes down the various steps that need to be taken while compiling the software in the file named Makefile.
    4. 'make' is a utility which exists on almost all Unix systems, and it requires a file named Makefile in the directory in which it is run. The make utility compiles all your program code and creates the executables. It sequences the build steps using 'labels' (targets) so that the build does not fail because of missing dependencies. Hence the next step is to run 'make' in the directory gaim-1.1.3.
$ make
    5. One of the labels present in the Makefile happens to be named 'install', and when you run 'make' with install as the parameter, the make utility searches for a label named install within the Makefile and executes only that section of the Makefile. The install section is the part where the executables and other required files created during the previous step (i.e. make) are copied into the required final directories on your machine (eg: /bin, /usr/bin or /usr/sbin). Similarly all the other files are also copied to the standard directories in Linux.
$ make install
The whole sequence is summarised below.
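Putting the steps together, a minimal sketch of a complete source build (the gaim-1.1.3 tarball from the steps above is used purely for illustration; the --prefix option is a common but optional way to choose the install location):
$ wget http://prdownloads.sourceforge.net/gaim/gaim-1.1.3.tar.gz
$ tar -xvzf gaim-1.1.3.tar.gz
$ cd gaim-1.1.3
$ ./configure --prefix=/usr/local    # check dependencies and generate the Makefile
$ make                               # compile the sources
$ make install                       # copy binaries, libraries and docs into place (run as root)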


9.3. Linux Kernel Recompilation
Linux is a shining example of the power of the Open Source movement as a positive force of change in the software industry.
    * The Linux kernel, the core of any Linux distribution, is constantly evolving to incorporate new technologies and improve performance, scalability, support, and usability.
    * Many of these enhancements are related to adding support for additional architectures, processors, buses, interfaces, and devices.
    * In addition to new features, each new stable Linux kernel version provides many improvements that standardize its internal interfaces, extend the performance and size of supported devices, and simplify adding support for new devices and subsystems to the kernel.
    * The kernel is the heart of the Linux operating system, managing all system threads, processes, resources, and resource allocation.
    * Unlike most other operating systems, Linux enables users to reconfigure the kernel, which is usually done to reduce its size, add or deactivate support for specific devices or subsystems, or both.
9.3.1. Linux Kernel - A Modular Kernel
    * Modules are pieces of code that can be loaded into and unloaded from the kernel on demand. They extend the functionality of the kernel without the need to reboot the system.
    * For example, one type of kernel module is the device driver, which allows the kernel to access hardware connected to the system.
    * Without modules, we would have to build monolithic kernels and add new functionality directly into the kernel image. Besides producing larger kernels, this has the disadvantage of requiring us to rebuild and reboot the kernel every time we want new functionality.
    * You can see the modules that are already loaded into the kernel using the lsmod command, which gets its information by reading the file /proc/modules.
$ lsmod
9.3.2. Recompiling the Kernel
9.3.2.1). Prerequisites
    * You need to have the latest version of GCC installed before going ahead with the recompile.


    * You need to have enough disk space in the partition which holds the /usr/src/ directory.
    * Also, for 2.6 kernel versions and above, you need to install the modutils and module-init-tools packages using the steps below.
To install modutils from a source rpm:
Download the latest modutils source rpm from http://www.kernel.org/pub/linux/kernel/people/rusty/modules/ and then compile and install it with the steps below.
rpmbuild --rebuild modutils-2.4.22.src.rpm
cd /usr/src/redhat/RPMS/i386
rpm -ivh modutils-2.4.22.rpm
To install module-init-tools from the source tarball:
$ wget http://www.kernel.org/pub/linux/kernel/people/rusty/modules/module-init-tools-3.0.tar.gz
$ tar -zxvf module-init-tools-3.0.tar.gz
$ cd module-init-tools-3.0/
$ ./configure --prefix=""
$ make
$ make install
$ ./generate-modprobe.conf /etc/modprobe.conf
9.3.2.2). Checking the Current Kernel and RedHat Version
    * The currently running version of the kernel can be checked using the command "uname -a".
$ uname -a
Linux educarma.com 2.4.20-8 #1 Thu Mar 13 17:18:24 EST 2003 i686 athlon i386 GNU/Linux


    * The example above shows that the system is running kernel version 2.4.20-8.
    * The running version of RedHat can be checked using the command line below:
$ cat /etc/redhat-release
9.3.2.3). Kernel Recompilation Steps
    1. Download the latest source from ftp://ftp.kernel.org/pub/linux/kernel/v2.6/linux-2.6.10.tar.gz
    2. Download the above source to the folder /usr/src.
    3. Uncompress the source using the command line below and go to that folder.
$ tar -xzvf linux-2.6.10.tar.gz
$ cd /usr/src/linux-2.6.10
    4. Do the step below to prepare the source tree. "make mrproper" cleans the tree and deletes your old .config file if you are rebuilding the kernel.
$ make mrproper
    5. Configuration: Use one of the following tools to create the .config file. This gives you the chance to choose what goes into the kernel. You can choose support for many of the latest capabilities and device drivers, and can tune the kernel for particular uses. Pick one of the following and type the command line from the directory /usr/src/linux-2.6.10:
          * $ make config (bash shell script) OR
          * $ make menuconfig (text window, uses curses) OR


          * $ make xconfig - recommended due to its online help feature and intuitive interface. Saves the configuration to the file .config. OR
          * $ make oldconfig - builds a configuration file based on the defaults found in the current .config file.
On using menuconfig, note the symbols below:
          * <*> or (y) = compile into the kernel (built-in)
          * <M> or (m) = compile as a module
    6. Compile: Make sure the compiling is done from the directory /usr/src/linux-2.6.10. The steps to be followed are listed below.
          * $ make dep (builds the dependencies for your chosen configuration; deprecated and not needed for kernel versions 2.6 and above).
          * Do a 'make distclean' to clean up junk from old compiles (in case the same kernel is being compiled again).
$ make distclean
Or
$ make clean
          * The next step is to create the new kernel image, which will later be moved to /boot/vmlinuz. bzImage is a highly compressed kernel image.
$ make bzImage
    7. Compile the kernel modules:
$ make modules              ====== compiles the kernel modules
$ make modules_install      ====== installs the modules and generates the file


/lib/modules/2.6.10/modules.dep
    8. Install: Follow the steps below to install the new kernel.
$ ln -s /usr/src/linux-2.6.10 /usr/src/linux
$ make install
    * make install actually does the steps below, so please make sure they are taken care of:
$ mv /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-2.6.10
$ ln -s /boot/vmlinuz-2.6.10 /boot/vmlinuz
$ mv /usr/src/linux-2.6.10/System.map /boot/System.map
$ ln -s /boot/System.map /boot/System.map-2.6.10
    * System.map file: The kernel has symbols, just like the programs you write. The difference is, of course, that the kernel is a very complicated piece of code and has many, many global symbols. The kernel doesn't use symbol names; it is much happier knowing a variable or function by its address. The kernel is mainly written in C, so the compiler/linker allows us to use symbol names when we code and allows the kernel to use addresses when it runs.
    * There are situations, however, where we need to know the address of a symbol (or the symbol for an address). This is done by a symbol table, which is a listing of all symbols along with their addresses.
    * Every time you compile a new kernel, the addresses of the various symbol names are bound to change. System.map is an actual file on your filesystem. When you compile a new kernel, your old System.map has the wrong symbol information. A new System.map is generated with each kernel compile, and you need to replace the old copy with the new one.
    9. Make the initrd image: Execute the following command, which creates the initrd image file used to let the system boot.


    * For a modular kernel, this image loads, during booting, the hardware drivers which are not built into the kernel.
    * The purpose of the initial RAM disk is to allow a modular kernel to have access to modules that it might need to boot from, before the kernel has access to the device where the modules normally reside.
    * It uses an empty directory called /initrd; if this directory is missing, the kernel will give a "kernel panic" and fail to boot.
$ mkinitrd /boot/initrd-2.6.10.img 2.6.10
The second argument is the name of the sub-directory of the modules under the directory /lib/modules/.
    10. Configure the boot loader lilo: lilo must point to the new kernel. Edit /etc/lilo.conf and add a new image section pointing to the new kernel. Keep the old one as a backup in case you need to boot using it. A sample lilo.conf file:
boot=/dev/hda
map=/boot/map                  - Location on the hard drive where the kernel map used for booting is found
install=/boot/boot.b
prompt
timeout=50
linear                         - Specific to SCSI configurations
default=linux
image=/boot/vmlinuz-2.4.3      - Old kernel
label=linux                    - Label/name to be displayed by the LILO boot manager


initrd=/boot/initrd-2.4.3.img
read-only
root=/dev/hda1
image=/boot/vmlinuz-2.6.10     --- New kernel
label=linux-2.6.10             --- Label for the new kernel
initrd=/boot/initrd-2.6.10.img
read-only
root=/dev/hda1
    11. Install lilo: Run /sbin/lilo -v to configure the master boot record with the data from lilo.conf.
$ /sbin/lilo -v
The kernel recompilation is now complete and you are ready to boot the machine on the new kernel. If you do not want to change the default boot image and want to boot the new image only on the next boot, call lilo using:
$ lilo -R linux-2.6.10
    12. Reboot the machine using the command line below.
$ reboot
9.3.3. Command Line Tools for Kernel-Level Administration
9.3.3.1). Kernel Modules Management
    * Modules are used to reduce the amount of memory used to hold the kernel.


    * There is a slight penalty in the time taken to load and unload a module.
    * If the code is required for general operation of the kernel, is needed often, or is required by the boot process, it is best to compile it into the kernel; it should NOT be compiled as a module.
Commands:
    * The kernel uses the modprobe utility (/sbin/modprobe) to load modules and determine whether a module is compatible with the kernel.
    * The program used is specified by a proc file:
$ cat /proc/sys/kernel/modprobe
    * Modules are loaded by init scripts which call insmod/rmmod to load/unload modules. The list of modules is held in /etc/modules.conf.
Command    Description
lsmod      List loaded modules
insmod     Inserts a module into the active kernel
           $ insmod usb-uhci
rmmod      Removes a loaded module. Just specify the module name; no ".o" or path is necessary.
           $ rmmod usb-uhci
modprobe   High-level handling of loadable modules. Loads a module and its dependencies.
depmod     Creates the dependencies file for modules (used by modprobe)
modinfo    Displays information about a kernel module
A couple of short examples are sketched below.
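As a brief sketch (the usb-uhci module from the table above is used purely for illustration; module names differ between kernels):
# Load a module together with any modules it depends on
$ modprobe usb-uhci

# Remove it again, also via modprobe
$ modprobe -r usb-uhci

# Show author, description, licence and parameters of a module
$ modinfo usb-uhci

# Rebuild the module dependency file used by modprobe
$ depmod -a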


Other useful commands:
    * List processor type: $ cat /proc/cpuinfo
    * List devices: $ cat /proc/devices
    * List PCI devices: $ lspci
    * List USB devices: $ lsusb
    * List I/O ports (device addresses used by drivers): $ cat /proc/ioports
    * List DMA channels: $ cat /proc/dma
    * View interrupts used by the system:


$ cat /proc/interrupts * Display boot messages: $ dmesg * Display sound driver status: $ cat /dev/sndstat 9.4 . More About Lilo and Grub 9.4.1. Grub (Grand Unified Boot loader) * Briefly, a boot loader is the first software program that runs when a computer starts. It is responsible for loading and transferring control to an operating system kernel software. * Grub can load a wide variety of operating sys tems and other proprietary operating systems like windows using chain loading. * Chain loading is a method by which another bo ot loader is loaded to boot the unsupported operating system. Some of the advantages of Grub are 1. Recognize multiple executable formats 2. Support non-Multiboot kernels 3. Load multiple modules 4. Load a configuration file 5. Provide a menu interface 6. Have a flexible command-line interface 7. Support multiple filesystem types


8. Support automatic decompression 9. Access data on any installed device 10. Be independent of drive geometry transl ations 11. Detect all installed RAM 12. Support Logical Block Address mode 13. Support network booting via ftp 14. Support remote terminals ( serial ) 9.4.1.1). Stages in Grub Loading GRUB loads itself into memory in the following stag es: * The Stage 1 or primary boot loader is read in to memory by the BIOS from the MBR. The primary boot loader exists on less tha n 512 bytes of disk space within the MBR and is capable of loading either the Stage 1.5 or Stage 2 boot loader. * The Stage 1.5 boot loader is read into memory by the Stage 1 boot loader, if necessary. Some hardware requires an intermediat e step to get to the Stage 2 boot loader. This is sometimes true when the /boot partition is above the 1024 cylinder head of the hard drive or when using LBA m ode. The Stage 1.5 boot loader is found either on the /boot partition or on a small part of the MBR and the /boot partition. * The Stage 2 or secondary boot loader is read into memory. The secondary boot loader displays the GRUB menu and command envi ronment. This interface allows you to select which operating system or Linu x kernel to boot, pass arguments to the kernel, or look at system paramete rs, such as available RAM. * The secondary boot loader reads the operating system or kernel and initrd into memory. Once GRUB determines which operating s ystem to start, it loads it into memory and transfers control of the machine to that operating system. 9.4.1.2). Direct Loading and Chain Loading Booting Methods


* The boot method used to boot Red Hat Linux is called the direct loading method because the boot loader loads the operating system directly. There is no intermediary between the boot loader and the kernel . * The boot process used by other operating syst ems may differ. For example, Microsoft's DOS and Windows operating systems, as w ell as various other proprietary operating systems, are loaded using a c hain loading boot method. * Under this method, the MBR points to the firs t sector of the partition holding the operating system. There it finds the fi les necessary to actually boot that operating system. * GRUB supports both direct and chain-loading b oot methods, allowing it to boot almost any operating system. 9.4.1.3). Naming Conventions and Partitions used by Grub GRUB uses the following rules when naming devices a nd partitions: * It does not matter if system hard drives are IDE or SCSI. All hard drives start with hd. Floppy disks start with fd. * To specify an entire device without respect t o its partitions, leave off the comma and the partition number. This is importa nt when telling GRUB to configure the MBR for a particular disk. For exampl e, (hd0) specifies the MBR on the first device and (hd3) specifies the MBR on the fourth device. File Names and Blocklists * When typing commands to GRUB involving a file , such as a menu list to use when allowing the booting of multiple operating sys tems, it is necessary to include the file immediately after specifying the d evice and partition. *


Most of the time, you will be specifying file s by their path on that partition plus the file's name. This is rather stra ightforward. An example is (hd0,0)/grub/grub.conf. * It is also possible to specify files to GRUB that do not actually appear in the file system, such as a chain loader that app ears in the first few blocks of a partition. * To specify these files, you must provide a bl ocklist, which tells GRUB, block by block, where the file is located in the pa rtition, since a file can be comprised of several different sets of blocks, ther e is a specific way to write blocklists. * Each file's section location is described by an offset number of blocks and then a number of blocks from that offset point, and the sections are put together in a comma-delimited order. The following is a sample blocklist: 0+50,100+25,200+1 This blocklist tells GRUB to use a file that starts at the first block on the partition and uses blocks 0 through 49, 99 through 124, and 199. * Knowing how to write blocklists is useful whe n using GRUB to load operating systems that use chain loading, such as M icrosoft Windows. * It is possible to leave off the offset number of blocks if starting at block 0. As an example, the chain loading file in t he first partition of the first hard drive would have the following name: (hd0,0)+1 * You can also use the chainloader command with a similar blocklist designation at the GRUB command line after setting the correct device and partition as root: chainloader +1 GRUB's Root File System * The GRUB root file system is the root partiti on for a particular device. GRUB uses this information to mount the device and load files from it.


* With Red Hat Linux, once GRUB has loaded its root partition (which equates to the /boot partition and contains the Linux kerne l), the kernel command can be executed with the location of the kernel file as an option. Naming convention used by grub to identify devices * First of all grub requires the device names t o be enclosed with ( and ). For example, * GRUB uses its own unique partition numbering scheme; it starts from 0. * hd0,0 means the first partition of the first drive, or hda1. Both SCSI and IDE drives are represented by hd. GRUB numbers sequ entially, from zero: hda1 hd0,0 First partition of the first drive hda2 hd0,1 Second partition of the first drive hda3 hd0,2 Third partition of the first drive hda4 hd0,3 Fourth partition of the first drive * But that's not all. Remember, the standard Li nux partition table is like this: 1-4 primary partitions 5-up extended partitions In GRUB, it's like this: 0-3 primary partitions 4-up extended partitions * To specify a file on the first partition of t he first drive, use the command as, (hd0,0)/vmlinuz. This specifies the fil e named vmlinuz.


9.4.1.4). Installing and Booting Grub How to install Grub * First install the grub system and utilities f rom the tar ball or the package available for your system. On redhat linux it is, grub-0.94-5. * Install the boot loader. This could be done u sing the grub binary named as grub-install. $ grub-install /dev/hda OR $ grub-install /dev/hd0 * This will install grub on the MBR of the firs t hard disk. * If you have a separate boot partition then gr ub should be installed as, $ grub-install --root-directory=/boot /dev/hda How to boot operating systems GRUB has two distinct boot methods. * One of the two is to load an operating system directly, and * the other is to chain-load another boot loade r which then will load an actual operating system. * However, the latter is sometimes required, si nce GRUB doesn't support all the existing operating systems natively. GRUB image files GRUB consists of several images: two essential stag es, optional stages called Stage 1.5.


* Stage1 Image This is an essential image used for booting up GRUB . Usually, this is embedded in an MBR or the boot sector of a partition. o Because a PC boot sector is 512 bytes, the size of this image is exactly 512 bytes. o All stage1 must do is to load Stage 2 o r Stage 1.5 from a local disk. o Because of the size restriction, stage1 encodes the location of Stage 2 (or Stage 1.5) in a block list format, so i t never understand any filesystem structure. * Stage2 Image This is the core image of GRUB. It does everything but booting up itself. 9.4.1.5). GRUB Interfaces GRUB features three interfaces, which provide diffe rent levels of functionality. Each of these interfaces allows users to boot opera ting systems, and move between interfaces within the GRUB environment. 1. Menu Interface * If GRUB was automatically configured by the R ed Hat Linux installation program, this is the interface shown by default. * A menu of operating systems or kernels precon figured with their own boot commands are displayed as a list, ordered by name. * Use the arrow keys to select an option other than the default selection and press the [Enter] key to boot it. Alternatively , a timeout period is set, so that GRUB will start loading the default option. * From the menu interface, press the [e] key to enter the entry editor interface or the [c] key to load a command line int erface.


2. Menu Entry Editor Interface * To access the menu entry editor, press the [e ] key from the boot loader menu interface. * The GRUB commands for that entry are displaye d here, and users may alter these command lines before booting the operating sy stem by adding a command line such as below. * [o] inserts the new line after the current li ne and [O] before it, editing one ([e]), or deleting one ([d]). * After all changes are made, hit the [b] key t o execute the commands and boot the operating system. * The [Esc] key discards any changes and reload s the standard menu interface. * The [c] key will load the command line interf ace. * This method can be used to boot linux in “s ingle user†� mode. 3. Command Line Interface * The command line is the most basic GRUB inter face, but it is also the one that grants the most control. * The command line makes it possible to type an y relevant GRUB commands followed by the [Enter] key to execute them. * This interface features some advanced shell-l ike features, including [Tab] key completion, based on context, and [Ctrl] key co mbinations when typing commands, such as [Ctrl]-[a] to move to the beginni ng of a line, and [Ctrl]-[e] to move to the end of a line. *


In addition, the arrow, [Home], [End], and [Delete] keys work as they do in the bash shell.
    * The grub command line can be accessed from a normal bash shell on Linux systems where grub is installed, using the command "grub".
$ grub
Order of Interface Use
    * When the GRUB environment loads the second stage boot loader, it looks for its configuration file.
    * When found, it uses the configuration file to build the menu list and displays the boot menu interface.
    * If the configuration file cannot be found, or if the configuration file is unreadable, GRUB will load the command line interface to allow users to manually type the commands necessary to boot an operating system.
    * If the configuration file is not valid, GRUB will print out the error and ask for input. This can be very helpful, because users will then be able to see precisely where the problem occurred and fix it in the file.
    * Pressing any key will reload the menu interface, where it is then possible to edit the menu option and correct the problem based on the error reported by GRUB. If the correction fails, the error is reported and GRUB starts again.


* chainloader <file-name> — Loads the specifi ed file as a chain loader. To grab the file at the first sector of the specified partition, use +1 as the file's name. +1' indicates that GRUB should read on e sector from the start of the partition * displaymem — Displays the current use of me mory, based on information from the BIOS. This is useful to determine how much RAM a system has prior to booting it. * initrd <file-name> — Enables users to speci fy an initial RAM disk to use when booting. An initrd is necessary when the kerne l needs certain modules in order to boot properly, such as when the root parti tion is formated with the ext3 file system. * install <stage-1> <install-disk> <stage-2> p <config-file> — Installs GRUB to the system MBR. When using the install comm and the user must specify the following: o <stage-1> — Signifies a device, parti tion, and file where the first boot loader image can be found, such as (hd0, 0)/grub/stage1. o <install-disk> — Specifies the disk w here the stage 1 boot loader should be installed, such as (hd0). o <stage-2> — Passes to the stage 1 boo t loader the location of where the stage 2 boot loader is located, such as ( hd0,0)/grub/stage2. o p <config-file> — This option tells t he install command to look for the menu configuration file specified by <confi g-file>. An example of a valid path to the configuration file is (hd0,0)/gru b/grub.conf. Eg: install /grub/stage1 (hd0) /grub/stage2 p /grub /grub.conf * kernel <kernel-file-name> <option-1> <option- N> — Specifies the kernel file to load from GRUB's root file system when usin g direct loading to boot the operating system. Options can follow the kernel com mand and will be passed to the kernel when it is loaded. For Red Hat Linux, an example kernel command looks like the following:


kernel /vmlinuz root=/dev/hda1 Example to load a Linux kernel from grub command li ne: grub> kernel (hd0,1)/boot/vmlinuz root=/dev/hda 2 grub> boot * This line specifies that the vmlinuz file is loaded from GRUB's root file system, such as (hd0,0). * An option is also passed to the kernel specif ying that when loading the root file system for the Linux kernel, it should be on hda5, the fifth partition on the first IDE hard drive. Multiple options may b e placed after this option, if needed. * root <device-and-partition> — Configures GR UB's root partition to be a specific device and partition, such as (hd0,0), and mounts the partition so that files can be read. * rootnoverify <device-and-partition> — Perfo rms the same functions as the root command but does not mount the partition. 9.4.1.7). GRUB Menu Configuration File * The configuration file (/boot/grub/grub.conf) , which is used to create the list of operating systems to boot in GRUB's menu in terface, essentially allows the user to select a pre-set group of commands to e xecute. Special Configuration File Commands The following commands can only be used in the GRUB menu configuration file: * color <normal-color> <selected-color> — All ows specific colors to be used in the menu, where two colors are configured a s the foreground and background. Use simple color names, such as red/bla ck. * default <title-name> — The default entry ti tle name that will be loaded if the menu interface times out.


* fallback <title-name> — If used, the entry title name to try if first attempt fails. * hiddenmenu — If used, prevents the GRUB men u interface from being displayed, loading the default entry when the timeo ut period expires. The user can see the standard GRUB menu by pressing the key. * password <password> — If used, prevents a u ser who does not know the password from editing the entries for this menu opt ion. * timeout — If used, sets the interval, in se conds, before GRUB loads the entry designated by the default command. * splashimage — Specifies the location of the splash screen image to be used when GRUB boots. * title — Sets a title to be used with a part icular group of commands used to load an operating system. * The hash mark (#) character can be used at th e beginning of a line to place comments in the menu configuration file. Configuration File Structure * The GRUB menu interface configuration file is /boot/grub/grub.conf. * The commands to set the global preferences fo r the menu interface are placed at the top of the file, followed by the diff erent entries for each of the operating systems or kernels listed in the menu. * The following is a very basic GRUB menu confi guration file designed to boot either Red Hat Linux and Microsoft Windows 200 0: default=0 fallback=1 timeout=10 splashimage=(hd0,0)/grub/splash.xpm.gz


# section to load linux title Red Hat Linux (2.4.20) root (hd0,0) kernel /vmlinuz-2.4.20 ro root=/dev/hda2 initrd /initrd-2.4.20.img # section to load Windows 2000 title windows rootnoverify (hd0,0) chainloader +1 * This file tells GRUB to build a menu with Red Hat Linux as the default operating system and sets it to autoboot after 10 s econds. * Two sections are given, one for each operatin g system entry, with commands specific to the system disk partition table. * Note that the default is specified as a numbe r. This refers to the first title line GRUB comes across. * If you want windows to be the default, change the default=0 to default=1. * chainloader +1 boots the windows partition fr om the first sector of the first hard drive. 9.4.1.8). Changing Runlevels at Boot Time If you are using GRUB as your boot loader, follow t hese steps: * In the graphical GRUB boot loader screen, sel ect the Red Hat Linux boot label and press [e] to edit it. *


Arrow down to the kernel line and press [e] t o edit it. * At the prompt, type the number of the runleve l you wish to boot into (1 through 5), or the word single and press [Enter]. * You will be returned to the GRUB screen with the kernel information. Press the [b] key to boot the system. 9.4.2. LILO or Linux Loader * LILO stands for Linux Loader. * The Linux Loader or LILO is one of the most p opular methods of booting into Linux.It is the Linux boot manager that is eit her written to the Master Boot Record of your hard drive or to the first sect or of your hard drive. * It also allows you to choose which operating system to load if you have multiple operating systems on your machine. It also allows you to boot different Linux kernel versions if you want. * Because of this, its a very flexible boot loa der. LILO vs. GRUB In general, LILO works similarly to GRUB except for three major differences: 1. It has no interactive command interface and only allows one command with arguments. 2. It stores information about the locatio n of the kernel or other operating system it is to load on the MBR. 3. It cannot read ext2 partitions. *

* The last two points mean that if you change LILO's configuration file or install a new kernel, you must rewrite the Stage 1 LILO boot loader to the MBR by issuing the /sbin/lilo -v command.
* This is more risky than GRUB's method, because a misconfigured MBR leaves the system unbootable. With GRUB, if the configuration file is erroneously configured, it will simply default to its command line interface.

9.4.2.1). LILO Booting stages
* LILO loads itself into memory almost identically to GRUB, except it is only a two stage loader.
        1. The Stage 1 or primary boot loader is read into memory by the BIOS from the MBR. The primary boot loader exists on less than 512 bytes of disk space within the MBR. The only thing it does is load the Stage 2 boot loader and pass to it disk geometry information.
        2. The Stage 2 or secondary boot loader is read into memory. The secondary boot loader displays the Red Hat Linux initial screen. This screen allows you to select which operating system or Linux kernel to boot.
        3. The Stage 2 boot loader reads the operating system or kernel and initrd into memory. Once LILO determines which operating system to start, it loads it into memory and hands control of the machine to that operating system.

9.4.2.2) Lilo Configuration File
* Default Configuration : During the installation of Linux, you are given the option to install LILO as your boot manager. If you choose to install it, the LILO configuration file is usually /etc/lilo.conf (the default for RedHat).
* A typical configuration file will look like the following:

        boot=/dev/hda
        map=/boot/map

        install=/boot/boot.b
        prompt
        timeout=50
        message=/boot/message
        lba32
        default=linux
        append="hdc=ide-scsi"
        image=/boot/vmlinuz-2.2.5-15
                label=linux
                root=/dev/hda3
                initrd=/boot/initrd-2.2.5-15.img
                read-only
        other=/dev/hda1
                label=dos

The following is a more detailed look at the lines of this file:
* boot=/dev/hda — Instructs LILO to be installed on the first hard disk of the first IDE controller.
* map=/boot/map — Locates the map file. In normal use, this should not be modified.
* install=/boot/boot.b — Instructs LILO to install the specified file as the new boot sector. In normal use, this should not be altered. If the install line is missing, LILO assumes a default of /boot/boot.b as the file to be used.
* prompt — Instructs LILO to show you whatever is referenced in the message line. While it is not recommended that you remove the prompt line, if you do remove it, you can still access a prompt by holding down the [Shift] key while your machine starts to boot.
* timeout=50 — Sets the amount of time that LILO waits for user input before proceeding with booting the default line entry. This is measured in tenths of a second, with 50 as the default.

* message=/boot/message — Refers to the screen that LILO displays to let you select the operating system or kernel to boot.
* lba32 — Describes the hard disk geometry to LILO. Another common entry here is linear. You should not change this line unless you are very aware of what you are doing. Otherwise, you could put your system in an unbootable state.
* default=linux — Refers to the default operating system for LILO to boot as seen in the options listed below this line. The name linux refers to the label line below in each of the boot options.
* image=/boot/vmlinuz-2.2.5-15 — Specifies which Linux kernel to boot with this particular boot option.
* label=linux — Names the operating system option in the LILO screen. In this case, it is also the name referred to by the default line.
* initrd=/boot/initrd-2.2.5-15.img — Refers to the initial ram disk image that is used at boot time to initialize and start the devices that make booting the kernel possible. The initial ram disk is a collection of machine-specific drivers necessary to operate a SCSI card, hard drive, or any other device needed to load the kernel. You should never try to share initial ram disks between machines.
* read-only — Specifies that the root partition (refer to the root line below) is read-only and cannot be altered during the boot process.
* root=/dev/hda3 — Specifies which disk partition to use as the root partition.
* other=/dev/hda1 — Specifies the partition containing DOS.
* append="hdc=ide-scsi" — Allows you to pass parameters to the kernel at boot without any intervention from you. This can be a global setting or a per-image setting. Just enter the parameters that are to be passed to the kernel within double quotes. The advantage is that you don't have to pass the parameters to Linux at every boot. Here, using the append statement, you can tell Linux to use the ide-scsi module for /dev/hdc.

9.4.2.3). Installing lilo

* After editing the configuration file to include additional operating systems or additional kernels, the lilo command must be run for your changes to take effect.
        $ /sbin/lilo
        OR
        $ lilo
* To add multiple kernel images, they can be appended to the /etc/lilo.conf file.
* To get a more verbose description of the labels that have been added by lilo, use the option
        $ lilo -v
* To instruct lilo to use a specific kernel only on the next reboot, without changing the default in the /etc/lilo.conf file, use the option
        $ lilo -R <labelname>

9.4.2.4). Changing Runlevel at Boot Time
* If you use LILO as your boot loader, access the boot: prompt by typing [Ctrl]-[X]. Then type:
        linux <number>
        Eg: linux single
            linux 5
* Replace number with either the number of the runlevel you wish to boot into (1 through 5), or the word single to boot into single user mode.

10. LINUX SERVICES

10.1. Open SSH Server
OpenSSH is a free, open source implementation of the SSH (Secure Shell) protocols.

* It replaces telnet, ftp, rlogin, rsh, and rcp with secure, encrypted network connectivity tools.
* OpenSSH supports versions 1.3, 1.5, and 2 of the SSH protocol. Since OpenSSH version 2.9, the default protocol is version 2, which uses RSA keys as the default.
* Another reason to use OpenSSH is that it automatically forwards the DISPLAY variable to the client machine. In other words, if you are running the X Window System on your local machine, and you log in to a remote machine using the ssh command, when you execute a program on the remote machine that requires X, it will be displayed on your local machine.
* This is convenient if you prefer graphical system administration tools but do not always have physical access to your server.

10.1.1. Configuring an OpenSSH server
* To run an OpenSSH server you require two packages,
        openssh-server package
        openssh package
* You can use 'rpm -qa' to see the versions of the openssh-server and openssh packages installed on the server
        $ rpm -qa openssh
        $ rpm -qa openssh-server
* The configuration file used by sshd is /etc/ssh/sshd_config.
* The port on which sshd listens by default is port 22.
* To start the service use the command
        $ /sbin/service sshd start
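As a quick orientation to /etc/ssh/sshd_config mentioned above, the sketch below shows a few commonly tuned directives. The values are illustrative assumptions rather than recommended settings; consult man sshd_config before changing them on a real server.

        # /etc/ssh/sshd_config — minimal sketch (values are examples only)
        Port 22                      # change if sshd should listen on a non-standard port
        Protocol 2                   # prefer SSH protocol version 2
        PermitRootLogin no           # force administrators to log in as a normal user first
        X11Forwarding yes            # allow DISPLAY forwarding as described above
        PasswordAuthentication yes   # set to no once key pairs (section 10.1.2.4) are in place

After editing the file, restart sshd (for example with $ /sbin/service sshd restart) so the new settings take effect.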

* To stop the server use the command
        $ /sbin/service sshd stop

10.1.2. Configuring an OpenSSH Client
* ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine.
* It is intended to replace rlogin and rsh, and provide secure encrypted communications between two untrusted hosts over an insecure network.
* The packages required for an OpenSSH client are
        o openssh-clients
        o openssh

10.1.2.1). Using the SSH command
* Logging in to a remote machine with ssh is similar to using telnet.
* To log in to a remote machine named penguin.example.net, type the following command at a shell prompt:
        $ ssh [-l login_name] hostname OR user@hostname [command]
* If you don't use the -l option, the login name will be the username initiating the connection, i.e. if you are logged in as x then the username will be x unless specified by the -l <username> option.
* Use the option -p to specify a port on the remote machine if sshd is not running on the standard port 22.
* To disable X11 forwarding use the option -x.
        $ ssh penguin.example.net
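Combining the options above — assuming, purely for illustration, an account named admin on penguin.example.net and an sshd listening on the non-standard port 2222 — the invocation would look like this:

        $ ssh -l admin -p 2222 penguin.example.net
        # equivalent form using user@hostname
        $ ssh -p 2222 [email protected]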

* The first time you ssh to a remote machine, you will see a message similar to the following.

        ##########################
        The authenticity of host 'penguin.example.net' can't be established.
        DSA key fingerprint is 94:68:3a:3a:bc:f3:9a:9b:01:5d:b3:07:38:e2:11:0c.
        Are you sure you want to continue connecting (yes/no)?

  Type yes to continue. This will add the server to your list of known hosts as seen in the following message:

        Warning: Permanently added 'penguin.example.net' (DSA) to the list of known hosts.
        ##########################

* Next, you'll see a prompt asking for your password for the remote machine. After entering your password, you will be at a shell prompt for the remote machine.
* If you use ssh without any command line options, the username that you are logged in as on the local client machine is passed to the remote machine.
* The ssh command can be used to execute a command on the remote machine without logging in to a shell prompt.
* For example, if you want to execute the command 'ls /usr/share/doc' on the remote machine penguin.example.net, type the following command at a shell prompt:
        $ ssh penguin.example.net ls /usr/share/doc
  After you enter the correct password, the contents of /usr/share/doc will be displayed, and you will return to your shell prompt.

10.1.2.2). Using the scp Command

* The scp command can be used to transfer files between machines over a secure, encrypted connection. It is similar to rcp.
* The general syntax to transfer a local file to a remote system is
        $ scp localfile username@tohostname:/newfilename
* The localfile specifies the source, and the group username@tohostname:/newfilename specifies the destination.
* To transfer the local file /root/testfile to your account on penguin.educarma.com, type the following at a shell prompt (replace username with your username):
        $ scp /root/testfile [email protected]:/home/username
* This will transfer the local file /root/testfile to /home/username/testfile on penguin.educarma.com.
* The general syntax to transfer a remote file to the local system is
        $ scp username@tohostname:/remotefile /newlocalfile
* The remotefile specifies the path of the file on the remote machine, and newlocalfile specifies the destination.
* Multiple files can be specified as the source files. For example, to transfer the contents of the directory /downloads to an existing directory called uploads on the remote machine penguin.educarma.com, type the following at a shell prompt:
        $ scp /downloads/* [email protected]:/uploads/

10.1.2.3). Using the sftp Command
* The sftp utility can be used to open a secure, interactive FTP session.

* It is similar to ftp except that it uses a secure, encrypted connection.
* The general syntax is sftp username@hostname.com.
* Once authenticated, you can use a set of commands similar to using FTP.
* The sftp utility is only available in OpenSSH version 2.5.0p1 and higher.

10.1.2.4). Generating Key Pairs
* If you do not want to enter your password every time you use ssh, scp, or sftp to connect to a remote machine, you can generate an authorization key pair.
* Keys must be generated for each user. To generate keys for a user, follow these steps as the user who wants to connect to remote machines.
* If you complete the following steps as root, only root will be able to use the keys.

Generating a DSA Key Pair for Version 2
Use the following steps to generate a DSA key pair for version 2 of the SSH protocol.
* To generate a DSA key pair to work with version 2 of the protocol, type the following command at a shell prompt:
        $ ssh-keygen -t dsa
* Accept the default file location of ~/.ssh/id_dsa. Enter a passphrase different from your account password and confirm it by entering it again.

* The public key is written to ~/.ssh/id_dsa.pub. The private key is written to ~/.ssh/id_dsa. It is important never to give anyone the private key.
* Change the permissions of your .ssh directory using the command
        $ chmod 755 ~/.ssh
* Copy the contents of ~/.ssh/id_dsa.pub to ~/.ssh/authorized_keys on the machine to which you want to connect.
* If the file ~/.ssh/authorized_keys does not exist, you can copy the file ~/.ssh/id_dsa.pub to the file ~/.ssh/authorized_keys on the other machine.

Generating an RSA Key Pair for Version 2
Use the following steps to generate an RSA key pair for version 2 of the SSH protocol. This is the default starting with OpenSSH 2.9.
* To generate an RSA key pair to work with version 2 of the protocol, type the following command at a shell prompt:
        $ ssh-keygen -t rsa
* Accept the default file location of ~/.ssh/id_rsa. Enter a passphrase different from your account password and confirm it by entering it again.
* The public key is written to ~/.ssh/id_rsa.pub. The private key is written to ~/.ssh/id_rsa. Never distribute your private key to anyone.
* Change the permissions of your .ssh directory using the command
        $ chmod 755 ~/.ssh
* Copy the contents of ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on the machine to which you want to connect. If the file ~/.ssh/authorized_keys does not exist, you can copy the file ~/.ssh/id_rsa.pub to the file ~/.ssh/authorized_keys on the other machine.
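Putting the RSA version 2 steps together, a typical session might look like the sketch below. The remote host penguin.example.net and the account username are assumptions used only for illustration; the append-over-ssh idiom is one common way to copy the public key when authorized_keys may already contain other keys.

        $ ssh-keygen -t rsa
        $ chmod 755 ~/.ssh
        # append the public key to the remote authorized_keys file
        $ cat ~/.ssh/id_rsa.pub | ssh [email protected] 'cat >> ~/.ssh/authorized_keys'
        # from now on, ssh should ask for the key passphrase instead of the account password
        $ ssh [email protected]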

Generating an RSA Key Pair for Version 1.3 and 1.5
Use the following steps to generate an RSA key pair, which is used by version 1 of the SSH protocol. If you are only connecting between Red Hat Linux 7.3 systems, you do not need an RSA version 1.3 or RSA version 1.5 key pair.
* To generate an RSA (for version 1.3 and 1.5 protocol) key pair, type the following command at a shell prompt:
        $ ssh-keygen -t rsa1
* Accept the default file location (~/.ssh/identity). Enter a passphrase different from your account password. Confirm the passphrase by entering it again.
* The public key is written to ~/.ssh/identity.pub. The private key is written to ~/.ssh/identity. Do not give anyone the private key.
* Change the permissions of your .ssh directory and your key with the commands below:
        $ chmod 755 ~/.ssh
        $ chmod 644 ~/.ssh/identity.pub
* Copy the contents of ~/.ssh/identity.pub to the file ~/.ssh/authorized_keys on the machine to which you wish to connect.
* If the file ~/.ssh/authorized_keys does not exist, you can copy the file ~/.ssh/identity.pub to the file ~/.ssh/authorized_keys on the remote machine.

10.2. Berkeley Internet Name Domain (BIND) Server
* In modern networks, users identify other computers by their name.

* The most effective way to achieve this is by means of DNS (Domain Name Service), or a nameserver, which resolves hostnames on the network to numerical addresses and vice versa.
* DNS is usually implemented using centralized nameservers, which are authoritative for the machines belonging to the network on which the nameserver is implemented, and which forward queries for other domains to other DNS servers.
* When a client host requests information from a nameserver, it usually connects to port 53.
* The nameserver then attempts to resolve the FQDN based on its resolver library, which may contain authoritative information about the host requested or cached data from an earlier query.
* If the nameserver does not already have the answer in its resolver library, it queries other nameservers, called root nameservers, to determine which nameservers are authoritative for the FQDN in question.
* Then, with that information, it queries the authoritative nameservers to determine the IP address of the requested host. If performing a reverse lookup, the same procedure is used, except the query is made with an unknown IP address rather than a name.

10.2.1. Nameserver Zones
* On the Internet, the FQDN (Fully Qualified Domain Name) of a host can be broken down into different sections.
* These sections are organized into a hierarchy much like a tree, with a main trunk, primary branches, secondary branches, and so forth.
* Consider the following FQDN: bob.sales.example.com
* When looking at how a FQDN is resolved to find the IP address that relates to a particular system, read the name from right to left, with each level of the hierarchy divided by periods (.).
* In this example, com defines the top level domain for this FQDN. The name 'example' is a subdomain under com, while sales is a sub-domain under example. The name furthest to the left, bob, identifies a specific machine.
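You can watch both directions of this resolution with the dig utility that ships with the BIND tools. The name and address below are illustrative only; bob.sales.example.com is the example name used above and will not resolve on a real network:

        # forward lookup: name to address
        $ dig bob.sales.example.com A
        # reverse lookup: address to name
        $ dig -x 192.168.1.25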

* Except for the hostname, each section is called a zone, which defines a specific namespace.
* A namespace controls the naming of the sub-domains to its left. While this example only contains two sub-domains, a FQDN must contain at least one sub-domain but may include many more, depending upon how the namespace is organized.
* Zones are defined on authoritative nameservers through the use of zone files, which describe the namespace of that zone, the mail servers to be used for a particular domain or sub-domain, and more.
* Zone files are stored on primary nameservers (also called master nameservers), which are truly authoritative and where changes are made to the zone files, and secondary nameservers (also called slave nameservers), which receive their zone files from the primary nameservers.
* Any nameserver can be a primary and a secondary nameserver for different zones at the same time, and they may also be considered authoritative for multiple zones. It all depends on how the nameserver is configured.

10.2.2. Types of Nameservers
* master -- Stores original and authoritative zone records for a certain namespace, answering questions from other nameservers searching for answers concerning that namespace.
* slave -- Answers queries from other nameservers concerning namespaces for which it is considered an authority. However, slave nameservers get their namespace information from master nameservers.
* caching only -- Offers name to IP resolution services but is not authoritative for any zones. Answers for all resolutions are cached in memory for a fixed period of time, which is specified by the retrieved zone record.
* forwarding -- Forwards requests to a specific list of nameservers for name resolution. If none of the specified nameservers can perform the resolution, the resolution fails.

10.2.3. BIND as a Nameserver
* BIND performs name resolution services through the /usr/sbin/named daemon.

* BIND also includes an administration utility called /usr/sbin/rndc.

10.2.3.1). Configuration Files
/etc/named.conf : The named.conf file is a collection of statements using nested options surrounded by opening and closing curly braces, { }.
/var/named directory : The named working directory, which stores zone, statistics, and cache files.

A typical named.conf file is organized similar to the following example:

        <statement-1> ["<statement-1-name>"] [<statement-1-class>] {
                <option-1>;
                <option-2>;
                <option-N>;
        };
        <statement-2> ["<statement-2-name>"] [<statement-2-class>] {
                <option-1>;
                <option-2>;
                <option-N>;
        };

Named and BIND will be discussed in more detail in later sections.

10.3. File Transfer Program or FTP
* FTP - Internet file transfer program.

* The FTP utility program is commonly used for copying files to and from other computers.
* Command usage,
        $ ftp [-pinegvd] [hostname]
        -p — passive mode transfer
        -i — turn off interactive mode
        -n — restrains ftp from attempting an auto-login
        -e — disables command editing and history support
        -g — disables file name globbing
        -v — shows all responses from the remote server
        -d — enables debugging

10.3.1. FTP server/client
* The FTP server program can be proftpd, pureftpd, or vsftpd, which will be dealt with in more detail later.
* The FTP server runs on port 21 on the server and uses the TCP protocol.
* The FTP client could be third-party software like WS_FTP or SmartFTP, or the simple ftp user interface on a Linux machine.

10.3.2. FTP Commandline Interface
* To connect your local machine to the remote machine, type
        $ ftp [options] machinename/IP_Address
  where machinename is the full hostname of the remote machine, or its IP address.
* In order to log in to ftp on a remote machine, you require an ftp login username and password on the remote machine. When you enter your own login name and password for the remote machine, it returns the prompt below, which means you are connected to the ftp server on the remote machine:
        ftp>
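A connection therefore typically looks like the short transcript below. The hostname, account name, and server banners are illustrative assumptions; the exact responses depend on the FTP server in use:

        $ ftp ftp.example.com
        Connected to ftp.example.com.
        220 FTP server ready.
        Name (ftp.example.com:alice): alice
        331 Password required for alice.
        Password:
        230 User alice logged in.
        ftp>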

* Once you are logged in, ftp permits you access to your own home directory on the remote machine.
* You should be able to move around in your own directory and to copy files to and from your local machine using the FTP interface commands listed in the table of common FTP commands below.

FTP Active and Passive Mode
FTP transfers using the FTP protocol involve two TCP connections. The first, the control connection, goes from the FTP client to port 21 on the FTP server. This connection is used for logon and to send commands and responses between the endpoints. Data transfers (including the output of "ls" and "dir" commands) require a second data connection. The data connection is dependent on the mode that the client is operating in:

Active Mode
In active mode FTP the client connects from a random unprivileged port (N > 1024) to the FTP server's command port, port 21. Then, the client starts listening on port N+1 and sends the FTP command PORT N+1 to the FTP server. The server will then connect back to the client's specified data port from its local data port, which is port 20, for transferring data.

FTP Active Mode Data Transfer

Passive Mode
Passive mode is named after the command PASV used by the client to tell the server it is in passive mode.
In passive mode FTP the client initiates both connections to the server, solving the problem of firewalls filtering the incoming data port connection to the client from the server. When opening an FTP connection, the client opens two random unprivileged ports locally (N > 1024 and N+1). The first port contacts the server on port 21, but instead of then issuing a PORT command and allowing the server to connect back to its data port, the client will issue the PASV command. The result of this is that the server then opens a random unprivileged port (P > 1024) and sends the PORT P command back to the client. The client then initiates the connection from port N+1 to port P on the server to transfer data.

FTP Passive Mode Data Transfer

10.3.2.1) Anonymous FTP

* At times you may wish to copy files from a remote machine on which you do not have a login name. This can be done using anonymous FTP.
* When the remote machine asks for your login name, you should type in the word anonymous. Instead of a password, you should enter your own electronic mail address. This allows the remote site to keep records of the anonymous FTP requests.
* Once you have been logged in, you are in the anonymous directory for the remote machine. This usually contains a number of public files and directories. Again you should be able to move around in these directories.
* However, you are only able to copy the files from the remote machine to your own local machine; you are not able to write on the remote machine or to delete any files there.

10.3.2.2) Common FTP Commands

FTP Command — Meaning
? — to request help or information about the FTP commands
ascii — to set the mode of file transfer to ASCII (this is the default and transmits seven bits per character)
binary — to set the mode of file transfer to binary (the binary mode transmits all eight bits per byte and thus provides less chance of a transmission error and must be used to transmit files other than ASCII files)
bye — to exit the FTP environment (same as quit)
close — to terminate a connection with another computer

cd — to change directory on the remote machine
delete — to delete (remove) a file in the current remote directory (same as rm in UNIX)
get — to copy one file from the remote machine to the local machine
        > get ABC DEF --- copies file ABC in the current remote directory to a file named DEF in your current local directory
help — to request a list of all available FTP commands
lcd — to change directory on your local machine (same as UNIX cd)
ls — to list the names of the files in the current remote directory
mget — to copy multiple files from the remote machine to the local machine
        > mget * --- copies all the files in the current remote directory to your current local directory, using the same filenames
mput — to copy multiple files from the local machine to the remote machine; you are prompted for a y/n answer before transferring each file
mkdir — to make a new directory within the current remote directory
put — to copy one file from the local machine to the remote machine

pwd — to find out the pathname of the current directory on the remote machine
quit — to exit the FTP environment (same as bye)
rmdir — to remove (delete) a directory in the current remote directory
open — to open a connection with another computer
        ftp> open carma.com

10.4. Service Manager : chkconfig, ntsysv, xinetd

10.4.1. ChkConfig
* Chkconfig updates and queries runlevel information for system services.
* chkconfig provides a simple command-line tool for maintaining the /etc/rc[0-6].d directory hierarchy by relieving system administrators of the task of directly manipulating the numerous symbolic links in those directories.
* Chkconfig has five distinct functions:
        1. Adding new services for management
        2. Removing services from management
        3. Listing the current startup information for services
        4. Changing the startup information for services, and
        5. Checking the startup state of a particular service.

* When chkconfig is executed without any options, it displays usage information. If only a service name is given, it checks to see if the service is configured to be started in the current runlevel.
* If it is, chkconfig returns true; otherwise it returns false. The --level option may be used to have chkconfig query an alternative runlevel rather than the current one.
* If one of on, off, or reset is specified after the service name, chkconfig changes the startup information for the specified service.
* The on and off flags cause the service to be started or stopped, respectively, in the runlevels being changed.
* The reset flag resets the startup information for the service to whatever is specified in the init script in question.
* By default, the on and off options affect only runlevels 2, 3, 4, and 5, while reset affects all of the runlevels. The --level option may be used to specify which runlevels are affected.
* Chkconfig requires the chkconfig rpm installed on the server. To see the version of the rpm installed, use
        $ rpm -qa chkconfig

10.4.1.1). Chkconfig commandline Usage
* Note that for every service, each runlevel has either a start script or a stop script. When switching runlevels, init will not re-start an already-started service, and will not re-stop a service that is not running.
* Command usage
        $ chkconfig --list [name]
        $ chkconfig --add name
        $ chkconfig --del name
        $ chkconfig [--level levels] name <on|off|reset>
        $ chkconfig [--level levels] name

  where name is the name of the service to be configured.
* Examples:
        $ chkconfig --list nfs
        $ chkconfig --add nfs
        $ chkconfig --level 1235 nfs on
        $ chkconfig --del nfs

10.4.2. Ntsysv
* Ntsysv provides a simple interface for setting which system services are started or stopped in various runlevels (instead of directly manipulating the numerous symbolic links in /etc/rc.d). It again uses chkconfig for its configuration.
        $ ntsysv
* By default it configures the current runlevel. To configure other runlevels, you can use the option below, which will set the services for runlevels 2, 3 and 5.
        $ ntsysv --level 235

10.4.3. Xinetd Services
* To control access to network services, you can use xinetd, a secure replacement for inetd.
* The xinetd daemon conserves system resources, provides access control and logging, and can be used to start special-purpose servers.
* xinetd can be used to
        o provide access only to particular hosts.
        o deny access to particular hosts.
        o provide access to a service at certain times.

        o limit the rate of incoming connections.
        o limit the load created by connections, etc.
* xinetd runs constantly and listens on all of the ports for the services it manages. When a connection request arrives for one of its managed services, xinetd starts up the appropriate server for that service.
* The configuration file for xinetd is /etc/xinetd.conf, but you'll notice upon inspection of the file that it just contains a few defaults and an instruction to include the /etc/xinetd.d directory.
* To enable or disable a xinetd service, edit its configuration file in the /etc/xinetd.d directory.
* If the disable attribute is set to yes, the service is disabled.
* If the disable attribute is set to no, the service is enabled.
* If you edit any of the xinetd configuration files or change its enabled status using ntsysv or chkconfig, you must restart xinetd with the command service xinetd restart before the changes will take effect.
        $ /etc/rc.d/init.d/xinetd stop/start/restart
* More about xinetd is discussed in a later section on Security -> TCP wrappers and Xinetd.

10.5. Telnet Program
* Telnet is a program that allows users to log into your server and get a command prompt just as if they were logged into the console.
* Telnet is installed and enabled by default on RedHat Linux.
* One of the disadvantages of Telnet is that the data is sent as clear text. This means that it is possible for someone to use a network analyzer to peek into your data packets and see your username and password.
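Each service managed by xinetd has its own file under /etc/xinetd.d carrying the disable attribute described above. As a hedged sketch of the telnet service file that the next paragraphs refer to (attribute values are typical but may differ on your distribution), such a file might look like this:

        service telnet
        {
                flags           = REUSE
                socket_type     = stream
                wait            = no
                user            = root
                server          = /usr/sbin/in.telnetd
                log_on_failure  += USERID
                disable         = no
        }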

* Telnet is configured via xinetd. The configuration file is /etc/xinetd.d/telnet. Once the changes are made you need to restart the xinetd daemon.
        $ /etc/rc.d/init.d/xinetd restart
* You could telnet to a machine by using the telnet client program as,
        $ telnet <hostname/ipaddress> <port number>
* There are a lot more options available to the telnet command, which can be viewed with man telnet from a shell.

10.6. Dynamic Host Configuration Protocol (DHCP)
Dynamic Host Configuration Protocol (DHCP) is a network protocol for automatically assigning TCP/IP information to client machines.
* Each DHCP client connects to the centrally-located DHCP server, which returns that client's network configuration including IP address, gateway, and DNS servers.

10.6.1. Advantages of DHCP
* DHCP is useful for fast delivery of client network configuration.
* When configuring the client system, the administrator can choose DHCP and not have to enter an IP address, netmask, gateway, or DNS servers. The client retrieves this information from the DHCP server.
* DHCP is also useful if an administrator wants to change the IP addresses of a large number of systems. Instead of reconfiguring all the systems, he can just edit one DHCP configuration file on the server for the new set of IP addresses.
* If the DNS servers for an organization change, the changes are made on the DHCP server, not on the DHCP clients. Once the network is restarted on the clients (or the clients are rebooted), the changes will take effect.

10.6.2. DHCP server/Client
For a DHCP server, download and install the dhcp rpm package.

10.6.2.1). DHCP server configuration file
* The first step in configuring a DHCP server is to create the configuration file that stores the network information for the clients.
* The configuration file that it uses is /etc/dhcpd.conf.
* It allows you to define "pools" of TCP/IP addresses, which are then allocated to client PCs by the server.

10.6.2.2). DHCP communication between server-client
* The conversation between the DHCP client (the computer requesting an IP address) and the DHCP server (the computer responsible for assigning IP addresses) follows a specific pattern.
        o First, the client sends out a broadcast message asking DHCP servers to reply with an offer of an IP address. This is a DHCP Discover message. The DHCP standard allows multiple servers to reply with an offer. The Discover message can contain suggestions to the servers for an IP address and other IP parameters. Note that this is only a suggestion.
        o The second step in the process is for DHCP servers to respond to the Discover message with an Offer message. The Offer message contains, among other things, the IP address and the domain name server address the DHCP server is offering. It also contains a lease period. The lease period is an important part of the assignment process: the DHCP server "leases" you an IP address for a specific period of time. Once the lease expires, the IP address becomes available for others to use. If you are a permanent network user, your computer periodically renews its lease.
        o During the third step in the DHCP negotiation process, the client sends a DHCP Request message back to the DHCP server requesting a specific IP address. The request also includes something called the server identifier (usually the IP address of the DHCP server) as a check to confirm that the request is being made of the correct DHCP server. (More than one DHCP server can offer an address to the client.)
        o In the fourth and final step, the DHCP server sends a DHCP ACK message, acknowledging the IP address assignment.
* The DHCP process uses a protocol called BOOTP. This protocol was based upon Reverse Address Resolution Protocol (RARP), which was one of the first attempts to allocate network addresses dynamically. BOOTP (DHCP) rides upon User Datagram Protocol (UDP). As a result, delivery of DHCP messages is not guaranteed.
* There are two ways that a DHCP address can be put back into the pool. One way is for the lease to expire. The other way is for the client to send a Release message to the DHCP server.
* Messages targeted at the DHCP server are sent as broadcast messages with the special address of 255.255.255.255. Any messages with this destination address are intended to be "read" by all network devices. More than one DHCP server could respond to a DHCP Discover message, so these messages should be sent to everyone. Once the DHCP ACK message has been sent, the client may begin using the assigned IP address.

10.6.2.3). DHCP Client configuration
* To configure the DHCP client manually, you need to modify the /etc/sysconfig/network file as below to enable networking.

        NETWORKING=yes
* The /etc/sysconfig/network-scripts/ifcfg-eth0 file should contain the following lines:
        DEVICE=eth0
        BOOTPROTO=dhcp
        ONBOOT=yes

10.7. Linux Samba Server
Samba is a strong network service for file and printer sharing that works on the majority of operating systems available today.
* The package that Samba uses is samba-version.tar.gz
        Samba Homepage: http://us1.samba.org/samba/samba.html
        Samba FTP Site: 63.238.153.11
        You need to download: samba-2.0.7.tar.gz or samba-version.tar.gz
* The tar file needs to be uncompressed and the samba package configured and compiled.

10.7.1. Samba configuration file
* The configuration file that samba uses is /etc/smb.conf.
* In this file, you can specify which directory you want to access from Windows machines, which IP addresses are authorized, and so on.
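A minimal smb.conf might look like the sketch below; the workgroup name, share name, path, and user are assumptions chosen for illustration and should be adapted to your own setup:

        [global]
                workgroup = EDUCARMA
                security = user
                encrypt passwords = yes
                smb passwd file = /etc/smbpasswd

        [share]
                comment = Shared documents
                path = /home/share
                valid users = smbclient
                writable = yes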

* The first few lines of the file under the [global] line contain global configuration directives, which are common to all shares unless they are overridden on a per-share basis, followed by share sections.
        Reference url for the smb.conf file : http://www.faqs.org/docs/securing/chap29sec284.html

10.7.2. Samba password file for Clients
* The /etc/smbpasswd file is the Samba encrypted password file. It contains the username, Unix UID and SMB hashed passwords of the users allowed to access your Samba server, as well as account flag information and the time the password was last changed.
* It's important to create this password file and include all allowed users in it before your clients try to connect to your Samba server. Without this step, no one will be able to connect to your Samba server.
* To create a Samba account you must first have a valid Linux account for them, so create in your /etc/passwd file all the users you want to connect to your Samba server first, before generating the smbpasswd file of Samba.
* To add a new user to your /etc/passwd file, use the following commands:
        $ useradd smbclient
        $ passwd smbclient
* Once you have added all Samba clients in your /etc/passwd file on the Linux server, you now need to generate the smbpasswd file from the /etc/passwd file.
        Reference url : http://www.faqs.org/docs/securing/chap29sec286.html

10.8. Linux Proxy Server – Squid
* The utility squid is an internet proxy server that can be used within a network to distribute an internet connection to all the computers within the network.

* Because it is a proxy, it has the capability to log all user actions such as the URLs visited.

10.8.1. Squid Package and Config File
* Squid is installed from the squid rpm, or the package can be installed from the source tarball.
* Squid uses the config file /etc/squid/squid.conf. Access through the proxy can be given by individual IP addresses or by a subnet of IP addresses.
* In squid.conf search for the default access control lists (acl) and add the following lines below them:
        acl mynetwork src 192.168.1.0/255.255.255.0 (for a subnet)
        or
        acl mynetwork src 192.168.1.0/24 (for a subnet)
        acl mynetwork src 192.168.1.10/255.255.255.255 (for an individual IP)
* Then add the access control list named "mynetwork" to the http_access list with the following line:
        http_access allow mynetwork
* The default port for the proxy is 3128. Uncomment the following line and replace 3128 with the desired port:
        http_port 3128

10.8.2. Stopping, Starting and Restarting Squid
* Starting squid
        $ /etc/rc.d/init.d/squid start
* Restarting squid
        $ /etc/rc.d/init.d/squid restart
* Stopping squid
        $ /etc/rc.d/init.d/squid stop

10.8.3. Configuring squid Clients
* To configure any application, including a web browser, to use squid, modify the proxy setting with the IP address of the squid server and the port number (default 3128).

11. SECURING LINUX SYSTEMS
Linux was built from the ground up with security in mind. However, this security will amount to nothing if some basic security measures are not adopted.
"Security is not an option, but a way of life". This is the mantra given by Kurt Seifried, the author of the famed 'Linux Administrators Security Guide', which holds true for all linux systems.
This section will discuss various means with which you can secure the assets you have worked hard for: your local machine, your data, your users, your network.

11.1. Physical Security
* Physical security should be of the utmost concern. Linux production servers should be in locked datacenters where only people who have passed security checks have access.
* Since we assume that all Linux production systems are physically secured, we will not cover the configuration of a boot loader password. This could actually pose a problem for rebooting servers remotely.

11.2. Local Security
* Here we discuss the security of the system against attacks from local users.

* Getting access to a local user account is one of the first things that system intruders attempt while on their way to exploiting the root account.
* With lax local security, they can then "upgrade" their normal user access to root access using a variety of bugs and poorly set up local services.
* If you make sure your local security is tight, then the intruder will have another hurdle to jump.

11.2.1. Checking for Unlocked Accounts
* It is important that all system and vendor accounts that are not used for logins are locked. Since no one is using them, they provide the ideal attack vehicle.
* To get a list of unlocked accounts on your system, you can check for accounts that do NOT have an encrypted password string starting with "!" or "*" in the /etc/shadow file.
* If you lock an account using passwd -l, it will put a '!!' in front of the encrypted password, effectively disabling the password. If you lock an account using usermod -L, it will put a '!' in front of the encrypted password.
* Many system and shared accounts are usually locked by default by having a '*' or '!!' in the password field, which renders the encrypted password into an invalid string. Hence, to get a list of all unlocked accounts, run:
        $ egrep -v '.*:\*|:!' /etc/shadow | awk -F: '{print $1}'
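If that list turns up a system account that should never accept logins, it can be locked and the result verified; the account name games below is only a placeholder:

        $ usermod -L games
        $ grep '^games:' /etc/shadow     # the password field should now start with '!'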

* Also make sure all accounts have an 'x' in the password field in /etc/passwd. The following command lists all accounts that do not have an 'x' in the password field:
        $ grep -v ':x:' /etc/passwd
* An 'x' in the password field means that the password has been shadowed, i.e. the encrypted password has to be looked up in the /etc/shadow file.
* If the password field in /etc/passwd is empty, then the system will not look up the shadow file and it will not prompt the user for a password at the login prompt.

11.2.2. Checking for Unused Accounts
* All system or vendor accounts that are not being used by users, applications, the system or daemons should be removed from the system. You can use the following command to find out if there are any files owned by a specific account:
        $ find / -path /proc -prune -o -user <account> -ls
* The -prune option in this example is used to skip the /proc filesystem.
* If you are sure that an account can be deleted, you can remove the account using the following command:
        $ userdel -r <account>
* Without the "-r" option userdel will not delete the user's home directory and mail spool (/var/spool/mail/<user>). Note that many system accounts have no home directory.

11.3. Files and File system Security

11.3.1. Default Umask
* The umask (user file-creation mode mask) command is a shell built-in command which determines the default file permissions for newly created files. This can be overwritten by system calls but many programs and utilities make use of umask.
* Configure your users' file-creation umask to be as restrictive as possible.
* By default, Red Hat sets umask to 022 or 002, which is fine.
* If the name of the user account and the group account is the same and the UID is 100 or larger, then umask is set to 002, otherwise it's set to 022.

11.3.2. SUID/SGID Files
* There should never be a reason for users' home directories to allow SUID/SGID programs to be run from there.
* Use the nosuid option in /etc/fstab for partitions that are writable by others than root.
* You may also wish to use nodev and noexec on /tmp partitions, as well as /var/tmp, thus prohibiting execution of programs, and creation of character or block devices, which should never be necessary anyway.
* SUID and SGID files on your system are a potential security risk, and should be monitored closely. Because these programs grant special privileges to the user who is executing them, it is necessary to ensure that insecure programs are not installed.
* A favorite trick of crackers is to exploit SUID-root programs, then leave a SUID program as a back door to get in the next time, even if the original hole is plugged.
* Find all SUID/SGID programs on your system, and keep track of what they are, so you are aware of any changes which could indicate a potential intruder. Use the following command to find all SUID/SGID files on your system:
        $ find / -path /proc -prune -o -type f -perm +6000 -ls

11.3.3. World-Writable Files
* World-writable files are a security risk since they allow anyone to modify them. Additionally, world-writable directories allow anyone to add or delete files.
* To locate world-writable files and directories, you can use the following command:
        $ find / -path /proc -prune -o -perm -2 ! -type l -ls
* The "! -type l" parameter skips all symbolic links since symbolic links are always world-writable. However, this is not a problem as long as the target of the link is not world-writable, which is checked by the above find command.
* World-writable directories with the sticky bit set, such as the /tmp directory, do not allow anyone to delete or modify files in the directory.
* The sticky bit makes files stick to the user who created them and prevents other users from deleting and renaming the files. Therefore, depending on the purpose of the directory, world-writable directories with the sticky bit are usually not an issue. An example is the /tmp directory.

11.3.4. Setting File System Limits
* Set file system limits instead of allowing unlimited use, as is the default. You can control the per-user limits using the resource-limits PAM module and /etc/pam.d/limits.conf. For example, limits for group users might look like this:
        @users  hard  core   0
        @users  hard  nproc  50
        @users  hard  rss    5000
* This says to prohibit the creation of core files, restrict the number of processes to 50, and restrict memory usage per user to 5M.

11.3.5. Unowned Files
* Unowned files may also be an indication that an intruder has accessed your system.
* You can locate files on your system that have no owner, or belong to no group, with the command:
        $ find / -path /proc -prune -o -nouser -o -nogroup

11.3.6. Protecting Binaries like Compilers
* The immutable bit can be used to prevent accidentally deleting or overwriting a file that must be protected, like some of the system binaries.

* It also prevents someone from creating a hard link to the file. For example, setting the gcc compiler to immutable is a good idea:
        $ chattr +ia /usr/bin/gcc

11.3.7. Integrity Checking
* Another very good way to detect local (and also network) attacks on your system is to run an integrity checker like Tripwire or ChkRootkit.
* These integrity checkers run a number of checksums on all your important binaries and config files and compare them against a database of former, known-good values as a reference. Thus, any changes in the files will be flagged.
* It's a good idea to run it as part of your normal security administration duties to see if anything has changed, as part of a cron job like the ones below.
        # set mailto [email protected]
        # run Tripwire
        15 05 * * * root /usr/local/adm/tcheck/tripwire
        OR
        # set mailto [email protected]
        # run chkrootkit
        15 05 * * * sh /root/chkrootkit-3.2/chkrootkit
* You can find the freely available unsupported version of Tripwire at http://www.tripwire.org, free of charge.

* And the latest version of chkrootkit is available for download from http://www.chkrootkit.org/download/

11.3.8. Trojan Horses, Backdoors and Rootkits
* Trojan horses are malicious, security-breaking programs that are disguised as something benign. The idea is that a cracker distributes a program or binary that sounds great, and encourages other people to download it and run it as root. Then the program can compromise their system.
* The crackers who thereby gain access to the system can create backdoors which will later allow them to re-enter the system.
* Furthermore, they may also use Trojan rootkits to hide the Trojan horse, such as a "trojaned" /bin/ps to hide their daemons.
* Usually on a trojan horse infected machine, intruders often replace the important system binaries such as ps, fuser, netstat, lsmod, find, cp, mv, kill etc. with Trojan horse equivalents, so do not use these tools on the machine you are investigating unless you have verified that they haven't been altered or replaced.
* To find out if a system is infected by a trojan or backdoor rootkit, it may be necessary to check currently running daemons to see whether a Trojan horse has infected them.
* For example, the syslogd process could have been compromised and, instead of the valid daemon running on UDP port 514, there could be a Trojan daemon on that port.
* Therefore, all binaries or running daemons may need to be checked for validity. This could become a time-consuming task that may not be worth the time or money.
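On an rpm-based system, one quick (though not tamper-proof, since the RPM database itself can be modified by an intruder) sanity check is rpm's verify mode; the package name below is only an example:

        $ rpm -V fileutils          # verify one package against the RPM database
        $ rpm -Va | grep '^..5'     # list every installed file whose MD5 checksum has changed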

* This task could be made easier by using open source third-party tools which can detect a Trojan horse, or to supplement the toolkit of well-known binaries.
* One such tool is called chkrootkit (http://www.chkrootkit.org), which can detect a rootkit that has been installed as part of the Trojan horse.
* Chkrootkit looks for known "signatures" in trojaned system binaries. It can detect rootkits such as the Ramen Worm, the T0rn rootkit, or the Ambient's Rootkit for Linux, just to name a few. It can also detect promiscuous interfaces.
* Back Orifice and NetBus are two popular trojans that affect linux machines.
* An interesting reference url on trojan horses is : http://www.samag.com/documents/s=7467/sam0208e/0208e.htm

11.3.8.1). Nmap tool
* Nmap is a network exploration tool and security scanner.
* Nmap is designed to allow system administrators and curious individuals to scan large networks to determine which hosts are up and what services they are offering.
* Depending on the options used, nmap may also report the following characteristics of the remote host: OS in use, TCP sequence predictability, usernames running the programs which have bound to each port, the DNS name, whether the host is a smurf address, and a few others.

* For example, consider the instance where a machine is infected by a trojan horse, and we need to check if the trojan is listening on any port on the machine (usually a backdoor to re-enter the machine later).
* The following options can be used with the nmap command line to scan for all open tcp and udp ports on the machine.
        $ /toolkit/nmap -sU -sS -p 1-65535 localhost

Here is an example of the results:

        Starting nmap V. 2.54BETA22 ( www.insecure.org/nmap/ )
        Interesting ports on localhost.localdomain (127.0.0.1):
        (The 131048 ports scanned but not shown below are in state: closed)
        Port       State    Service
        Unable to find nmap-services!  Resorting to /etc/services
        25/tcp     open     smtp
        53/tcp     open     domain
        53/udp     open     domain
        80/tcp     open     http
        110/tcp    open     pop3
        111/tcp    open     sunrpc
        111/udp    open     sunrpc
        137/udp    open     netbios-ns
        138/udp    open     netbios-dgm
        139/tcp    open     netbios-ssn
        143/tcp    open     imap
        389/tcp    open     ldap
        443/tcp    open     https
        515/tcp    open     printer
        617/tcp    open     unknown
        5222/tcp   open     unknown
        5269/tcp   open     unknown
        8383/tcp   open     unknown
        10000/udp  open     unknown
        19635/tcp  open     unknown
        35737/udp  open     unknown

* Nmap automatically tries to map port numbers to service names in /etc/services, but it returns "unknown" if it doesn't find anything.
* If, after checking your nmap results, you don't recall anything on your machine that should be listening on tcp port 19635, you can find out by using the fuser command.
* To determine which process is running on this port number, run the following:
        $ fuser -vn tcp 19635
        $ /toolkit/fuser -vn tcp 19635
                          USER   PID    ACCESS  COMMAND
        19635/tcp         root   32444  f....   http
* This indicates that there is a process named "http" running with PID 32444 and listening on port 19635. This http process is not the Apache Web server. If we missed this before, we would now know that the Trojan horse disguised itself by blending in with the multiple valid httpd processes running on the machine.

11.4. Password Security and Encryption

* One of the most important security features used today is passwords. It is important for both you and all your users to have secure, unguessable passwords.
* Most of the more recent Linux distributions include passwd programs that do not allow you to set an easily guessable password. Make sure your passwd program is up to date and has these features.

11.4.1. Encryption Methods

11.4.1.1). DES (Data Encryption Standard)
* Most Unix/Linux systems primarily use a one-way encryption algorithm, called DES (Data Encryption Standard), to encrypt your passwords.
* This encrypted password is then stored in (typically) /etc/passwd or (less commonly) in /etc/shadow.
* When you attempt to login, the password you type in is encrypted again and compared with the entry in the file that stores your passwords.
* If they match, it must be the same password, and you are allowed access. Although DES is a two-way encryption algorithm (you can code and then decode a message, given the right keys), the variant that most Unixes use is one-way.
* This means that it should not be possible to reverse the encryption to get the password from the contents of /etc/passwd (or /etc/shadow).

11.4.1.2). PGP and Public-Key Cryptography
* Public-key cryptography, such as that used for PGP, uses one key for encryption, and one key for decryption.

* To alleviate the need to securely transmit the encryption key, public-key encryption uses two separate keys: a public key and a private key.
* Each person's public key is available to anyone to do the encryption, while at the same time each person keeps his or her private key to decrypt messages encrypted with the correct public key.
* PGP (Pretty Good Privacy) is well-supported on Linux. GnuPG is a complete and free replacement for PGP and is in compliance with OpenPGP.

11.4.2. Authentication Methods

11.4.2.1). PAM - Pluggable Authentication Modules
* PAM is an authentication scheme that allows you to change your authentication methods and requirements on the fly, and encapsulate all local authentication methods without recompiling any of your binaries.
* Some of the things that can be done with PAM are:
        o Use encryption other than DES for your passwords (making them harder to brute-force decode).
        o Set resource limits on all your users so they can't perform denial-of-service attacks (number of processes, amount of memory, etc).
        o Enable shadow passwords on the fly.

        o Allow specific users to login only at specific times from specific places.

11.4.2.2). Cryptographic IP Encapsulation (CIPE)
* The primary goal of this software is to provide a facility for secure (against eavesdropping, including traffic analysis, and faked message injection) subnetwork interconnection across an insecure packet network such as the Internet.
* CIPE encrypts the data at the network level. Packets traveling between hosts on the network are encrypted.
* The encryption engine is placed near the driver which sends and receives packets.
* This is unlike SSH, which encrypts the data by connection, at the socket level. A logical connection between programs running on different hosts is encrypted.
* CIPE can be used in tunnelling, in order to create a Virtual Private Network. Low-level encryption has the advantage that it can be made to work transparently between the two networks connected in the VPN, without any change to application software.

11.4.2.3). Kerberos
* Kerberos is an authentication system developed by the Athena Project at MIT.
* When a user logs in, Kerberos authenticates that user (using a password), and provides the user with a way to prove her identity to other servers and hosts scattered around the network.

    *   This authentication is then used by programs such as rlogin to allow the user to log in to other hosts without a password (in place of the .rhosts file).
    *   Kerberos and the other programs that come with it prevent users from "spoofing" the system into believing they are someone else.

11.4.3. Enforcing Stronger Passwords

    *   It is important to restrict people from using simple passwords that can be cracked too easily. However, if the passwords being enforced are too strong, people start writing them down, and strong passwords that are written down are not much safer than weak passwords.
    *   The pam_cracklib module checks the password against dictionary words and other constraints. The following example shows how to enforce these password rules:
        -   Minimum length of password must be 8
        -   Minimum number of lower case letters must be 1
        -   Minimum number of upper case letters must be 1
        -   Minimum number of digits must be 1
        -   Minimum number of other characters must be 1

        pam_cracklib.so minlen=8      Minimum length of password is 8
        pam_cracklib.so lcredit=-1    Minimum number of lower case letters is 1
        pam_cracklib.so ucredit=-1    Minimum number of upper case letters is 1
        pam_cracklib.so dcredit=-1    Minimum number of digits is 1
        pam_cracklib.so ocredit=-1    Minimum number of other characters is 1

    *   To set up these password restrictions, edit the /etc/pam.d/system-auth file and add/change the pam_cracklib arguments shown on the "password requisite" line below:

        auth        required      /lib/security/$ISA/pam_env.so
        auth        sufficient    /lib/security/$ISA/pam_unix.so likeauth nullok
        auth        required      /lib/security/$ISA/pam_deny.so
        account     required      /lib/security/$ISA/pam_unix.so
        account     sufficient    /lib/security/$ISA/pam_succeed_if.so uid < 100 quiet
        account     required      /lib/security/$ISA/pam_permit.so
        password    requisite     /lib/security/$ISA/pam_cracklib.so retry=3 minlen=8 lcredit=-1 ucredit=-1 dcredit=-1 ocredit=-1
        password    sufficient    /lib/security/$ISA/pam_unix.so nullok use_authtok md5 shadow
        password    required      /lib/security/$ISA/pam_deny.so
        session     required      /lib/security/$ISA/pam_limits.so
        session     required      /lib/security/$ISA/pam_unix.so

    *   Now verify that the new password restrictions work for new passwords.
    *   NOTE: The /etc/pam.d/system-auth PAM configuration file is auto-generated and contains records which dictate a generic authentication scheme; running the authconfig command will revert some of the changes you made here.
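    *   A quick way to verify the policy (a sketch; the exact prompts and error messages vary by distribution) is to try setting a deliberately weak password as an ordinary, non-root user, since root is normally allowed to override the checks:

        $ passwd
        New UNIX password: test
        BAD PASSWORD: it is too short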


11.4.4. Locking User Accounts After Many Login Failures

    *   The following example shows how to lock individual user accounts after too many failed su or login attempts.
    *   Add the two pam_tally lines shown below to the /etc/pam.d/system-auth file:

        auth        required      /lib/security/$ISA/pam_env.so
        auth        required      /lib/security/$ISA/pam_tally.so onerr=fail no_magic_root
        auth        sufficient    /lib/security/$ISA/pam_unix.so likeauth nullok
        auth        required      /lib/security/$ISA/pam_deny.so
        account     required      /lib/security/$ISA/pam_unix.so
        account     required      /lib/security/$ISA/pam_tally.so per_user deny=5 no_magic_root reset
        account     sufficient    /lib/security/$ISA/pam_succeed_if.so uid < 100 quiet
        account     required      /lib/security/$ISA/pam_permit.so
        password    requisite     /lib/security/$ISA/pam_cracklib.so retry=3
        password    sufficient    /lib/security/$ISA/pam_unix.so nullok use_authtok md5 shadow
        password    required      /lib/security/$ISA/pam_deny.so
        session     required      /lib/security/$ISA/pam_limits.so
        session     required      /lib/security/$ISA/pam_unix.so

    *   The first added line (the "auth ... pam_tally.so" entry) counts failed login and failed su attempts for each user. Failed attempts are recorded in /var/log/faillog by default.
    *   The second added line (the "account ... pam_tally.so" entry) locks accounts automatically after 5 failed login or su attempts (deny=5). The counter is reset to 0 (reset) on a successful login if deny=n was not exceeded.
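    *   The failure counters themselves can be inspected and cleared with the faillog utility from the shadow-utils package (a sketch; the user name is only a placeholder):

        # Show the current failure count for one user
        $ faillog -u someuser

        # Reset that user's counter, unlocking the account
        $ faillog -u someuser -r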


11.4.5. Restricting Direct Login for System/Shared Accounts

All users should log in directly with their own account and then switch to the system or shared account. It is always better to restrict direct login as root or as other system or shared accounts. This example shows how to restrict direct logins to system or shared accounts for:
    -   SSH (/etc/pam.d/sshd)
    -   Console Login (/etc/pam.d/login)
    -   or for all logins (/etc/pam.d/system-auth)

    *   To restrict direct SSH logins, add the pam_access module to /etc/pam.d/sshd as follows:

        auth       required     pam_stack.so service=system-auth
        auth       required     pam_nologin.so
        account    required     /lib/security/pam_access.so
        account    required     pam_stack.so service=system-auth
        password   required     pam_stack.so service=system-auth
        session    required     pam_stack.so service=system-auth

    *   For console logins, add the pam_access module to /etc/pam.d/login as follows:

        auth       required     pam_securetty.so
        auth       required     pam_stack.so service=system-auth
        auth       required     pam_nologin.so
        account    required     /lib/security/pam_access.so
        account    required     pam_stack.so service=system-auth
        password   required     pam_stack.so service=system-auth
        session    required     pam_selinux.so close
        session    required     pam_stack.so service=system-auth
        session    optional     pam_console.so
        session    required     pam_selinux.so multiple open

    *   For all logins, add the following line to the /etc/security/access.conf configuration file:

        -:ALL EXCEPT users :ALL


    *   The /etc/security/access.conf configuration file is read by the pam_access module. This entry specifies that no users are accepted except those in the "users" group.
    *   Since the pam_access module has been configured for "Authorization" (account) in the above PAM configuration files, it denies direct logins for all accounts except the ones that are in the "users" group.
    *   To disallow non-local logins to privileged accounts (group wheel), add the following entry to /etc/security/access.conf:

        -:wheel:ALL EXCEPT LOCAL server.hostname

11.4.6. Password Cracking/Brute Force Attack

    *   Password cracking programs work on a simple idea: they try every word in the dictionary, and then variations on those words, encrypting each one and checking it against your encrypted password. If they get a match, they know what your password is.
    *   A brute force attack consists of trying every possible code, combination, or password until you find the right one.

11.4.6.1). How a Brute Force Attack Works

        1.  Manual login attempts: the attacker types in a few likely usernames and passwords by hand.
        2.  Dictionary-based attacks: automated scripts and programs guess thousands of usernames and passwords from a dictionary file, sometimes with one file for usernames and another for passwords.


        3.  Generated logins: a cracking program generates random usernames according to rules set by the attacker, such as numbers only, combinations of numbers and letters, or other patterns.

11.4.6.2). Signs of a Brute Force Attempt

    *   You can easily spot a brute force attempt by checking your server's log file, /var/log/messages.
    *   You will see a series of failed login attempts for the service the attacker is trying to break into.
    *   A sample failed login is shown below:

        Apr 11 19:02:10 fox proftpd[6950]: yourserver (usersip[usersip]) - USER theusername (Login failed): Incorrect password.
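    *   A simple way to scan for such entries (a sketch; adjust the file names to wherever your distribution logs authentication messages, for example /var/log/secure on Red Hat systems):

        $ grep -i "login failed\|failed password\|authentication failure" /var/log/messages /var/log/secure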


11.4.6.3). Tools to Stop and Prevent Brute Force Attempts

    *   Never enable demo or guest accounts, as they will be the first way an attacker gets access to your system and exploits it further.
    *   Never have more than one user in the root group.
    *   Install the APF firewall and the Brute Force Detection (BFD) software, which is a modular shell script for parsing the applicable logs and checking for authentication failures.
    *   If BFD finds that authentication has failed the configured number of times for an application, it bans the offending IP address using the APF firewall.
    *   APF is a firewall built on iptables, with some nice features added that make it easy to use, including anti-DoS protection.
    *   BFD checks your logs every few minutes for multiple failed login attempts, based on a set of rules; if someone fails to log in X number of times, the IP is automatically banned at the firewall, preventing further attacks on your system.
    *   Together, the two make an excellent, automated brute force prevention package.

11.5. Network Security

    *   Network security is becoming more and more important as people spend more and more time connected. Compromising network security is often much easier than compromising physical or local security, and it is much more common.

11.5.1. Network Intruders and Attacks

11.5.1.1). Packet Sniffers

    *   One of the most common ways intruders gain access to more systems on a network is by running a packet sniffer on an already compromised host.
    *   This "sniffer" simply listens on the Ethernet port for things like passwd, login and su in the packet stream, and then logs the traffic that follows.
    *   This way, attackers gain passwords for systems they are not even attempting to break into. Clear-text passwords are very vulnerable to this attack.


    *   Example: Host A has been compromised and the attacker installs a sniffer. The sniffer picks up the admin logging in to Host B from Host C, capturing the admin's personal password.
    *   Then the admin does an su to fix a problem, and the attacker now has the root password for Host B. Later the admin telnets from his account to Host Z on another site, and the attacker now has a login and password on Host Z as well.
    *   Using ssh or another encrypted login method thwarts this attack. Things like APOP for POP accounts also prevent it. (Normal POP logins are very vulnerable to sniffing, as is anything that sends clear-text passwords over the network.)
    *   The safest way to counter this problem is to transmit data over a secure channel such as ssh, so that it travels over the network in encrypted form.
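    *   To see for yourself how exposed clear-text protocols are, tcpdump can print packet payloads in ASCII (a sketch; the interface name is an assumption, and root privileges are required):

        # Watch FTP traffic on eth0; USER and PASS commands appear in clear text
        $ tcpdump -A -i eth0 port 21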


11.5.1.2). Denial Of Service (DOS) Attacks

    *   A "denial-of-service" attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service.
    *   Examples include:
        o   attempts to "flood" a network, thereby preventing legitimate network traffic;
        o   attempts to disrupt connections between two machines, thereby preventing access to a service;
        o   attempts to prevent a particular individual from accessing a service;
        o   attempts to disrupt service to a specific system or person.
    *   Illegitimate use of resources may also result in denial of service. For example, an intruder may use your anonymous ftp area as a place to store illegal copies of commercial software, consuming disk space and generating network traffic.

Impact of DOS Attacks

    *   Denial-of-service attacks can essentially disable your computer or your network. Depending on the nature of your enterprise, this can effectively disable your organization.
    *   Some denial-of-service attacks can be executed with limited resources against a large, sophisticated site. This type of attack is sometimes called an "asymmetric attack."

Modes of Attack

Denial-of-service attacks come in a variety of forms and aim at a variety of services. There are three basic types of attack:
        1.  Consumption of scarce, limited, or non-renewable resources.
        2.  Destruction or alteration of configuration information.
        3.  Physical destruction or alteration of network components.

Most denial-of-service attacks fall into the first category; the ways in which they can exhaust a server's resources, and even crash it, are discussed below.


1. Consumption of Scarce Resources

    *   Denial-of-service attacks are most frequently executed against network connectivity. The goal is to prevent hosts or networks from communicating on the network. An example of this type of attack is the "SYN flood" attack.
    *   In this type of attack, the attacker begins the process of establishing a connection to the victim machine, but does it in such a way as to prevent the ultimate completion of the connection.
    *   In the meantime, the victim machine has reserved one of a limited number of data structures required to complete the impending connection. The result is that legitimate connections are denied while the victim machine is waiting to complete bogus "half-open" connections.
    *   Note that this type of attack does not depend on the attacker being able to consume your network bandwidth. Here the intruder is consuming kernel data structures involved in establishing a network connection.
    *   The implication is that an intruder can execute this attack from a dial-up connection against a machine on a very fast network. (This is a good example of an asymmetric attack.)
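    *   A rough way to spot a SYN flood in progress (a sketch; the exact column layout varies between netstat versions) is to look for a large number of connections stuck in the SYN_RECV state:

        $ netstat -tn | grep SYN_RECV | wc -l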


2. Using Your Own Resources Against You

    *   An intruder can also use your own resources against you in unexpected ways.
    *   An example is an attack in which the intruder uses forged UDP packets to connect the echo service on one machine to the chargen service on another machine.
    *   The result is that the two services consume all available network bandwidth between them. Thus, the network connectivity of all machines on the same networks as either of the targeted machines may be affected.

3. Bandwidth Consumption

    *   An intruder may also be able to consume all the available bandwidth on your network by generating a large number of packets directed at your network.
    *   Typically, these packets are ICMP ECHO packets, but in principle they may be anything.
    *   Further, the intruder need not be operating from a single machine; he may be able to coordinate or co-opt several machines on different networks to achieve the same effect, making it difficult to block the IP address.

4. Consumption of Other Resources

    *   In addition to network bandwidth, intruders may be able to consume other resources that your systems need in order to operate.
    *   For example, in many systems, a limited number of data structures are available to hold process information (process identifiers, process table entries, process slots, etc.).
    *   An intruder may be able to consume these data structures by writing a simple program or script that does nothing but repeatedly create copies of itself.
    *   Many modern operating systems have quota facilities to protect against this problem, but not all do.


    *   An intruder may also attempt to consume disk space in other ways, including generating excessive numbers of bogus mail messages to domains on the server, sometimes called "email bombing" or "spamming".
    *   In general, anything that allows data to be written to disk can be used to execute a denial-of-service attack if there are no bounds on the amount of data that can be written.
    *   Also, many sites have schemes in place to "lock out" an account after a certain number of failed login attempts. A typical setup locks out an account after 3 or 5 failed attempts.
    *   An intruder may be able to use such a scheme to prevent legitimate users from logging in. In some cases, even privileged accounts, such as root or administrator, may be subject to this type of attack.

If your systems are experiencing frequent crashes with no apparent cause, it could be the result of these types of DOS attacks.
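    *   One practical defence against the process-exhaustion ("fork bomb") attacks described under item 4 is the pam_limits module already present in the PAM configurations shown earlier, driven by /etc/security/limits.conf. A minimal sketch follows; the limit values are only examples and should be tuned for your site:

        # /etc/security/limits.conf: cap the number of processes per user
        *        soft    nproc   100
        *        hard    nproc   150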


Some of the more popular and recent DOS attacks are listed below.

    *   SYN Flooding - SYN flooding is a network denial-of-service attack. It takes advantage of a "loophole" in the way TCP connections are created, and is a common form of DOS attack against servers such as the Apache web server. Newer Linux kernels (2.0.30 and up) have several configurable options to prevent SYN flood attacks from denying people access to your machine or services.
    *   Pentium "F00F" Bug - It was discovered that a certain sequence of assembly code sent to a genuine Intel Pentium processor would reboot the machine. This affects every machine with a Pentium processor (not clones, not Pentium Pro or PII), no matter what operating system it is running. Linux kernels 2.0.32 and up contain a workaround for this bug, preventing it from locking up your machine.
    *   Ping Flooding - Ping flooding is a simple brute-force denial-of-service attack. The attacker sends a "flood" of ICMP packets to your machine. If they are doing this from a host with better bandwidth than yours, your machine will be unable to send anything on the network.
    *   A variation on this attack, called "smurfing", sends ICMP packets to a host with your machine's return IP, allowing the attacker to flood you less detectably. You can find more information about the "smurf" attack at http://www.quadrunner.com/~chuegen/smurf.txt
    *   If you are ever under a ping flood attack, use a tool like tcpdump to determine where the packets are coming from (or appear to be coming from), then contact your provider with this information. Ping floods can most easily be stopped at the router level or by using a firewall.
    *   Ping o' Death - The Ping o' Death attack sends ICMP ECHO REQUEST packets that are too large to fit in the kernel data structures intended to store them. Because sending a single, large (65,510 byte) "ping" packet to many systems would cause them to hang or even crash, this problem was quickly dubbed the "Ping o' Death." It has long since been fixed and is no longer anything to worry about.
    *   Teardrop / New Tear - One of the more recent exploits involves a bug in the IP fragmentation code on Linux and Windows platforms. It is fixed in kernel version 2.0.33 and does not require selecting any kernel compile-time options to make use of the fix. Linux is apparently not vulnerable to the "newtear" exploit.
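    *   As an illustration of the firewall approach to ping floods mentioned above, iptables can rate-limit incoming echo requests (a sketch; the one-per-second limit is an arbitrary example):

        # Accept at most one ICMP echo request per second, drop the excess
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP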


11.5.1.3). Attacks via IP Spoofing

    *   A spoofing attack involves forging one's source address; it is the act of using one machine to impersonate another. For example:

        Let your IP address be:                                   203.45.98.01  (REAL)
        Let the IP address of the victim computer be:             202.14.12.1   (VICTIM)
        Let the IP address you want data to appear to come from:  173.23.45.89  (FAKE)

    *   Normally, sitting at the computer whose IP is REAL, the datagrams you send to VICTIM will appear to have come from REAL. Now consider a situation in which you want to send packets to VICTIM and make it believe they came from the computer whose IP is FAKE, i.e. 173.23.45.89. This is when you perform IP spoofing.
    *   Many of the applications and tools in UNIX rely on source IP address authentication, and many developers have used host-based access controls to secure their networks.
    *   The source IP address is a unique identifier but not a reliable one: it can easily be spoofed using tools such as the following.
        o   Mendax for Linux: an easy-to-use tool for TCP sequence number prediction and rshd spoofing.
        o   Ipspoof: a TCP and IP spoofing utility.
        o   Hunt: a sniffer which also offers many spoofing functions.
        o   Dsniff: a collection of tools for network auditing and penetration testing. dsniff, filesnarf, mailsnarf, msgsnarf, urlsnarf, and webspy passively monitor a network for interesting data (passwords, e-mail, files, etc.), while arpspoof, dnsspoof, and macof facilitate the interception of network traffic.
    *   IP spoofing, then, is the process of changing (spoofing) your IP address so as to fool the target system into believing that you are the computer with the spoofed address rather than the one you actually are.


IP Spoofed Attack is a Blind Attack

    *   By a blind attack we mean that the attacker does not get any messages or feedback regarding his progress.
    *   When an attacker performs IP spoofing, there is no mechanism that tells him whether he has been successful or not; if yes, to what extent, and if not, what went wrong.
    *   Using the assumptions made earlier, the problem can be explained as follows.
    *   The main problem with IP spoofing is that even if you (REAL) are able to send a spoofed datagram to the remote host (VICTIM), making it believe the datagram came from FAKE, the remote host (VICTIM) will reply to the spoofed IP address (FAKE) and not to your real IP address (REAL). As a result, REAL gets no feedback whatsoever on his progress.
    *   The blind nature of IP spoofing can be explained using the three-way handshake that takes place each time a TCP/IP connection is established.
    *   If REAL wants to establish a TCP/IP connection with VICTIM without spoofing any IP address, the three-way handshake typically takes place as follows:
        1.  REAL sends a SYN packet to VICTIM.
        2.  VICTIM sends back a SYN/ACK packet to REAL.
        3.  REAL acknowledges this by replying with an ACK packet.


    *   However, if REAL wants to spoof his IP address and make it appear to be FAKE, the following takes place:
        1.  REAL sends a SYN packet to VICTIM, but this time with the source address set to FAKE.
        2.  VICTIM sends back a SYN/ACK packet to FAKE. There is no way for REAL to determine when and if VICTIM has actually replied with a SYN/ACK addressed to FAKE. This is the blind part: REAL just has to let some time pass (once it has sent the SYN packet to VICTIM) and assume that by then VICTIM must have sent a SYN/ACK to FAKE.
        3.  After some time has passed, REAL then sends an ACK packet to VICTIM, acknowledging on FAKE's behalf that the SYN/ACK packet was received (assuming that it indeed was).

Measures to prevent IP spoofed attacks:

    *   Avoid relying on source address authentication; implement cryptographic authentication system-wide.
    *   Configure your network to reject packets from the Internet that claim to originate from a local address. This is most commonly done with a router or with a firewall such as APF or Bastille.
    *   If you allow outside connections from trusted hosts, enable encrypted sessions at the router.
    *   Spoofed attacks are very dangerous and difficult to detect, and they are becoming more and more popular.
    *   The only way to prevent these attacks is to implement security measures such as encrypted authentication across your network.
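    *   A minimal sketch of the "reject local addresses arriving from outside" rule using iptables (eth0 and 192.168.1.0/24 are placeholders for your external interface and internal address range):

        # Drop packets arriving on the external interface that claim an internal source
        iptables -A INPUT -i eth0 -s 192.168.1.0/24 -j DROP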


11.5.2. TCP Wrappers and xinetd

    *   Controlling access to network services is one of the most important security tasks facing a server administrator.
    *   Fortunately, under Red Hat Linux there are a number of tools which do just that. For instance, an iptables-based firewall filters out unwelcome network packets within the kernel's network stack.
    *   For network services that utilize them, TCP wrappers add an additional layer of protection by defining which hosts are or are not allowed to connect to "wrapped" network services.
    *   One such wrapped network service is the xinetd super server. This service is called a super server because it controls connections to a subset of network services and further refines access control.
    *   These tools work together in layers to protect network services.

11.5.2.1). Controlling DOS Attacks Via Xinetd

The xinetd daemon can add a basic level of protection from Denial of Service (DoS) attacks. Below is a list of directives which can be used in /etc/xinetd.conf to limit the effectiveness of such attacks:

    *   per_source - Defines the maximum number of instances for a service per source IP address. It accepts only integers as an argument and can be used both in xinetd.conf and in the service-specific configuration files in the xinetd.d/ directory.
    *   cps - Defines the maximum number of connections per second. This directive takes two integer arguments separated by white space. The first is the maximum number of connections allowed to the service per second; the second is the number of seconds xinetd must wait before re-enabling the service. It can be used both in xinetd.conf and in the service-specific configuration files in the xinetd.d/ directory.
    *   max_load - Defines the CPU usage threshold for a service. It accepts a floating point number as argument.
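    *   A sketch of how these directives might look in a service-specific file (the service name and values are only examples):

        # /etc/xinetd.d/telnet (fragment)
        service telnet
        {
                ...
                per_source      = 10
                cps             = 25 30
        }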


11.5.3. SATAN, ISS, and Other Network Scanners

    *   There are a number of software packages that do port- and service-based scanning of machines or networks.
    *   SATAN, ISS, SAINT, and Nessus are some of the better-known ones. These tools connect to the target machine (or to all target machines on a network) on as many ports as they can and try to determine what service is running there.
    *   Based on this information, you can tell whether the machine is vulnerable to a specific exploit for that service.
    *   SATAN (Security Administrator's Tool for Analyzing Networks) is a port scanner with a web interface. It can be configured to do light, medium, or strong checks on a machine or a network of machines. It is a good idea to get SATAN, scan your machine or network, and fix the problems it finds. Make sure you get your copy of SATAN from metalab or another reputable FTP or web site.
    *   ISS (Internet Security Scanner) is another port-based scanner. It is faster than SATAN, and thus might be better for large networks. However, SATAN tends to provide more information.
    *   SAINT is an updated version of SATAN. It is web-based and has many more up-to-date tests than SATAN. You can find out more about it at: http://www.wwdsi.com/~saint
    *   Nessus is a free security scanner. It has a GTK graphical interface for ease of use and is designed with a very nice plug-in setup for new port-scanning tests. For more information, take a look at: http://www.nessus.org

11.5.3.1). Detecting Port Scans

    *   There are tools designed to alert you to probes by SATAN, ISS and other scanning software, in case an intruder is trying to exploit your machine.
    *   However, if you use tcp_wrappers liberally and look over your log files (/var/log/messages) regularly, you should be able to notice such probes.
    *   Even on the lowest setting, SATAN still leaves traces in the logs on a stock Red Hat system.
    *   There are also "stealth" port scanners. A packet with the TCP ACK bit set (as is used on established connections) will likely get through a packet-filtering firewall. The returned RST packet from a port that had no established session can be taken as proof of life on that port. TCP wrappers will not detect this.

11.5.4. Securing SSH

    *   Many network services like telnet, rlogin, and rsh are vulnerable to eavesdropping, which is one of several reasons why SSH should be used instead.
    *   Red Hat's default configuration for SSH meets the security requirements of most environments. However, a few security tweaks can be made, as described below.


    *   /etc/ssh/sshd_config: It is advisable to disable direct root login at the SSH layer as well, by setting the following parameter in the configuration file:

        PermitRootLogin no

    *   You may also disable TCP forwarding and sftp if you do not use them:

        AllowTcpForwarding no
        #Subsystem sftp /usr/lib/misc/sftp-server

    *   Since SSH protocol version 1 is not as secure as protocol version 2, you may want to limit the protocol to version 2 only by setting the following parameter:

        Protocol 2

    *   After changing any parameter, make sure to restart sshd:

        $ /etc/rc.d/init.d/sshd restart
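    *   Before restarting, it can be worth checking the edited file for syntax errors; the sshd daemon has a test mode for this (a quick sketch):

        # Test the configuration; normally no output means no errors were found
        $ /usr/sbin/sshd -t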


11.5.5. Securing NFS

NFS (Network File System) allows servers to share files over a network. But like all network services, using NFS involves risk. Here are some basic rules:

    *   NFS should not be enabled if it is not needed.
    *   If you must use NFS, use TCP wrappers to restrict remote access.
    *   Make sure you export only to those machines that really need it.
    *   Use fully qualified domain names to diminish spoofing attempts.
    *   Export only the directories you need to export.
    *   Export read-only wherever possible.
    *   Use NFS over TCP.
    *   If you do not have shared directories to export, ensure that the NFS service is NOT enabled and running:

        $ service nfs status
        rpc.mountd is stopped
        nfsd is stopped
        rpc.rquotad is stopped

        $ chkconfig --list nfs
        nfs    0:off   1:off   2:off   3:off   4:off   5:off   6:off
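    *   If the checks above show NFS running or enabled, it can be stopped and removed from the boot sequence (a sketch using the standard Red Hat service tools):

        $ service nfs stop
        $ chkconfig nfs off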


    *   You probably do not need the portmap service either, which is used by NFS (the portmap daemon registers RPC-based services such as NFS and NIS):

        $ service portmap status
        portmap is stopped

        $ chkconfig --list portmap
        portmap    0:off   1:off   2:off   3:off   4:off   5:off   6:off

11.5.5.1). Restricting Incoming NFS Requests

    *   A recommended security strategy is to block all incoming requests by default, but allow specific hosts or networks to connect.
    *   The portmap program and some of the NFS programs include a built-in TCP wrapper. To verify whether a program includes a TCP wrapper, you can run the following commands:

        $ strings /sbin/portmap | egrep "hosts.deny|hosts.allow|libwrap"
        hosts_allow_table
        hosts_deny_table
        /etc/hosts.allow
        /etc/hosts.deny

        $ strings /usr/sbin/rpc.rquotad | egrep "hosts.deny|hosts.allow|libwrap"
        libwrap.so.0

        $ ldd /usr/sbin/rpc.rquotad | grep libwrap
        libwrap.so.0 => /usr/lib/libwrap.so.0 (0x00874000)

    *   If hosts.deny and hosts.allow are displayed, or if libwrap is displayed, then the program includes a built-in TCP wrapper. If none of these strings are displayed, then adding the program name to /etc/hosts.deny and /etc/hosts.allow will have no effect.


    *   To block all incoming requests by default, add the following line to /etc/hosts.deny if you have not done so already:

        ALL: ALL

    *   Verify from a remote server that portmapper does not list any registered RPC programs:

        $ rpcinfo -p <server>
        No remote programs registered.

    *   To allow NFS requests from, for example, the servers rac1pub, rac2pub and rac3pub and from the .subnet.example.com network, the configuration in /etc/hosts.allow would look as follows:

        portmap:     rac1pub rac2pub rac3pub .subnet.example.com
        rpc.mountd:  rac1pub rac2pub rac3pub .subnet.example.com
        rpc.rquotad: rac1pub rac2pub rac3pub .subnet.example.com

    *   For portmapper you can now test access from trusted servers or networks using the rpcinfo command:

        $ rpcinfo -p <server>
           program vers proto   port
            100000    2   tcp    111  portmapper
            100000    2   udp    111  portmapper
            100011    1   udp    607  rquotad
            100011    2   udp    607  rquotad
            100011    1   tcp    610  rquotad
            100011    2   tcp    610  rquotad
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100005    1   udp    623  mountd
            100005    1   tcp    626  mountd
            100005    2   udp    623  mountd
            100005    2   tcp    626  mountd
            100005    3   udp    623  mountd
            100005    3   tcp    626  mountd

    *   If you run it from an "untrusted" server or network, you should get the following output:

        $ rpcinfo -p <server>
        No remote programs registered.

11.5.6. Kernel Tunable Security Parameters

The following section discusses tunable kernel parameters that you can use to secure your Linux server against attacks.

    *   For each tunable kernel parameter we will show the entry that needs to be added to the /etc/sysctl.conf configuration file to make the change permanent across reboots.


    *   To activate the configured kernel parameters immediately at runtime, use:

        $ sysctl -p
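    *   Individual parameters can also be inspected or changed on the fly, either with sysctl or through the /proc filesystem (a sketch using the SYN cookie parameter discussed below):

        # Show the current value
        $ sysctl net.ipv4.tcp_syncookies

        # Equivalent view through /proc
        $ cat /proc/sys/net/ipv4/tcp_syncookies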


11.5.6.1). Enable TCP SYN Cookie Protection

    *   A "SYN attack" is a denial-of-service attack that consumes all the resources on a machine. Any server that is connected to a network is potentially subject to this attack.
    *   To enable TCP SYN cookie protection, edit the /etc/sysctl.conf file and add the following line:

        net.ipv4.tcp_syncookies = 1

11.5.6.2). Disable IP Source Routing

    *   Source routing is used to specify a path or route through the network from source to destination. This feature can be used by network engineers to diagnose problems.
    *   However, if an intruder were able to send a source-routed packet into the network, he could intercept the replies, and your server might not know that it is not communicating with a trusted server.
    *   To disable acceptance of source-routed packets, edit the /etc/sysctl.conf file and add the following line:

        net.ipv4.conf.all.accept_source_route = 0

11.5.6.3). Disable ICMP Redirect Acceptance

    *   ICMP redirects are used by routers to tell a server that there is a better path to other networks than the one chosen by the server.
    *   However, an intruder could potentially use ICMP redirect packets to alter the host's routing table, causing traffic to use a path you did not intend.
    *   To disable ICMP redirect acceptance, edit the /etc/sysctl.conf file and add the following line:

        net.ipv4.conf.all.accept_redirects = 0

11.5.6.4). Enable IP Spoofing Protection

    *   IP spoofing is a technique in which an intruder sends out packets which claim to be from another host by manipulating the source address.
    *   IP spoofing is very often used for denial-of-service attacks.
    *   To enable IP spoofing protection, turn on source address verification. Edit the /etc/sysctl.conf file and add the following line:

        net.ipv4.conf.all.rp_filter = 1

11.5.6.5). Enable Ignoring of ICMP Requests

    *   If you want or need Linux to ignore ping requests, edit the /etc/sysctl.conf file and add the following line:

        net.ipv4.icmp_echo_ignore_all = 1

    *   This may not be possible in many environments.


11.5.6.6). Enable Ignoring of Broadcast Requests

    *   If you want or need Linux to ignore broadcast requests, edit the /etc/sysctl.conf file and add the following line:

        net.ipv4.icmp_echo_ignore_broadcasts = 1

11.5.6.7). Enable Bad Error Message Protection

    *   To be alerted about bad error messages on the network, edit the /etc/sysctl.conf file and add the following line:

        net.ipv4.icmp_ignore_bogus_error_responses = 1

11.5.6.8). Enable Logging of Spoofed/Source Routed/Redirect Packets

    *   To turn on logging for spoofed packets, source-routed packets, and redirect packets, edit the /etc/sysctl.conf file and add the following line:

        net.ipv4.conf.all.log_martians = 1

References for Kernel Tunable Parameters

http://www.linuxsecurity.com/content/view/111337/65/
http://www.linuxexposed.com/internal.php?op=modload&name=News&file=article&sid=550332

Copyright © 2005 EduCARMA