8/3/2019 Copy (4) of Multi-threading
Rajkumar Buyya
School of Computer Science and Software Engineering, Monash Technology
Melbourne, Australia
Email: [email protected]
URL: http://www.dgs.monash.edu.au/~rajkumar
Concurrent Programming with Threads
Objectives
Explain parallel computing right from architecture, OS, and programming paradigms through to applications
Explain the multithreading paradigm, and all aspects of how to use it in an application
Cover all basic MT concepts
Explore issues related to MT
Contrast Solaris, POSIX, and Java threads
Look at the APIs in detail
Examine some Solaris, POSIX, and Java code examples
Debate on: MPP and Cluster Computing
Agenda
Overview of Computing
Operating Systems Issues
Threads Basics
Multithreading with Solaris and POSIX threads
Multithreading in Java
Distributed Computing
Grand Challenges
Solaris, POSIX, and Java example code
Computing Elements
[Figure: layered view of a multi-processor computing system: applications and programming paradigms on top, then the threads interface, the operating system (microkernel), and the hardware (P = processor), with processes and threads scheduled onto the processors]
Two Eras of Computing
[Figure: timeline from 1940 to 2030. The sequential era and then the parallel era each progress through architectures, compilers, applications, and problem-solving environments (P.S.Es), moving from R & D to commercialization to commodity.]
History of Parallel Processing
PP can be traced to a tablet dated around 100 BC. The tablet has 3 calculating positions.
We infer that multiple positions were used for reliability and/or speed.
Motivating Factors
Just as we learned to fly, not by constructing a machine that flaps its wings like a bird, but by applying aerodynamic principles demonstrated by nature...
We modeled PP after the parallel processing of biological species.
The aggregated speed with which complex calculations are carried out by individual neurons, each of whose response is slow (ms), demonstrates the feasibility of PP.
Motivating Factors
Why Parallel Processing?
Computation requirements are ever increasing: visualization, distributed databases, simulations, scientific prediction (earthquake), etc.
Sequential architectures are reaching physical limitations (speed of light, thermodynamics).
Technical Computing
Solving technology problems using computer modeling, simulation and analysis:
Life Sciences
Mechanical Design & Analysis (CAD/CAM)
Aerospace
Geographic Information Systems
Computational Power Improvement
[Figure: C.P.I. versus number of processors (1, 2, ...): a multiprocessor keeps scaling while a uniprocessor stays flat]
Computational Power Improvement
[Figure: growth versus age (5, 10, 15, ... years): vertical growth (faster individual processors) versus horizontal growth (more processors)]
The technology of PP is mature and can be exploited commercially; there is significant R & D work on the development of tools & environments.
Significant developments in networking technology are paving the way for heterogeneous computing.
Why Parallel Processing?
Hardware improvements like pipelining, superscalar execution, etc., are non-scalable and require sophisticated compiler technology.
Vector processing works well only for certain kinds of problems.
Why Parallel Processing?
Parallel Program has & needs ...
Multiple processes active simultaneously solving a given problem, generally on multiple processors.
Communication and synchronization of its processes (this forms the core of parallel programming efforts).
Processing Elements Architecture
Simple classification by Flynn
(number of instruction and data streams):
SISD - conventional
SIMD - data parallel, vector computing
MISD - systolic arrays
MIMD - very general, multiple approaches
Current focus is on the MIMD model, using general purpose processors (no shared memory).
Processing Elements
SISD : A Conventional Computer
Speed is limited by the rate at which the computer can transfer information internally.
[Figure: a single processor receiving one instruction stream, with one data input and one data output]
Ex: PC, Macintosh, workstations
The MISD Architecture
More of an intellectual exercise than a practical configuration. A few were built, but none is commercially available.
[Figure: one data input stream passing through Processor A, Processor B, and Processor C, each driven by its own instruction stream (A, B, C), producing one data output stream]
SIMD Architecture
[Figure: multiple processing elements executing one instruction stream on different data]
Ex: CRAY machine (vector processing), Thinking Machines CM*
MIMD Architecture
Unlike SISD and MISD, a MIMD computer works asynchronously.
Shared memory (tightly coupled) MIMD
Distributed memory (loosely coupled) MIMD
[Figure: Processor A, Processor B, and Processor C, each with its own instruction stream (A, B, C), its own data input stream, and its own data output stream]
Shared Memory MIMD machine
[Figure: Processor A, Processor B, and Processor C, each connected via a memory bus to a global memory system]
Comm: a source PE writes data to global memory & the destination PE retrieves it
Easy to build; conventional OSes for SISD machines can easily be ported
Limitation: reliability & expandability. A memory component or any processor failure affects the whole system.
Adding processors leads to memory contention.
Ex.: Silicon Graphics supercomputers....
Distributed Memory MIMD
[Figure: Processor A, Processor B, and Processor C, each with its own memory system, connected pairwise by IPC channels]
Communication: IPC over a high-speed network.
The network can be configured as a tree, mesh, cube, etc.
Unlike shared-memory MIMD:
easily/readily expandable
highly reliable (any CPU failure does not affect the whole system)
Laws of caution.....
Speed of computers is proportional to the square of their cost:
speed = cost^2
Speedup by a parallel computer increases as the logarithm of the number of processors:
speedup = log2(no. of processors)
Caution....
Very fast developments in PP and related areas have blurred concept boundaries, causing a lot of terminological confusion: concurrent computing/programming, parallel computing/processing, multiprocessing, distributed computing, etc.
It's hard to imagine a field that changes as rapidly as computing.
Computer Science is an Immature Science.
(lack of standard taxonomy, terminologies)
Caution....
There are no strict delimiters for contributors to the area of parallel processing: computer architecture, OS, HLLs, databases, computer networks, all have a role to play.
This makes it a Hot Topic of Research.
Caution....
Parallel Programming Paradigms
Multithreading
Task-level parallelism
Serial Vs. Parallel
[Figure: a queue of customers at a single counter (serial) versus the same queue served by two counters, COUNTER 1 and COUNTER 2 (parallel)]
High Performance Computing
Serial Machine (single CPU):
function1( )
{ //...... function stuff }
function2( )
{ //...... function stuff }
function1( );
function2( );
Time: add(t1, t2), where t1 and t2 are the execution times of function1 and function2

Parallel Machine: MPP (a massively parallel system containing thousands of CPUs):
function1( ) || function2( )
Time: max(t1, t2)
Single and Multithreaded Processes
[Figure: a single-threaded process (one instruction stream) versus a multithreaded process (multiple instruction streams): several threads of execution sharing a common address space]
OS: Multi-Processing, Multi-Threaded
[Figure: multiple applications scheduled across multiple CPUs]
Better response times in multiple-application environments
Higher throughput for parallelizable applications
Threaded libraries, multi-threaded I/O
Multi-threading, continued...
A multi-threaded OS enables parallel, scalable I/O.
[Figure: several applications issuing requests to the OS kernel, which serves them on multiple CPUs]
Multiple, independent I/O requests can be satisfied simultaneously because all the major disk, tape, and network drivers have been multi-threaded, allowing any given driver to run on multiple CPUs simultaneously.
Basic Process Model
[Figure: two processes, each with its own TEXT, DATA, and STACK segments, communicating through kernel-maintained shared memory: shared memory segments, pipes, open files or mmap'd files]
What are Threads?
A thread is a piece of code that can execute in concurrence with other threads.
It is a schedulable entity on a processor.
[Figure: a running thread object with local state (hardware context: program counter, registers, status word) and global/shared state]
Threaded Process Model
[Figure: threads within a process, each with its own stack, sharing the process's TEXT, DATA, and shared memory]
Threads are independently executable, yet all threads are parts of a process, hence communication is easier and simpler.
Levels of Parallelism

Code granularity / code item:
Large grain (task level): program
Medium grain (control level): function (thread)
Fine grain (data level): loop
Very fine grain (multiple issue): with hardware

Task level: Task i-1, Task i, Task i+1
Control level:
func1( ) { .... }
func2( ) { .... }
func3( ) { .... }
Data level:
a(0) = ..  b(0) = ..
a(1) = ..  b(1) = ..
a(2) = ..  b(2) = ..
Multiple issue: +, x, Load
Simple Thread Example

void *func ( )
{
    /* define local data */
    - - - - - - - - - - -
    /* function code */
    - - - - - - - - - - -
    thr_exit(exit_value);
}

main ( )
{
    thread_t tid;
    int exit_value;
    - - - - - - - - - - -
    thread_create (0, 0, func, NULL, &tid);
    - - - - - - - - - - -
    thread_join (tid, 0, &exit_value);
    - - - - - - - - - - -
}
Few Popular Thread Models
POSIX threads, ISO/IEEE standard
Mach C threads, CMU
SunOS LWP threads, Sun Microsystems
PARAS CORE threads, C-DAC
Java threads, Sun Microsystems
Chorus threads, Paris
OS/2 threads, IBM
Windows NT/95 threads, Microsoft
Multithreading - Multiprocessors
Concurrency Vs Parallelism
[Figure: processes P1, P2, and P3 each running on its own CPU over time]
Number of executing processes = number of CPUs
Computational Model
Parallel execution due to:
concurrency of threads on virtual processors
concurrency of threads on physical processors
True parallelism: threads-to-processor map = 1:1
[Figure: user-level threads are mapped by the user-level scheduler onto virtual processors, which the kernel-level scheduler maps onto physical processors]
General Architecture of Thread Model
Hides the details of machine architecture
Maps user threads to kernel threads
Process VM is shared; a state change in VM by one thread is visible to the others.
Process Parallelism
MISD and MIMD Processing

int add (int a, int b, int &result)
// function stuff
int sub (int a, int b, int &result)
// function stuff

pthread_t t1, t2;
pthread_create(&t1, add, a, b, &r1);
pthread_create(&t2, sub, c, d, &r2);
pthread_par(2, t1, t2);

[Figure: instruction streams IS1 and IS2 drive two processors; add works on (a, b, r1) while sub works on (c, d, r2)]
Data Parallelism
SIMD Processing

sort( int *array, int count)
//......
//......

pthread_t thread1, thread2;
pthread_create(&thread1, sort, array, N/2);
pthread_create(&thread2, sort, &array[N/2], N/2);
pthread_par(2, thread1, thread2);

[Figure: one instruction stream (Sort) applied by two processors, one to each half of the data: d0..dn/2 and dn/2+1..dn]
Process and Threaded Models

Purpose | Threads Model | Process Model
Creation of a new thread | thr_create( ) | fork( )
Start execution of a new thread | [thr_create( ) builds the new thread and starts the execution] | exec( )
Wait for completion of a thread | thr_join( ) | wait( )
Exit and destroy the thread | thr_exit( ) | exit( )
Code Comparison

Segment (Process):
main ( )
{
    fork ( );
    fork ( );
    fork ( );
}

Segment (Thread):
main( )
{
    thread_create(0, 0, func, 0, 0);
    thread_create(0, 0, func, 0, 0);
    thread_create(0, 0, func, 0, 0);
}
Printing Thread
Editing Thread
Independent Threads

printing()
{
    - - - - - - - - - - - -
}

editing()
{
    - - - - - - - - - - - -
}

main()
{
    - - - - - - - - - - - -
    id1 = thread_create(printing);
    id2 = thread_create(editing);
    thread_run(id1, id2);
    - - - - - - - - - - - -
}
Cooperative Threads - File Copy

reader()
{
    - - - - - - - - - -
    lock(buff[i]);
    read(src, buff[i]);
    unlock(buff[i]);
    - - - - - - - - - -
}

writer()
{
    - - - - - - - - - -
    lock(buff[i]);
    write(dst, buff[i]);
    unlock(buff[i]);
    - - - - - - - - - -
}

[Figure: reader and writer share two buffers, buff[0] and buff[1]]
Cooperative Parallel Synchronized Threads
RPC Call
Client: .... RPC(func) ....
Server: func() { /* Body */ }
Multithreaded Server
[Figure: client processes in user mode send requests through the kernel's message-passing facility to a server process, which dispatches each request to one of its server threads]
Multithreaded Compiler
[Figure: a preprocessor thread feeds a compiler thread, turning source code into object code]
Thread Programming models
1. The boss/worker model
2. The peer model
3. A thread pipeline
The boss/worker model
[Figure: a boss thread (main) reads the input stream and dispatches work to worker threads taskX, taskY, and taskZ, which use the program's resources: files, databases, disks, special devices]
Example
main() /* the boss */
{
    forever {
        get a request;
        switch( request )
            case X : pthread_create(....,taskX);
            case Y : pthread_create(....,taskY);
        ....
    }
}

taskX() /* worker */
{
    perform the task, sync if accessing shared resources
}

taskY() /* worker */
{
    perform the task, sync if accessing shared resources
}
....

The runtime overhead of creating threads can be avoided with a thread pool: the boss thread creates all worker threads at program initialization, and each worker thread suspends itself immediately, waiting for a wakeup call from the boss.
The peer model
[Figure: peer threads taskX, taskY, and taskZ each take static input and use the program's resources: files, databases, disks, special devices]
Example
main()
{
    pthread_create(....,thread1...task1);
    pthread_create(....,thread2...task2);
    ....
    signal all workers to start
    wait for all workers to finish
    do any cleanup
}

task1() /* worker */
{
    wait for start
    perform the task, sync if accessing shared resources
}

task2() /* worker */
{
    wait for start
    perform the task, sync if accessing shared resources
}
A thread pipeline
[Figure: the input stream flows through filter threads in Stage 1, Stage 2, and Stage 3; each stage has its own resources: files, databases, disks, special devices]
Example
main()
{
    pthread_create(....,stage1);
    pthread_create(....,stage2);
    ....
    wait for all pipeline threads to finish
    do any cleanup
}

stage1()
{
    get next input for the program
    do stage 1 processing of the input
    pass result to next thread in pipeline
}

stage2()
{
    get input from previous thread in pipeline
    do stage 2 processing of the input
    pass result to next thread in pipeline
}

stageN()
{
    get input from previous thread in pipeline
    do stage N processing of the input
    pass result to program output.
}
Multithreaded Matrix Multiply...
C = A x B
C[1,1] = A[1,1]*B[1,1] + A[1,2]*B[2,1] + ...
C[m,n] = sum of products of corresponding elements in row m of A and column n of B.
Each resultant element can be computed independently.
Multithreaded Matrix Multiply

typedef struct {
    int id;
    int size;
    int row, column;
    matrix *MA, *MB, *MC;
} matrix_work_order_t;

main()
{
    int size = ARRAY_SIZE, row, column;
    matrix_t MA, MB, MC;
    matrix_work_order_t *work_orderp;
    pthread_t peer[size*size];
    ...
    /* process matrix, by row, column */
    for( row = 0; row < size; row++ )
        for( column = 0; column < size; column++ )
        {
            id = column + row * ARRAY_SIZE;
            work_orderp = malloc( sizeof(matrix_work_order_t) );
            /* initialize all members of work_orderp */
            pthread_create( &peer[id], NULL, peer_mult, work_orderp );
        }
    /* wait for all peers to exit */
    for( i = 0; i < size*size; i++ )
        pthread_join( peer[i], NULL );
}
Multithreaded Server...

void main( int argc, char *argv[] )
{
    int server_socket, client_socket, clilen;
    struct sockaddr_in serv_addr, cli_addr;
    int one, port_id;
#ifdef _POSIX_THREADS
    pthread_t service_thr;
#endif
    port_id = 4000; /* default port_id */
    if( (server_socket = socket( AF_INET, SOCK_STREAM, 0 )) < 0 )
    {
        printf("Error: Unable to open socket in parmon server.\n");
        exit( 1 );
    }
    memset( (char*) &serv_addr, 0, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    serv_addr.sin_port = htons( port_id );
    setsockopt(server_socket, SOL_SOCKET, SO_REUSEADDR, (char *)&one, sizeof(one));
Multithreaded Server...

    if( bind( server_socket, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0 )
    {
        printf( "Error: Unable to bind socket in parmon server->%d\n", errno );
        exit( 1 );
    }
    listen( server_socket, 5 );
    while( 1 )
    {
        clilen = sizeof(cli_addr);
        client_socket = accept( server_socket, (struct sockaddr *)&cli_addr, &clilen );
        if( client_socket < 0 )
        {
            printf( "connection to client failed in server.\n" );
            continue;
        }
#ifdef _POSIX_THREADS
        pthread_create( &service_thr, NULL, service_dispatch, client_socket );
#else
        thr_create( NULL, 0, service_dispatch, client_socket, THR_DETACHED, &service_thr );
#endif
    }
}
Multithreaded Server

// Service function -- thread function
void *service_dispatch( int client_socket )
{
    GET USER REQUEST
    if( readline( client_socket, command, 100 ) > 0 )
    {
        IDENTIFY USER REQUEST
        DO NECESSARY PROCESSING
        SEND RESULTS TO CLIENT
    }
    CLOSE CONNECTION AND TERMINATE THREAD
    close( client_socket );
#ifdef _POSIX_THREADS
    pthread_exit( (void *)0 );
#endif
}
The Value of MT
Program structure
Parallelism
Throughput
Responsiveness
System resource usage
Distributed objects
Single source across platforms (POSIX)
Single binary for any number of CPUs
To thread or not to thread
Improve efficiency on uniprocessor systems
Use multiprocessor hardware
Improve throughput
Simple to implement asynchronous I/O
Leverage special features of the OS
To thread or not to thread
If all operations are CPU intensive, multithreading will not take you far.
Thread creation is very cheap, but it is not free: a thread that has only five lines of code would not be useful.
DOS - The Minimal OS
[Figure: user space holds the user code, global data, stack & stack pointer, and program counter; kernel space holds the DOS code and DOS data, directly above the hardware]
Multitasking OSs
(UNIX, VMS, MVS, NT, OS/2 etc.)
[Figure: a process in user space above the UNIX kernel in kernel space, above the hardware]
Multitasking Systems
[Figure: processes P1, P2, P3, P4 running above the kernel and the hardware]
(Each process is completely independent)
Kernel Structures

Traditional UNIX process structure:
process ID; UID, GID, EUID, EGID, CWD; signal dispatch table; memory map; file descriptors; priority; signal mask; registers; kernel stack; CPU state

Solaris 2 process structure:
process ID; UID, GID, EUID, EGID, CWD; signal dispatch table; memory map; file descriptors; plus LWPs (LWP1, LWP2), each carrying its own priority, signal mask, registers, kernel stack, and CPU state
Scheduling Design Options
M:1 - HP-UNIX
1:1 - DEC, NT, OS/2, AIX, IRIX
M:M
2-level
SunOS Two-Level Thread Model
[Figure: five processes; user threads are multiplexed onto LWPs, which the kernel schedules as kernel threads onto the hardware processors. Proc 1 is a traditional single-threaded process.]
Thread Life Cycle

POSIX:
main()
{ ...
    pthread_create( func, arg );
    ...
}

Solaris:
main()
{ ...
    thr_create( ..func.., arg.. );
    ...
}

void * func()
{
    ....
}

[Figure: T1 calls pthread_create(...func...) to start T2, which runs func() and finishes with pthread_exit()]
Waiting for a Thread to Exit

POSIX:
main()
{ ...
    pthread_join(T2);
    ...
}

Solaris:
main()
{ ...
    thr_join( T2, &val_ptr );
    ...
}

void * func()
{
    ....
}

[Figure: T1 blocks in pthread_join() until T2 finishes func() and calls pthread_exit()]
Scheduling States: Simplified View of Thread State Transitions
[Figure: a thread moves between RUNNABLE and ACTIVE (continue/preempt), from ACTIVE to SLEEPING (sleep) and back to RUNNABLE (wakeup), and from any state to STOPPED (stop) and back to RUNNABLE (continue)]
Preemption
The process of rudely interrupting a thread and forcing it to relinquish its LWP (or CPU) to another.
CPU2 cannot change CPU3's registers directly. It can only issue a hardware interrupt to CPU3. It is up to CPU3's interrupt handler to look at CPU2's request and decide what to do.
Higher priority threads always preempt lower priority threads.
Preemption != time slicing.
All of the libraries are preemptive.
EXIT Vs. THREAD_EXIT
The normal C function exit() always causes the process to exit. That means all of the process -- all the threads.
The thread exit functions:
UI : thr_exit()
POSIX : pthread_exit()
OS/2 : DosExitThread() and _endthread()
NT : ExitThread() and endthread()
all cause only the calling thread to exit, leaving the process intact and all of the other threads running. (If no other threads are running, then exit() will be called.)
Cancellation
Cancellation is the means by which a thread can tell another thread that it should exit.

POSIX: pthread_cancel(T1);
OS/2: DosKillThread(T1);
Windows NT: TerminateThread(T1);

There is no special relation between the killer of a thread and the victim. (UI threads must roll their own using signals.)
[Figure: T2 calls pthread_cancel() on T1, which exits as if by pthread_exit()]
Cancellation State and Type
State:
PTHREAD_CANCEL_DISABLE (cannot be cancelled)
PTHREAD_CANCEL_ENABLE (can be cancelled; must consider type)
Type:
PTHREAD_CANCEL_ASYNCHRONOUS (any time whatsoever; not generally used)
PTHREAD_CANCEL_DEFERRED (only at cancellation points)
(Only POSIX has state and type. OS/2 is effectively always enabled/asynchronous; NT is effectively always enabled/asynchronous.)
Cancellation is Always Complex!
It is very easy to forget a lock that's being held or a resource that should be freed.
Use this only when you absolutely require it.
Be extremely meticulous in analyzing the possiblethread states.
Document, document, document!
Returning Status
POSIX and UI:
A detached thread cannot be joined. It cannot return status.
An undetached thread must be joined, and can return a status.
OS/2:
Any thread can be waited for.
No thread can return status.
No thread needs to be waited for.
NT:
No threads can be waited for.
Any thread can return status.
Suspending a Thread
Solaris:
main()
{
    ...
    thr_suspend(T1);
    ...
    thr_continue(T1);
    ...
}
[Figure: T2 suspends and later continues T1]
* POSIX does not support thread suspension
Proposed Uses of Suspend/Continue
Garbage collectors
Debuggers
Performance analysers
Other tools?
These all must go below the API, so they don't count.
Isolation of VM system spooling (?!)
NT services specify that a service should be suspendable (questionable requirement?)
Be careful!
Do NOT Think about Scheduling!
Think about resource availability.
Think about synchronization.
Think about priorities.
Ideally, if you're using suspend/continue, you're making a mistake!
Synchronization
Webster's: To represent or arrange events to indicate coincidence or coexistence.
Lewis: To arrange events so that they occur in a specified order.
* Serialized access to controlled resources.
Synchronization is not just an MP issue. It is not even strictly an MT issue!
Thread synchronization:
On shared memory: shared variables - semaphores
On distributed memory: within a task - semaphores; across tasks - by passing messages
Unsynchronized Shared Data is a Formula for Disaster

Thread1:
temp = Your->BankBalance;
dividend = temp * InterestRate;
newbalance = dividend + temp;
Your->Dividend += dividend;

Thread2 (meanwhile):
Your->BankBalance += deposit;

Thread1:
Your->BankBalance = newbalance;   /* the deposit is lost! */
Atomic Actions
An action which must be started and completed with no possibility of interruption.
A machine instruction could need to be atomic. (Not all are!)
A line of C code could need to be atomic. (Not all are.)
An entire database transaction could need to be atomic.
All MP machines provide at least one complex atomic instruction, from which you can build anything.
A section of code which you have forced to be atomic is a Critical Section.
Critical Section (Good Programmer!)

T1: reader()
{
    - - - - - - - - - -
    lock(DISK);
    ...........
    ...........
    unlock(DISK);
    - - - - - - - - - -
}

T2: writer()
{
    - - - - - - - - - -
    lock(DISK);
    ..............
    ..............
    unlock(DISK);
    - - - - - - - - - -
}

Both threads bracket their access to the shared data with the same lock.
Critical Section (Bad Programmer!)

T1: reader()
{
    - - - - - - - - - -
    lock(DISK);
    ...........
    ...........
    unlock(DISK);
    - - - - - - - - - -
}

T2: writer()
{
    - - - - - - - - - -
    ..............
    ..............
    - - - - - - - - - -
}

The writer touches the shared data without taking the lock, so the reader's critical section protects nothing.
Lock Shared Data!
Globals
Shared data structures
Static variables
(really just lexically scoped global variables)
Mutexes
Thread 1:
item = create_and_fill_item();
mutex_lock( &m );
item->next = list;
list = item;
mutex_unlock( &m );

Thread 2:
mutex_lock( &m );
this_item = list;
list = list->next;
mutex_unlock( &m );
.....
func(this_item);

POSIX and UI: owner not recorded; blocked threads wait in priority order.
OS/2 and NT: owner recorded; blocked threads wait in FIFO order.
Synchronization Variables in Shared Memory (Cross Process)
[Figure: threads in Process 1 and Process 2 synchronize through synchronization variables placed in a shared memory segment]
Synchronization Problems
Deadlocks

Thread 1:        Thread 2:
lock( M1 );      lock( M2 );
lock( M2 );      lock( M1 );

Thread1 is waiting for the resource (M2) locked by Thread2, and Thread2 is waiting for the resource (M1) locked by Thread1.
Avoiding Deadlocks
Establish a hierarchy: always lock Mutex_1 before Mutex_2, etc. Use the trylock primitives if you must violate the hierarchy.
{
while (1)
{ pthread_mutex_lock(&m2);
if (EBUSY != pthread_mutex_trylock(&m1)) break;
else
{ pthread_mutex_unlock(&m2);
wait_around_or_do_something_else();
}
}
do_real_work(); /* Got 'em both! */
}
Use LockLint or some similar static analysis program to scan your code for hierarchy violations.
Race Conditions
A race condition is where the results of a program differ depending upon the timing of events within the program.
Some race conditions result in different answers and are clearly bugs.
Thread 1:            Thread 2:
mutex_lock (&m)      mutex_lock (&m)
v = v - 1;           v = v * 2;
mutex_unlock (&m)    mutex_unlock (&m)
--> if v == 1 initially, the result can be 0 or 1 based on which thread gets the chance to enter the critical section first.
Operating System Issues
Library Goals
Make it fast!
Make it MT safe!
Retain UNIX semantics!
Are Libraries Safe?
getc() OLD implementation:
extern int getc( FILE *p )
{
/* code to read data */
}
getc() NEW implementation:
extern int getc( FILE *p )
{
pthread_mutex_lock(&m);
/* code to read data */
pthread_mutex_unlock(&m);
}
ERRNO
In UNIX, the distinguished variable errno is used to hold the error code for any system calls that fail.
Clearly, should two threads both be issuing system calls at around the same time, it would not be possible to figure out which one set the value of errno. Therefore errno is defined in the header file to be a call to thread-specific data.
This is done only when the flag _REENTRANT (UI) or _POSIX_C_SOURCE=199506L (POSIX) is passed to the compiler, allowing older, non-MT programs to continue to run.
There is the potential for problems if you use some libraries which are not reentrant. (This is often a problem when using third-party libraries.)
Are Libraries Safe?
MT-Safe: this function is safe
MT-Hot: this function is safe and fast
MT-Unsafe: this function is not MT-safe, but was compiled with _REENTRANT
Alternative Call: this function is not safe, but there is a similar function that is (e.g. ctime_r())
MT-Illegal: this function wasn't even compiled with _REENTRANT and therefore can only be called from the main thread
Threads Debugging Interface
Debuggers
Data inspectors
Performance monitors
Garbage collectors
Coverage analyzers
Not a standard interface!
The APIs
POSIX and Solaris API Differences
POSIX API only: thread cancellation, scheduling policies, sync attributes, thread attributes
Solaris API only: continue, suspend, semaphore vars, concurrency setting, reader/writer vars, daemon threads
Both: create, join, exit, key creation, priorities, sigmask, thread-specific data, mutex vars, condition vars, kill
Error Return Values
Many threads functions return an error value which can be looked up in errno.h.
Very few threads functions set errno (check the man pages).
The "lack of resources" errors usually mean that you've used up all your virtual memory, and your program is likely to crash very soon.
Attribute Objects
UI, OS/2, and NT all use flags and direct arguments to indicate what the special details of the objects being created should be. POSIX requires the use of attribute objects:
thr_create(NULL, 0, foo, NULL, THR_DETACHED, NULL);
Vs:
pthread_t tid;
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
pthread_create(&tid, &attr, foo, NULL);
Attribute Objects
Although a bit of a pain in the *** compared to passing all the arguments directly, attribute objects allow the designers of the threads library more latitude to add functionality without changing the old interfaces. (If they decide they really want to, say, pass the signal mask at creation time, they just add a function pthread_attr_set_signal_mask() instead of adding a new argument to pthread_create().)
There are attribute objects for:
Threads: stack size, stack base, scheduling policy, scheduling class, scheduling scope, scheduling inheritance, detach state
Mutexes: cross process, priority inheritance
Condition Variables: cross process
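Putting the POSIX attribute-object calls together, a minimal sketch that creates a detached thread (the worker function and its name are illustrative):

```c
#include <pthread.h>

static void *worker(void *arg)
{
    /* detached: no one will join us; resources are reclaimed on exit */
    return NULL;
}

/* All "special details" go through the attribute object, not through
 * flags as in UI's thr_create(). Returns 0 on success. */
int spawn_detached(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    int err;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    err = pthread_create(&tid, &attr, worker, NULL);
    pthread_attr_destroy(&attr);   /* safe once create has been called */
    return err;
}
```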
Thread Attribute Objects
pthread_attr_t;
Thread attribute object type:
pthread_attr_init(pthread_attr_t *attr)
pthread_attr_destroy(pthread_attr_t *attr)
Can the thread be joined?
pthread_attr_getdetachstate(pthread_attr_t *attr, int *state)
pthread_attr_setdetachstate(pthread_attr_t *attr, int state)
Will the thread compete for the CPU process-wide or system-wide?
pthread_attr_getscope(pthread_attr_t *attr, int *scope)
pthread_attr_setscope(pthread_attr_t *attr, int scope)
Thread Attribute Objects
pthread_attr_getinheritsched(pthread_attr_t *attr, int *inheritsched)
pthread_attr_setinheritsched(pthread_attr_t *attr, int inheritsched)
Will the policy in the attribute object be used?
pthread_attr_getschedpolicy(pthread_attr_t *attr, int *policy)
pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy)
Will the scheduling be RR, FIFO, or OTHER?
pthread_attr_getschedparam(pthread_attr_t *attr, struct sched_param *param)
pthread_attr_setschedparam(pthread_attr_t *attr, struct sched_param *param)
What will the priority be?
Thread Attribute Objects
pthread_attr_getinheritsched(pthread_attr_t *attr, int *inheritsched)
pthread_attr_setinheritsched(pthread_attr_t *attr, int inheritsched)
Will the policy in the attribute object be used?
pthread_attr_getstacksize(pthread_attr_t *attr, size_t *size)
pthread_attr_setstacksize(pthread_attr_t *attr, size_t size)
How big will the stack be?
pthread_attr_getstackaddr(pthread_attr_t *attr, void **base)
pthread_attr_setstackaddr(pthread_attr_t *attr, void *base)
What will the stack's base address be?
Mutex Attribute Objects
pthread_mutexattr_t;
Mutex attribute object type:
pthread_mutexattr_init(pthread_mutexattr_t *attr)
pthread_mutexattr_destroy(pthread_mutexattr_t *attr)
pthread_mutexattr_getpshared(pthread_mutexattr_t *attr, int *pshared)
pthread_mutexattr_setpshared(pthread_mutexattr_t *attr, int pshared)
Will the mutex be shared across processes?
Condition Variable Attribute Objects
pthread_condattr_t;
Condition variable attribute object type:
pthread_condattr_init(pthread_condattr_t *attr)
pthread_condattr_destroy(pthread_condattr_t *attr)
pthread_condattr_getpshared(pthread_condattr_t *attr, int *pshared)
pthread_condattr_setpshared(pthread_condattr_t *attr, int pshared)
Will the condition variable be shared across processes?
Creation and Destruction (UI & POSIX)
int thr_create(void *stack_base, size_t stack_size, void *(*start_routine)(void *), void *arg, long flags, thread_t *new_thread);
void thr_exit(void *value_ptr);
int thr_join(thread_t thread, thread_t *departed, void **value_ptr);
int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);
void pthread_exit(void *value_ptr);
int pthread_join(pthread_t thread, void **value_ptr);
int pthread_cancel(pthread_t thread);
Suspension (UI & POSIX)
int thr_suspend(thread_t target);
int thr_continue(thread_t target);
Changing Priority (UI & POSIX)
int thr_setprio(thread_t thread, int priority);
int thr_getprio(thread_t thread, int *priority);
int pthread_getschedparam(pthread_t thread, int *policy, struct sched_param *param);
int pthread_setschedparam(pthread_t thread, int policy, const struct sched_param *param);
Readers/Writer Locks (UI)
int rwlock_init(rwlock_t *rwlock, int type, void *arg);
int rw_rdlock(rwlock_t *rwlock);
int rw_wrlock(rwlock_t *rwlock);
int rw_tryrdlock(rwlock_t *rwlock);
int rw_trywrlock(rwlock_t *rwlock);
int rw_unlock(rwlock_t *rwlock);
int rwlock_destroy(rwlock_t *rwlock);
Condition Variables (UI & POSIX)
int cond_init(cond_t *cond, int type, void *arg)
int cond_wait(cond_t *cond, mutex_t *mutex)
int cond_signal(cond_t *cond)
int cond_broadcast(cond_t *cond)
int cond_timedwait(cond_t *cond, mutex_t *mutex, timestruc_t *abstime)
int cond_destroy(cond_t *cond)
int pthread_cond_init(pthread_cond_t *cond, pthread_condattr_t *attr)
int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
int pthread_cond_signal(pthread_cond_t *cond)
int pthread_cond_broadcast(pthread_cond_t *cond)
int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex, struct timespec *abstime)
int pthread_cond_destroy(pthread_cond_t *cond)
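A minimal condition-variable sketch of the POSIX calls above (names are illustrative). Note the while loop: the predicate must be re-tested every time cond_wait returns, because wakeups may be spurious.

```c
#include <pthread.h>

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int ready;

static void *producer(void *arg)
{
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&cv);         /* wake one waiter */
    pthread_mutex_unlock(&m);
    return NULL;
}

/* Spawn a producer, then wait until it sets `ready`. Returns ready (1). */
int consumer_wait(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    pthread_mutex_lock(&m);
    while (!ready)
        pthread_cond_wait(&cv, &m);   /* atomically unlocks m and sleeps */
    pthread_mutex_unlock(&m);
    pthread_join(t, NULL);
    return ready;
}
```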
Signals (UI & POSIX)
int thr_sigsetmask(int how, const sigset_t *set, sigset_t *oset);
int thr_kill(thread_t target_thread, int sig);
int sigwait(sigset_t *set);  /* UI */
int pthread_sigmask(int how, const sigset_t *set, sigset_t *oset);
int pthread_kill(pthread_t target_thread, int sig);
int sigwait(const sigset_t *set, int *sig);  /* POSIX */
Cancellation (POSIX)
int pthread_cancel(pthread_t thread)
void pthread_cleanup_push(void (*function)(void *), void *arg)
void pthread_cleanup_pop(int execute)
int pthread_setcancelstate(int state, int *old_state)
void pthread_testcancel(void)
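A sketch of the cleanup-handler protocol: push a handler that releases the mutex, so cancellation while the lock is held cannot leak it. Names are illustrative; pthread_testcancel() provides the explicit cancellation point.

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int cleanup_ran;

static void unlock_on_cancel(void *arg)
{
    pthread_mutex_unlock(arg);    /* release the lock the thread held */
    cleanup_ran = 1;
}

static void *cancellable(void *arg)
{
    pthread_mutex_lock(&m);
    pthread_cleanup_push(unlock_on_cancel, &m);
    for (;;)
        pthread_testcancel();     /* cancellation is acted on here */
    pthread_cleanup_pop(1);       /* not reached; balances the push */
    return NULL;
}

/* Returns 1: the cleanup handler ran when the thread was cancelled. */
int cancel_demo(void)
{
    pthread_t t;
    pthread_create(&t, NULL, cancellable, NULL);
    pthread_cancel(t);
    pthread_join(t, NULL);        /* join makes cleanup_ran visible */
    return cleanup_ran;
}
```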
Other APIs
thread_t thr_self(void)
void thr_yield(void)
int pthread_atfork(void (*prepare)(void), void (*parent)(void), void (*child)(void))
int pthread_equal(pthread_t t1, pthread_t t2)
int pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
pthread_t pthread_self(void)
void pthread_yield(void)
(Thread IDs in Solaris recycle every 2^32 threads, or about once a month if you do create/exit as fast as possible.)
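pthread_once() guarantees that the init routine runs exactly once, no matter how many threads race through the call. A minimal sketch (names are illustrative):

```c
#include <pthread.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;
static int init_count;

static void init_routine(void)
{
    init_count++;                 /* runs exactly once */
}

/* Every caller funnels through pthread_once; only the first triggers
 * init_routine, the rest return immediately. */
int get_initialised(void)
{
    pthread_once(&once, init_routine);
    return init_count;
}
```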
Compiling
Solaris Libraries
Solaris has three libraries: libthread.so, libpthread.so, libposix4.so
Corresponding new include files: synch.h, thread.h, pthread.h, posix4.h
Bundled with all O/S releases
Running an MT program requires no extra effort
Compiling an MT program requires only a compiler (any compiler!)
Writing an MT program requires only a compiler (but a few MT tools will come in very handy)
Compiling UI under Solaris
Compiling is no different than for non-MT programs
libthread is just another system library in /usr/lib
Example:
%cc -o sema sema.c -lthread -D_REENTRANT
%cc -o sema sema.c -mt
All multithreaded programs should be compiled using the _REENTRANT flag
Applies to every module in a new application
If omitted, the old definitions for errno, stdio, etc. would be used, which you don't want
All MT-safe libraries should be compiled using the _REENTRANT flag, even though they may be used in a single-threaded program.
Compiling POSIX under Solaris
Compiling is no different than for non-MT programs
libpthread is just another system library in /usr/lib
Example:
%cc -o sema sema.c -lpthread -lposix4 -D_POSIX_C_SOURCE=199506L
All multithreaded programs should be compiled using the _POSIX_C_SOURCE=199506L flag
Applies to every module in a new application
If omitted, the old definitions for errno, stdio, etc. would be used, which you don't want
All MT-safe libraries should be compiled using the _POSIX_C_SOURCE=199506L flag, even though they may be used in a single-threaded program
Compiling mixed UI/POSIX under Solaris
If you just want to use the UI thread functions (e.g., thr_setconcurrency()):
%cc -o sema sema.c -lthread -lpthread -lposix4 -D_REENTRANT
If you also want to use the UI semantics for fork(), alarms, timers, sigwait(), etc., add:
-D_POSIX_PTHREAD_SEMANTICS
Summary
Threads provide a more natural programming paradigm
Improve efficiency on uniprocessor systems
Allow full advantage to be taken of multiprocessor hardware
Improve throughput: asynchronous I/O is simple to implement
Leverage special features of the OS
Many applications are already multithreaded
MT is not a silver bullet for all programming problems.
There is already a standard for multithreading: POSIX
Multithreading support is already available in the form of language syntax: Java
Threads allow real-world objects to be modelled (e.g., in Java)
Java
Multithreading in Java
Java - An Introduction
Java - the new programming language from Sun Microsystems
Java - allows anyone to publish a web page with Java code in it
Java - CPU-independent language
Created for consumer electronics
Java - James Gosling, Arthur van Hoff, and others
Java - the name that survived a patent search
Oak - the predecessor of Java
Java is C++ -- ++
Sun defines Java as:
Simple and Powerful
Safe
Object Oriented
Robust
Architecture Neutral and Portable
Interpreted and High Performance
Threaded
Dynamic
Java Integrates
Power of Compiled Languages
and
Flexibility of Interpreted Languages
Classes and Objects
Method Overloading
Method Overriding
Abstract Classes
Visibility modifiers:
default
public
protected
private
Threads
Java has built-in thread support for multithreading:
Synchronization
Thread scheduling
Inter-thread communication
Thread methods: currentThread, start, setPriority, yield, run, getPriority, sleep, stop, suspend, resume
The Java garbage collector is a low-priority thread
Ways of Multithreading in Java
Create a class that extends the Thread class
Create a class that implements the Runnable interface
1st Method: Extending the Thread class
class MyThread extends Thread
{
public void run()
{
// thread body of execution
}
}
Creating thread:
MyThread thr1 = new MyThread();
Start Execution:
thr1.start();
2nd Method: Threads by implementing the Runnable interface
class ClassName implements Runnable
{
.....
public void run()
{
// thread body of execution
}
}
Creating Object:
ClassName myObject = new ClassName();
Creating Thread Object:
Thread thr1 = new Thread( myObject );
Start Execution:
thr1.start();
Thread Class Members...
public class java.lang.Thread extends java.lang.Object
implements java.lang.Runnable
{
// Fields
public final static int MAX_PRIORITY;
public final static int MIN_PRIORITY;
public final static int NORM_PRIORITY;
// Constructors
public Thread();
public Thread(Runnable target);
public Thread(Runnable target, String name);
public Thread(String name);
public Thread(ThreadGroup group, Runnable target);
public Thread(ThreadGroup group, Runnable target, String name);
public Thread(ThreadGroup group, String name);
// Methods
public static int activeCount();
public void checkAccess();
public int countStackFrames();
public static Thread currentThread();
public void destroy();
public static void dumpStack();
public static int enumerate(Thread tarray[]);
public final String getName();
...Thread Class Members.
public final int getPriority(); // priority from 1 to 10; default NORM_PRIORITY is 5
public final ThreadGroup getThreadGroup();
public void interrupt();
public static boolean interrupted();
public final boolean isAlive();
public final boolean isDaemon();
public boolean isInterrupted();
public final void join();
public final void join(long millis);
public final void join(long millis, int nanos);
public final void resume();
public void run();
public final void setDaemon(boolean on);
public final void setName(String name);
public final void setPriority(int newPriority);
public static void sleep(long millis);
public static void sleep(long millis, int nanos);
public void start();
public final void stop();
public final void stop(Throwable obj);
public final void suspend();
public String toString();
public static void yield();
}
Manipulation of Current Thread
// CurrentThreadDemo.java
class CurrentThreadDemo {
public static void main(String arg[]) {
Thread ct = Thread.currentThread();
ct.setName( "My Thread" );
System.out.println("Current Thread : "+ct);
try {
for(int i=5; i>0; i--) {
System.out.println(" " + i);
Thread.sleep(1000);
}}
catch(InterruptedException e) {
System.out.println("Interrupted."); }
}
}
Run:
Current Thread : Thread[My Thread,5,main]
5
4
3
2
1
...Creating new Thread.
public void run() {
try {
for(int i=5; i>0; i--) {
System.out.println(" " + i);
Thread.sleep(1000);
} }
catch(InterruptedException e) {
System.out.println("Child interrupted.");
}
System.out.println("Exiting child thread.");
}
public static void main(String args[]) {
new ThreadDemo();
}
}
Run:
Current Thread : Thread[main,5,main]
5
4
3
Exiting main thread.
2
1
Exiting child thread.
Thread Priority...
// HiLoPri.java
class Clicker implements Runnable {
int click = 0;
private Thread t;
private boolean running = true;
public Clicker(int p)
{
t = new Thread(this);
t.setPriority(p);
}
public void run(){
while(running)
click++;
}
public void start()
{
t.start();
}
public void stop()
{
running = false;
}
}
...Thread Priority
class HiLoPri
{
public static void main(String args[])
{
Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
Clicker Hi = new Clicker(Thread.NORM_PRIORITY+2);
Clicker Lo = new Clicker(Thread.NORM_PRIORITY-2);
Lo.start();
Hi.start();
try {
Thread.sleep(10000);
}
catch (Exception e)
{ }
Lo.stop();
Hi.stop();
System.out.println(Lo.click + " vs. " + Hi.click);
}
}
Run1: (on Solaris)
0 vs. 956228
Run2: (Windows 95)
304300 vs. 4066666
Threads Synchronisation...
// Synch.java: race condition without synchronisation
class Callme {
// Check synchronized and unsynchronized methods
/* synchronized */ void call(String msg)
{
System.out.print("["+msg);
try {
Thread.sleep(1000);
}
catch(Exception e)
{ }
System.out.println("]");
}
}
class Caller implements Runnable
{
String msg;
Callme Target;
public Caller(Callme t, String s)
{
Target = t;
msg = s;
new Thread(this).start();
}
...Threads Synchronisation.
public void run() {
Target.call(msg);
}
}
class Synch {
public static void main(String args[]) {
Callme Target = new Callme();
new Caller(Target, "Hello");
new Caller(Target, "Synchronized");
new Caller(Target, "World");
}}
Run 1: With unsynchronized call method (race condition)
[Hello[Synchronized[World]
]
]
Run 2: With synchronized call method
[Hello]
[Synchronized]
[World]
Run3: With Synchronized object
synchronized(Target)
{ Target.call(msg); }
The output is the same as Run2
Queue (no inter-thread communication)...
// pc.java: producer and consumer
class Queue
{
int n;
synchronized int get()
{
System.out.println("Got : "+n);
return n;
}
synchronized void put(int n)
{
this.n = n;
System.out.println("Put : "+n);
}
}
class Producer implements Runnable
{
Queue Q;
Producer(Queue q)
{
Q = q;
new Thread( this, "Producer").start();
}
Queue (no inter-thread communication)...
public void run()
{
int i = 0;
while(true)
Q.put(i++);
}
}
class Consumer implements Runnable
{
Queue Q;
Consumer(Queue q)
{
Q = q;
new Thread( this, "Consumer").start();
}
public void run()
{
while(true)
Q.get();
}
}
...Queue (no inter-thread communication).
class PC
{
public static void main(String[] args)
{
Queue Q = new Queue();
new Producer(Q);
new Consumer(Q);
}
}
Run:
Put: 1
Got: 1
Got: 1
Got: 1
Put: 2
Put: 3
Got: 3
^C
Queue (interthread communication)...
synchronized void put(int n)
{
try {
if(ValueSet)
wait();
}
catch(InterruptedException e)
{ }
this.n = n;
System.out.println("Put : "+n);
ValueSet = true;
notify();
}
}
class Producer implements Runnable
{
Queue Q;
Producer(Queue q)
{
Q = q;
new Thread( this, "Producer").start();
}
...Queue (interthread communication).
class PCnew
{
public static void main(String[] args)
{
Queue Q = new Queue();
new Producer(Q);
new Consumer(Q);
}
}
Run:
Put : 0
Got : 0
Put : 1
Got : 1
Put : 2
Got : 2
Put : 3
Got : 3
Put : 4
Got : 4
^C
Deadlock...
// DeadLock.java
class A
{
synchronized void foo(B b)
{
String name = Thread.currentThread().getName();
System.out.println(name + " entered A.foo");
try
{
Thread.sleep(1000);
}
catch(Exception e)
{
}
System.out.println(name + " trying to call B.last()");
b.last();
}
synchronized void last()
{
System.out.println("Inside A.last");
}
}
...Deadlock.
class DeadLock implements Runnable {
A a = new A();
B b = new B();
DeadLock() {
Thread.currentThread().setName("Main Thread");
new Thread(this).start();
a.foo(b);
System.out.println("Back in the main thread.");
}
public void run() {
Thread.currentThread().setName("Racing Thread");
b.bar(a);
System.out.println("Back in the other thread");
}
public static void main(String args[]) {
new DeadLock();
}
}
Run:
Main Thread entered A.foo
Racing Thread entered B.bar
Main Thread trying to call B.last()
Racing Thread trying to call A.last()
^C
Grand Challenges (Is PP Practical?)
Need OS and compiler support to use multiprocessor machines.
Ideal would be for the user to be unaware whether the problem is running on sequential or parallel hardware - a long way to go.
With high-speed networks and improved microprocessor performance, multiple stand-alone machines can also be used as a parallel machine - a popular trend (an appealing vehicle for parallel computing).
Language standards have to evolve (portability).
Re-orientation of thinking: Sequential -> Parallel
Grand Challenges (Is PP Practical?)
Language standards have to evolve (portability).
Re-orientation of thinking: Sequential -> Parallel
Breaking High Performance Computing Barriers
[Figure: GFLOPS rising from Single Processor to Shared Memory to Local Parallel Cluster to Global Parallel Cluster]
Thank You ...