COSC 6385
Computer Architecture
- Thread Level Parallelism (III)
Edgar Gabriel
Spring 2014
Some slides are based on a lecture by David Culler, University of California, Berkeley
http://www.eecs.berkeley.edu/~culler/courses/cs252-s05
Larger Shared Memory Systems
• Typically Distributed Shared Memory Systems
• Local or remote memory access via memory controller
• Directory per cache that tracks state of every block in every cache
– Which caches have a copy of block, dirty vs. clean, ...
• Info per memory block vs. per cache block?
– PLUS: in memory => simpler protocol (centralized, one location)
– MINUS: in memory => directory size is a function of memory size rather than cache size
(e.g., with 64-byte blocks, a 64-bit presence vector plus a few state bits per block adds roughly 13% to memory capacity)
• How to prevent the directory from becoming a bottleneck?
– Distribute the directory entries with the memory: each node keeps track of which processors have copies of its blocks
Distributed Directory MPs
Distributed Shared Memory Systems
AMD 8350 quad-core Opteron processor
• Single processor configuration
– Private L1 cache: 32 KB data, 32 KB instruction
– Private L2 cache: 512 KB unified
– Shared L3 cache: 2 MB unified
– Centralized shared memory system
[Diagram: four cores, each with private L1 and L2 caches, connected via a crossbar to a shared L3 cache, 3 HyperTransport links, and 2 memory controllers]
AMD 8350 quad-core Opteron
• Multi-processor configuration
– Distributed shared memory system
[Diagram: four sockets (Socket 0-3), each with four cores (C0-C15), a shared L3 cache, and local memory; the sockets are connected by HyperTransport (HT) links at 8 GB/s]
Programming distributed shared memory systems
• Programmers must use threads or processes
• Spread the workload across multiple cores
• Write parallel algorithms
• OS will map threads/processes to cores
• True concurrency, not just uni-processor time-slicing
– Pre-emptive context switching: context switch can
happen at any time
– Concurrency bugs exposed much faster with multi-core
Slide based on a lecture by Jernej Barbic, MIT,
http://people.csail.mit.edu/barbic/multi-core-15213-sp07.ppt
Programming distributed shared memory systems
• Each thread/process has an affinity mask
– Specifies what cores the thread is allowed to run on
– Different threads can have different masks
– Affinities are inherited across fork()
• Example: 4-way multi-core, without SMT
• Process/thread is allowed to run on cores 0,2,3, but not on core 1
core 3  core 2  core 1  core 0
   1       1       0       1
Slide based on a lecture by Jernej Barbic, MIT,
http://people.csail.mit.edu/barbic/multi-core-15213-sp07.ppt
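As an illustration of the example above (cores 0, 2 and 3 allowed, core 1 excluded), the mask could be built with the CPU_* macros; this is a minimal sketch, and the helper name build_example_mask is an assumption, not part of the original slide.

#define _GNU_SOURCE            /* needed for cpu_set_t and the CPU_* macros */
#include <sched.h>

/* Illustrative helper: build the mask from the example above,
   allowing cores 0, 2 and 3 but not core 1 */
static void build_example_mask (cpu_set_t *mask)
{
    CPU_ZERO (mask);           /* start with an empty mask           */
    CPU_SET (0, mask);         /* allow core 0                       */
    CPU_SET (2, mask);         /* allow core 2                       */
    CPU_SET (3, mask);         /* allow core 3; core 1 stays cleared */
}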
Process migration is costly
• Default Affinities
– Default affinity mask is all 1s: all threads can run on all processors and cores
– OS scheduler decides which thread runs on which core
– OS scheduler detects skewed workloads, migrating threads to less busy processors
• Soft affinity:
– Tendency of a scheduler to try to keep processes on the same CPU as long as possible
• Hard affinity:
– Affinity information has been explicitly set by application
– OS has to adhere to this setting
Linux Kernel scheduler API
Retrieve the current affinity mask of a process
#define _GNU_SOURCE      /* required for cpu_set_t and sched_getaffinity */
#include <sched.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

unsigned int len = sizeof(cpu_set_t);
cpu_set_t mask;
int ret, i;
pid_t pid = getpid();  /* get the process id of this app */

ret = sched_getaffinity (pid, len, &mask);
if ( ret != 0 )
    printf("Error in getaffinity %d (%s)\n",
           errno, strerror(errno));

/* report every CPU that is set in the mask (the loop is truncated on the
   original slide; the CPU_SETSIZE/CPU_ISSET completion is assumed) */
for (i=0; i<CPU_SETSIZE; i++) {
    if ( CPU_ISSET(i, &mask) )
        printf("Process can run on CPU %d\n", i);
}
Linux Kernel scheduler API (II)
Set the affinity mask of a process
unsigned int len = sizeof(cpu_set_t);
cpu_set_t mask;
int ret;
pid_t pid = getpid();  /* get the process id of this app */

/* clear the mask */
CPU_ZERO (&mask);
/* set the mask such that the process is only allowed to
   execute on the desired CPU (cpu_id) */
CPU_SET ( cpu_id, &mask);

ret = sched_setaffinity (pid, len, &mask);
if ( ret != 0 ) {
    printf("Error in setaffinity %d (%s)\n",
           errno, strerror(errno));
}
Linux Kernel scheduler API (III)
• Setting thread-related affinity information
– Use sched_setaffinity with pid = 0
• Changes the affinity settings for the calling thread only
– Use libnuma functionality
• Modifies affinity information based on CPU sockets (NUMA nodes), not on individual cores
numa_run_on_node();
numa_run_on_node_mask();
– Use pthread functions on most Linux systems
#define __USE_GNU
pthread_setaffinity_np (pthread_t t, len, mask);
pthread_attr_setaffinity_np (pthread_attr_t *a, len, mask);
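A minimal sketch of pinning the calling thread to a single core with pthread_setaffinity_np; the helper name pin_self_to_core and the error handling style are illustrative assumptions.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Illustrative helper: restrict the calling thread to one core */
static int pin_self_to_core (int core_id)
{
    cpu_set_t mask;
    int ret;

    CPU_ZERO (&mask);
    CPU_SET (core_id, &mask);
    /* pthread_self() returns the handle of the calling thread */
    ret = pthread_setaffinity_np (pthread_self(), sizeof(cpu_set_t), &mask);
    if ( ret != 0 )
        fprintf (stderr, "pthread_setaffinity_np failed: %d\n", ret);
    return ret;
}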
Directory based Cache Coherence Protocol
• Similar to Snoopy Protocol: Three states
– Shared: ≥ 1 processors have data, memory up-to-date
– Uncached (no processor has it; not valid in any cache)
– Exclusive: 1 processor (owner) has data;
memory out-of-date
• In addition to cache state, must track which processors have
data when in the shared state (usually bit vector, 1 if
processor has copy)
• Assumptions:
– Writes to non-exclusive data => write miss
– Processor blocks until access completes
– Assume messages received and acted upon in order sent
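The per-block directory information described above (state plus one presence bit per processor) can be pictured as a small C struct; this is a sketch assuming at most 64 nodes, not a prescribed layout.

#include <stdint.h>

/* Sketch of one directory entry, kept per memory block (assumes <= 64 nodes) */
enum dir_state { DIR_UNCACHED, DIR_SHARED, DIR_EXCLUSIVE };

struct dir_entry {
    enum dir_state state;     /* Uncached, Shared or Exclusive                 */
    uint64_t       sharers;   /* bit i set => processor i holds a copy;        */
                              /* in the Exclusive state exactly one bit is set */
};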
Directory Protocol
• No bus and don’t want to broadcast:
– interconnect no longer single arbitration point
– all messages have explicit responses
• Terms: typically 3 processors involved
– Local node where a request originates
– Home node where the memory location of an address
resides
– Remote node has a copy of a cache block, whether
exclusive or shared
• Example messages on next slide:
P = processor number, A = address
8
Directory Protocol Messages
Message type      | Source          | Destination     | Msg Content
Read miss         | Local cache     | Home directory  | P, A
– Processor P reads data at address A; make P a read sharer and arrange to send data back
Write miss        | Local cache     | Home directory  | P, A
– Processor P writes data at address A; make P the exclusive owner and arrange to send data back
Invalidate        | Home directory  | Remote caches   | A
– Invalidate a shared copy at address A
Fetch             | Home directory  | Remote cache    | A
– Fetch the block at address A and send it to its home directory
Fetch/Invalidate  | Home directory  | Remote cache    | A
– Fetch the block at address A and send it to its home directory; invalidate the block in the cache
Data value reply  | Home directory  | Local cache     | Data
– Return a data value from the home memory (read miss response)
Data write-back   | Remote cache    | Home directory  | A, Data
– Write back a data value for address A (invalidate response)
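For illustration only, the seven message types listed above could be encoded as an enum plus a small message struct; the names and the struct layout are assumptions, not part of the protocol definition.

#include <stdint.h>

/* Sketch of the directory protocol messages listed above */
enum dir_msg_type {
    READ_MISS,          /* local cache  -> home directory: P, A        */
    WRITE_MISS,         /* local cache  -> home directory: P, A        */
    INVALIDATE,         /* home directory -> remote caches: A          */
    FETCH,              /* home directory -> remote cache:  A          */
    FETCH_INVALIDATE,   /* home directory -> remote cache:  A          */
    DATA_VALUE_REPLY,   /* home directory -> local cache:   data       */
    DATA_WRITE_BACK     /* remote cache -> home directory:  A, data    */
};

struct dir_msg {
    enum dir_msg_type type;
    int               proc;   /* P: requesting processor, where applicable */
    uint64_t          addr;   /* A: block address, where applicable        */
    /* data payload omitted in this sketch */
};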
State Transition Diagram for an
Individual Cache Block in a
Directory Based System
• States identical to snoopy case; transactions very
similar.
• Transitions caused by read misses, write misses,
invalidates, data fetch requests
• Generates read miss & write miss msg to home
directory.
• Write misses that were broadcast on the bus for
snooping => explicit invalidate & data fetch requests.
• Note: on a write, the cache block is bigger than the data being written, so the full cache block needs to be read first
CPU-Cache State Machine
• State machine for CPU requests, kept for each memory block
• A block is in the Invalid state if it resides only in memory
• States: Invalid, Shared (read only), Exclusive (read/write)
• Transitions:
– Invalid → Shared: CPU read; send Read Miss message to home directory
– Invalid → Exclusive: CPU write; send Write Miss message to home directory
– Shared → Shared: CPU read hit
– Shared → Exclusive: CPU write; send Write Miss message to home directory
– Shared → Invalid: Invalidate message from home directory
– Exclusive → Exclusive: CPU read hit, CPU write hit
– Exclusive → Shared: Fetch; send Data Write Back message to home directory
– Exclusive → Invalid: Fetch/Invalidate; send Data Write Back message to home directory
– Exclusive, CPU read miss (block replaced): send Data Write Back message and read miss to home directory
– Exclusive, CPU write miss (block replaced): send Data Write Back message and Write Miss to home directory
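The cache-side transitions above can be compressed into a small switch statement; this is a minimal sketch under assumed names (state and event enums, puts standing in for real messages), with read/write hits folded into the default return and replacement misses omitted.

#include <stdio.h>

/* Illustrative cache-side FSM for one block */
enum cb_state { CB_INVALID, CB_SHARED, CB_EXCLUSIVE };
enum cb_event { EV_CPU_READ, EV_CPU_WRITE, EV_INVALIDATE, EV_FETCH, EV_FETCH_INVALIDATE };

enum cb_state cache_next_state (enum cb_state s, enum cb_event e)
{
    switch (s) {
    case CB_INVALID:
        if (e == EV_CPU_READ)  { puts("send Read Miss to home directory");  return CB_SHARED;    }
        if (e == EV_CPU_WRITE) { puts("send Write Miss to home directory"); return CB_EXCLUSIVE; }
        break;
    case CB_SHARED:
        if (e == EV_CPU_WRITE)  { puts("send Write Miss to home directory"); return CB_EXCLUSIVE; }
        if (e == EV_INVALIDATE) { return CB_INVALID; }
        break;
    case CB_EXCLUSIVE:
        if (e == EV_FETCH)            { puts("send Data Write Back to home directory"); return CB_SHARED;  }
        if (e == EV_FETCH_INVALIDATE) { puts("send Data Write Back to home directory"); return CB_INVALID; }
        break;
    }
    return s;   /* read/write hits and ignored events keep the current state */
}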
State Transition Diagram for the
Directory
• Same states & structure as the transition diagram for
an individual cache
• 2 actions: update of directory state & send msgs to
satisfy requests
• Tracks all copies of memory block.
• Also indicates an action that updates the sharing set,
Sharers, as well as sending a message.
Directory State Machine
• State machine for directory requests, kept for each memory block
• A block is in the Uncached state if it resides only in memory
• States: Uncached, Shared (read only), Exclusive (read/write)
• Transitions:
– Uncached → Shared: Read miss; Sharers = {P}; send Data Value Reply
– Uncached → Exclusive: Write Miss; Sharers = {P}; send Data Value Reply msg
– Shared → Shared: Read miss; Sharers += {P}; send Data Value Reply
– Shared → Exclusive: Write Miss; send Invalidate to Sharers; then Sharers = {P}; send Data Value Reply msg
– Exclusive → Uncached: Data Write Back; Sharers = {}; write back the block
– Exclusive → Shared: Read miss; Sharers += {P}; send Fetch; send Data Value Reply msg to remote cache; write back the block
– Exclusive → Exclusive: Write Miss; Sharers = {P}; send Fetch/Invalidate; send Data Value Reply msg to remote cache
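The directory-side transitions above, sketched in the same style; the sharer set is the bit vector from the earlier struct, and all names and the puts-based messages are again illustrative assumptions.

#include <stdint.h>
#include <stdio.h>

/* Illustrative directory-side FSM for one memory block */
enum home_state { H_UNCACHED, H_SHARED, H_EXCLUSIVE };
enum home_event { EV_READ_MISS, EV_WRITE_MISS, EV_DATA_WRITE_BACK };

struct home_entry { enum home_state state; uint64_t sharers; };

void dir_handle (struct home_entry *d, enum home_event e, int p)
{
    switch (d->state) {
    case H_UNCACHED:                               /* only read/write misses arrive here  */
        d->sharers = 1ULL << p;                    /* Sharers = {P}                       */
        puts("send Data Value Reply");
        d->state = (e == EV_READ_MISS) ? H_SHARED : H_EXCLUSIVE;
        break;
    case H_SHARED:
        if (e == EV_READ_MISS) {
            d->sharers |= 1ULL << p;               /* Sharers += {P}                      */
            puts("send Data Value Reply");
        } else if (e == EV_WRITE_MISS) {
            puts("send Invalidate to all Sharers");
            d->sharers = 1ULL << p;                /* Sharers = {P}                       */
            puts("send Data Value Reply");
            d->state = H_EXCLUSIVE;
        }
        break;
    case H_EXCLUSIVE:
        if (e == EV_DATA_WRITE_BACK) {
            d->sharers = 0;                        /* Sharers = {}; write block to memory */
            d->state = H_UNCACHED;
        } else if (e == EV_READ_MISS) {
            puts("send Fetch to owner; send Data Value Reply to requestor");
            d->sharers |= 1ULL << p;               /* old owner remains a sharer          */
            d->state = H_SHARED;
        } else if (e == EV_WRITE_MISS) {
            puts("send Fetch/Invalidate to owner; send Data Value Reply to requestor");
            d->sharers = 1ULL << p;                /* P becomes the new owner             */
        }
        break;
    }
}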
Example Directory Protocol
• Message sent to directory causes two actions:
– Update the directory
– More messages to satisfy request
• Block is in Uncached state: the copy in memory is the current value; only possible
requests for that block are:
– Read miss: requesting processor is sent the data from memory and the requestor is made the only
sharing node; the state of the block is set to Shared.
– Write miss: requesting processor is sent the value & becomes the Sharing node.
The block is made Exclusive to indicate that the only valid copy is cached.
Sharers indicates the identity of the owner.
• Block is Shared => the memory value is up-to-date:
– Read miss: requesting processor is sent back the data from memory &
requesting processor is added to the sharing set.
– Write miss: requesting processor is sent the value. All processors in the set
Sharers are sent invalidate messages, & Sharers is set to identity of requesting
processor. The state of the block is made Exclusive.
Example Directory Protocol
• Block is Exclusive: the current value of the block is held in the cache of the processor
identified by the set Sharers (the owner) => three possible directory requests:
– Read miss: the owner processor is sent a data fetch message, causing the state of the block
in the owner's cache to transition to Shared and causing the owner to send the data to the
directory, where it is written to memory and sent back to the requesting processor.
The identity of the requesting processor is added to the set Sharers, which still contains
the identity of the processor that was the owner (since it still has a readable copy).
The state is Shared.
– Data write-back: owner processor is replacing the block and hence must
write it back, making memory copy up-to-date
(the home directory essentially becomes the owner), the block is now
Uncached, and the Sharer set is empty.
– Write miss: the block has a new owner. A message is sent to the old owner, causing
the cache to send the value of the block to the directory from which it is
sent to the requesting processor, which becomes the new owner. Sharers is
set to identity of new owner, and state of block is made Exclusive.
Example
A1 and A2 map to the same cache block.
Sequence of operations:
P1: Write 10 to A1
P1: Read A1
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2
The table below traces the example step by step: the columns show the cache state of Processor 1 and Processor 2 (State, Addr, Value), the messages on the interconnect (Action, Proc., Addr, Value), the directory contents (Addr, State, {Procs}), and the memory value.
step               | P1           | P2           | Interconnect   | Directory        | Memory
P1: Write 10 to A1 |              |              | WrMs P1 A1     | A1 Ex {P1}       |
                   | Excl. A1 10  |              | DaRp P1 A1 0   |                  |
P1: Read A1        | Excl. A1 10  |              |                |                  |
P2: Read A1        |              | Shar. A1     | RdMs P2 A1     |                  |
                   | Shar. A1 10  |              | Ftch P1 A1 10  |                  | 10
                   |              | Shar. A1 10  | DaRp P2 A1 10  | A1 Shar. {P1,P2} | 10
P2: Write 20 to A1 |              | Excl. A1 20  | WrMs P2 A1     |                  | 10
                   | Inv.         |              | Inval. P1 A1   | A1 Excl. {P2}    | 10
P2: Write 40 to A2 |              |              | WrMs P2 A2     | A2 Excl. {P2}    | 0
                   |              |              | WrBk P2 A1 20  | A1 Unca. {}      | 20
                   |              | Excl. A2 40  | DaRp P2 A2 0   | A2 Excl. {P2}    | 0
Implementing a Directory
• We assume operations are atomic, but they are not; reality is much harder; deadlock must be
avoided when the network runs out of buffers
• Optimizations:
– read miss or write miss in Exclusive: send data directly to
requestor from owner vs. 1st to memory and then from
memory to requestor
Intel Sandy Bridge Architecture
• Newest generation of Intel Architecture
• Desktop version integrates the regular processor cores and a graphics processor on one chip
Intel Sandy Bridge
• Sandy Bridge integrates the memory controller, QPI, and graphics processor on the chip
– AMD was first to integrate the memory controller and HyperTransport interface on the chip
• Instruction fetch: decoding variable-length x86 instructions into uops is complex and expensive
– Sandy Bridge introduces a uop cache: a hit in the uop cache bypasses the decoding logic
– The uop cache is organized into 32 sets, 8-way associative, with 6 uops per line (roughly 1.5K uops total)
– Strictly included in the L1 instruction cache
– The predicted fetch address probes the uop cache: on a hit, the instructions bypass the decoding step
Intel Sandy Bridge
• All 256-bit AVX instructions can execute as a single uop
– In contrast to AMD, where they are broken down into two 128-bit operations
– The FP data path is, however, only 128 bits wide on Sandy Bridge
• Functional units are grouped into three domains:
– Integer, SIMD integer, and FP
– Free bypassing within each domain, but a 1-2 cycle penalty for instructions
bypassing between the different domains
• This simplifies the forwarding logic between the domains for rarely used situations
Intel Sandy Bridge
• A ring interconnects the cores,
graphics, and L3 cache
– composed of four different
rings: request, snoop,
acknowledge and a 32B wide
data ring.
– responsible for a distributed
communication protocol that
enforces coherency and
ordering.
Source: http://www.realworldtech.com/page.cfm?ArticleID=RWT091810191937
AMD Istanbul/Magny-Cours processor
Source: http://www.phys.uu.nl/~euroben/reports/web10/amd.php
AMD Interlagos Processor
• First generation of the new Bulldozer architecture
• Two cores form a module
• Each module shares an L1 instruction cache, the floating-point unit (FPU), and the L2 cache
– Saves area and power, allowing more cores to be packed in and higher throughput to be attained
– But leads to some degradation in per-core performance
• All modules in a chip share the L3 cache
AMD Interlagos Processor
Source: http://www.realworldtech.com/page.cfm?ArticleID=RWT082610181333