EECC722 - Shaaban, Lec # 2, Fall 2010, 9-6-2010

Simultaneous Multithreading (SMT)
• An evolutionary processor architecture originally introduced in 1995 by Dean Tullsen at the University of Washington that aims at reducing resource waste in wide-issue processors (superscalars).
• SMT has the potential of greatly enhancing superscalar processor computational capabilities by:
  – Exploiting thread-level parallelism (TLP) in a single processor core, simultaneously issuing, executing and retiring instructions from different threads during the same cycle.
    • A single physical SMT processor core acts as a number of logical processors, each executing a single thread.
  – Providing multiple hardware contexts, hardware thread scheduling and context switching capability.
  – Providing effective long latency hiding.
    • e.g. FP operations, branch misprediction, memory latency.
SMT Issues
• SMT CPU performance gain potential.
• Modifications to superscalar CPU architecture to support SMT.
• SMT performance evaluation vs. fine-grain multithreading, superscalar, and chip multiprocessors.
• Hardware techniques to improve SMT performance:
  – Optimal level one cache configuration for SMT.
  – SMT thread instruction fetch, issue policies.
  – Instruction recycling (reuse) of decoded instructions.
• Software techniques:
  – Compiler optimizations for SMT.
  – Software-directed register deallocation.
  – Operating system behavior and optimization.
• SMT support for fine-grain synchronization.
• SMT as a viable architecture for network processors.
• Current SMT implementation: Intel's Hyper-Threading (2-way SMT): microarchitecture and performance in compute-intensive workloads.
Microprocessor Frequency Trend
1. Frequency used to double each generation.
2. Number of gate delays/clock reduced by ~25% per generation.
3. This leads to deeper pipelines with more stages (e.g. the Intel Pentium 4E has 30+ pipeline stages).
Result: Deeper pipelines, longer stalls, higher CPI (lowers effective performance per cycle).
Reality Check: Clock frequency scaling is slowing down! (Did silicon finally hit the wall?)
[Figure: Microprocessor frequency trend, 1987-2005, for Intel (386, 486, Pentium, Pentium Pro, Pentium II), IBM PowerPC (601, 603, 604, 604+, MPC750), and DEC Alpha (21066, 21064, 21064A, 21164, 21164A, 21264, 21264S) processors. Left axis: clock frequency in MHz (log scale, 10 to 10,000); right axis: gate delays per clock (1 to 100). Processor frequency scaled by roughly 2X per generation while gate delays per clock decreased.]
Why?
1- Power leakage
2- Clock distribution delays
CPU Performance Equation: T = I x CPI x C
(execution Time = Instruction count x Cycles Per Instruction x clock Cycle time)
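As a quick hedged illustration of why deeper pipelines can disappoint (the numbers below are assumed for illustration, not taken from the slides): a higher clock frequency lowers C, but the longer stalls of a deeper pipeline raise CPI, so the net gain in execution time T can be small.

```latex
% Illustrative assumed numbers, I = 10^9 instructions in both cases:
%   shallow pipeline: 2 GHz (C = 0.50 ns), CPI = 1.0
%   deeper pipeline:  3 GHz (C = 0.33 ns), CPI = 1.4 (longer stalls)
\begin{align*}
T_{\text{shallow}} &= I \times CPI \times C = 10^{9} \times 1.0 \times 0.50\,\text{ns} = 0.50\,\text{s}\\
T_{\text{deep}}    &= 10^{9} \times 1.4 \times 0.33\,\text{ns} \approx 0.47\,\text{s}
\end{align*}
% A 50% frequency increase buys only about 7% less execution time here.
```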
Possible Solutions?
- Exploit Thread-Level Parallelism (TLP) at the chip level (SMT/CMP)
- Utilize/integrate more-specialized computing elements other than GPPs
[Figure: A single hardware context (PC, SP, register file) driving a pipeline of Fetch, Decode, Execute, Memory, and Writeback stages for instructions i and i+1, backed by the memory hierarchy (and its management).]
CPU Architecture Evolution: Single-Threaded/Superscalar Architectures
• Fetch, issue, execute, etc. more than one instruction per cycle (CPI < 1).
• Limited by instruction-level parallelism (ILP), due to single-thread limitations.
Source: Simultaneous Multithreading: Maximizing On-Chip Parallelism, Dean Tullsen et al., Proceedings of the 22nd Annual International Symposium on Computer Architecture, June 1995, pages 392-403.
Average IPC = 1.5 instructions/cycle vs. an 8 instructions/cycle issue rate
Sources of Unused Issue Cycles in an 8-issue Superscalar Processor.
Source: Simultaneous Multithreading: Maximizing On-Chip Parallelism, Dean Tullsen et al., Proceedings of the 22nd Annual International Symposium on Computer Architecture, June 1995, pages 392-403.
Superscalar Architecture Limitations:
All possible causes of wasted issue slots, and latency-hiding or latency-reducing techniques that can reduce the number of cycles wasted by each cause.
Main Issue: One thread leads to limited ILP (cannot fill issue slots).
Solution: Exploit Thread-Level Parallelism (TLP) within a single microprocessor chip:
• Simultaneous Multithreaded (SMT) Processor:
  - The processor issues and executes instructions from a number of threads, creating a number of logical processors within a single physical processor, e.g. Intel's Hyper-Threading (HT), where each physical processor executes instructions from two threads.
AND/OR
• Chip-Multiprocessors (CMPs):
  - Integrate two or more complete processor cores on the same chip (die)
  - Each core runs a different thread (or program)
  - Limited ILP is still a problem in each core (Solution: combine this approach with SMT)
Single Chip Multiprocessors (CMPs)
• Strengths:
  – Create a single processor block and duplicate it.
  – Exploits Thread-Level Parallelism.
  – Takes a lot of the dependency analysis out of HW and places the focus on smart compilers.
• Weaknesses:
  – Performance within each processor is still limited by individual thread performance (ILP).
  – High power requirements using current VLSI processes:
    • Almost entire processor cores are replicated on chip.
    • May run at lower clock rates to reduce heat/power consumption.
Advanced CPU Architectures: Single Chip Multiprocessors (CMPs),
e.g. IBM Power 4/5, Intel Pentium D, Core Duo, Core 2 (Conroe), Core i7
Fine-grained or Traditional Multithreaded Processors
• Multiple hardware contexts (PC, SP, and registers).
• Only one context or thread issues instructions each cycle.
• Performance limited by the Instruction-Level Parallelism (ILP) within each individual thread:
  – Can reduce some of the vertical issue-slot waste.
  – No reduction in horizontal issue-slot waste (see the waste-accounting sketch after this slide).
• Example Architecture: The Tera Computer System
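To make the vertical/horizontal waste distinction concrete, here is a minimal toy issue-slot accounting sketch (a hypothetical helper with a made-up trace, not from the cited papers): each cycle of a W-wide machine offers W issue slots; a cycle in which nothing issues is vertical waste, while unused slots in a partially filled cycle are horizontal waste. Fine-grain multithreading can remove many all-empty cycles, but SMT attacks both kinds of waste.

```python
# Toy issue-slot accounting for a W-wide processor (illustrative only).
# issued_per_cycle[i] = number of instructions actually issued in cycle i.

def waste_breakdown(issued_per_cycle, width=8):
    vertical = 0    # slots lost in completely empty cycles
    horizontal = 0  # slots lost in partially filled cycles
    for issued in issued_per_cycle:
        if issued == 0:
            vertical += width             # whole cycle wasted
        else:
            horizontal += width - issued  # leftover slots in a used cycle
    total_slots = width * len(issued_per_cycle)
    return vertical, horizontal, total_slots

# Hypothetical single-thread trace on an 8-issue machine: stalls (0s) and low ILP.
single_thread = [3, 0, 0, 2, 4, 0, 1, 3]
v, h, total = waste_breakdown(single_thread)
print(f"vertical={v}, horizontal={h}, used={total - v - h} of {total} slots")
# Fine-grain MT: another ready thread issues during the stall cycles
# (vertical waste shrinks), but each cycle still issues from only one thread,
# so the horizontal waste within each cycle remains.
```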
Advanced CPU Architectures: Fine-grain or Traditional Multithreaded Processors
The Tera (Cray) Computer System
• The Tera computer system is a shared-memory multiprocessor that can accommodate up to 256 processors.
• Each Tera processor is fine-grain multithreaded:
– Each processor can issue one 3-operation Long Instruction Word (LIW) every 3 ns cycle (333MHz) from among as many as 128 distinct instruction streams (hardware threads), thereby hiding up to 128 cycles (384 ns) of memory latency.
– In addition, each stream can issue as many as eight memory references without waiting for earlier ones to finish, further augmenting the memory latency tolerance of the processor.
– A stream implements a load/store architecture with three addressing modes and 31 general-purpose 64-bit registers.
– The instructions are 64 bits wide and can contain three operations: a memory reference operation (M-unit operation or simply M-op for short), an arithmetic or logical operation (A-op), and a branch or simple arithmetic or logical operation (C-op).
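A minimal sketch of the latency-hiding idea behind this design (the numbers, function name, and structure below are simplified assumptions, not the actual Tera microarchitecture): with one instruction issued per cycle round-robin over the ready streams, a stream that starts a long memory reference is simply skipped until it completes; with enough streams, the single issue slot stays busy almost every cycle.

```python
# Toy fine-grain multithreading model: one issue slot per cycle, round-robin
# over hardware streams; a stream blocks for mem_latency cycles after issuing
# a memory reference. Illustrative only (not the real Tera pipeline).
import random

def utilization(num_streams, mem_latency=128, mem_fraction=0.3, cycles=10_000):
    random.seed(0)
    ready_at = [0] * num_streams          # cycle at which each stream is ready again
    issued = 0
    rr = 0                                # round-robin pointer
    for cycle in range(cycles):
        for k in range(num_streams):      # find the next ready stream, round-robin
            s = (rr + k) % num_streams
            if ready_at[s] <= cycle:
                issued += 1
                if random.random() < mem_fraction:     # this op is a memory reference
                    ready_at[s] = cycle + mem_latency  # stream stalls until it returns
                rr = (s + 1) % num_streams
                break
    return issued / cycles

for t in (1, 8, 32, 128):
    print(f"{t:3d} streams -> issue-slot utilization {utilization(t):.2f}")
```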
SMT: Simultaneous Multithreading
• Multiple hardware contexts (or threads) running at the same time (HW context: registers, PC, SP, etc.).
• A single physical SMT processor core acts (and reports to the operating system) as a number of logical processors each executing a single thread
• Reduces both horizontal and vertical waste by having multiple threads keeping functional units busy during every cycle.
• Builds on top of current time-proven advancements in CPU design: superscalar, dynamic scheduling, hardware speculation, dynamic HW branch prediction, multiple levels of cache, hardware pre-fetching etc.
• Enabling Technology: VLSI logic density on the order of hundreds of millions of transistors per chip.
  – The potential performance gain is much greater than the increase in chip area and power consumption needed to support SMT.
• Improved Performance/Chip Area/Watt (Computational Efficiency) vs. single-threaded superscalar cores:
  – A 2-way SMT processor requires a 10-15% increase in area, vs. ~100% increase for a dual-core CMP.
SMT
• With multiple threads running, penalties from long-latency operations, cache misses, and branch mispredictions will be hidden:
  – Reduction of both horizontal and vertical waste, and thus an improved Instructions Issued Per Cycle (IPC) rate.
• Functional units are shared among all contexts during every cycle:
  – More complicated register read and writeback stages.
• More threads issuing to functional units results in higher resource utilization.
• CPU resources may have to be resized to accommodate the additional demands of the multiple threads running (e.g. caches, TLBs, branch prediction tables, rename registers).
context = hardware thread
Thus SMT is an effective long latency-hiding technique
Modifications to Superscalar CPUs to Support SMT
Necessary Modifications:
• Multiple program counters and some mechanism by which one fetch unit selects one each cycle (thread instruction fetch/issue policy).
• A separate return stack for each thread for predicting subroutine return destinations.
• Per-thread instruction issue/retirement, instruction queue flush, and trap mechanisms.
• A thread id with each branch target buffer entry to avoid predicting phantom branches.
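As a rough structural illustration of these necessary modifications (a simplified sketch with assumed class and field names, not an actual design), the per-thread front-end state and a thread-tagged BTB lookup might look like this; tagging each BTB entry with the owning thread id prevents one thread from hitting on another thread's branch target and predicting a phantom branch.

```python
# Simplified per-thread front-end state for an SMT core (assumed field names).
from dataclasses import dataclass, field

@dataclass
class HardwareContext:
    thread_id: int
    pc: int = 0
    return_stack: list = field(default_factory=list)  # per-thread return address stack

class BranchTargetBuffer:
    """Direct-mapped BTB whose entries are tagged with a thread id."""
    def __init__(self, entries=512):
        self.entries = entries
        self.table = {}  # index -> (tag_pc, thread_id, target)

    def lookup(self, pc, thread_id):
        entry = self.table.get(pc % self.entries)
        if entry and entry[0] == pc and entry[1] == thread_id:
            return entry[2]   # predicted target for *this* thread
        return None           # no phantom prediction from another thread's entry

    def update(self, pc, thread_id, target):
        self.table[pc % self.entries] = (pc, thread_id, target)

# Usage: two threads whose branches alias to the same BTB index do not
# pollute each other's predictions because the thread id is part of the match.
btb = BranchTargetBuffer(entries=16)
btb.update(pc=0x40, thread_id=0, target=0x100)
print(btb.lookup(pc=0x40, thread_id=1))   # None -> no phantom branch for thread 1
```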
Modifications to Improve SMT Performance:
• A larger rename register file, to support logical registers for all threads plus additional registers for register renaming (this may require additional pipeline stages).
• A higher available main memory fetch bandwidth may be required.
• Larger data TLB with more entries to compensate for increased virtual to physical address translations.
• Improved cache to offset the cache performance degradation due to cache sharing among the threads and the resulting reduced locality.
– e.g. private per-thread vs. shared L1 caches.
Source: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor, Dean Tullsen et al., Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996, pages 191-202.
Example SMT Vs. Superscalar Pipeline
• The pipeline of (a) a conventional superscalar processor and (b) that pipeline modified for an SMT processor, along with some implications of those pipelines.
Source: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor, Dean Tullsen et al., Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996, pages 191-202.
Based on the Alpha 21164
SMT-2
Two extra pipeline stages added for register read/write to account for the size increase of the register file.
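A minimal sketch contrasting the two pipelines described above (the stage names below are simplified assumptions based on the figure's description, not the exact labels from the paper): the enlarged register file stretches register access across extra stages, which in turn lengthens the branch misprediction penalty.

```python
# Assumed, simplified stage lists for the two pipelines (illustrative only).
superscalar_pipeline = [
    "Fetch", "Decode", "Rename", "Queue", "Reg Read", "Execute", "Commit",
]
smt_pipeline = [
    "Fetch", "Decode", "Rename", "Queue",
    "Reg Read 1", "Reg Read 2",      # register read split over two stages
    "Execute",
    "Reg Write",                     # extra write stage for the larger register file
    "Commit",
]

def mispredict_penalty(pipeline, resolve_stage="Execute"):
    # Cycles of fetched work discarded when a branch resolves in `resolve_stage`.
    return pipeline.index(resolve_stage)

print(len(smt_pipeline) - len(superscalar_pipeline), "extra stages")
print("misprediction penalty:",
      mispredict_penalty(superscalar_pipeline), "vs",
      mispredict_penalty(smt_pipeline), "cycles")
```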
SMT Performance Comparison
• Instruction throughput (IPC) from simulations by Eggers et al. at the University of Washington, using both multiprogramming and parallel workloads.
• The following machine models for a multithreaded CPU that can issue 8 instructions per cycle differ in how threads use issue slots and functional units:
• Fine-Grain Multithreading:
  – Only one thread issues instructions each cycle, but it can use the entire issue width of the processor. This hides all sources of vertical waste, but does not hide horizontal waste.
• SM: Full Simultaneous Issue:
– This is a completely flexible simultaneous multithreaded superscalar: all eight threads compete for each of the 8 issue slots each cycle. This is the least realistic model in terms of hardware complexity, but provides insight into the potential for simultaneous multithreading. The following models each represent restrictions to this scheme that decrease hardware complexity.
• SM: Single Issue, SM: Dual Issue, and SM: Four Issue:
  – These three models limit the number of instructions each thread can issue, or have active in the scheduling window, each cycle.
  – For example, in an SM: Dual Issue processor, each thread can issue a maximum of 2 instructions per cycle; therefore, a minimum of 4 threads would be required to fill the 8 issue slots in one cycle (see the helper sketch after this list).
• SM: Limited Connection:
  – Each hardware context is directly connected to exactly one of each type of functional unit.
– For example, if the hardware supports eight threads and there are four integer units, each integer unit could receive instructions from exactly two threads.
– The partitioning of functional units among threads is thus less dynamic than in the other models, but each functional unit is still shared (the critical factor in achieving high utilization).
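The arithmetic behind the issue-restricted models can be captured in a one-line helper (a hypothetical illustration, not code from the study): with an 8-slot issue width and a per-thread cap of k instructions per cycle, at least ceil(8/k) threads must issue in the same cycle to fill the machine.

```python
import math

def min_threads_to_fill(issue_width, per_thread_issue_limit):
    """Minimum number of threads needed to fill every issue slot in one cycle."""
    return math.ceil(issue_width / per_thread_issue_limit)

for name, limit in [("SM: Single Issue", 1), ("SM: Dual Issue", 2),
                    ("SM: Four Issue", 4), ("SM: Full Simultaneous Issue", 8)]:
    print(f"{name:28s} -> at least {min_threads_to_fill(8, limit)} thread(s) per cycle")
```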
Possible Machine Models for an 8-way Multithreaded Processor
Source: Simultaneous Multithreading: Maximizing On-Chip Parallelism, Dean Tullsen et al., Proceedings of the 22nd Annual International Symposium on Computer Architecture, June 1995, pages 392-403.
A comparison of key hardware complexity features of the various models (H=high complexity).
The comparison takes into account:
– the number of ports needed for each register file,
– the dependence checking for a single thread to issue multiple instructions,
– the amount of forwarding logic,
– and the difficulty of scheduling issued instructions onto functional units.
Source: Simultaneous Multithreading: Maximizing On-Chip Parallelism, Dean Tullsen et al., Proceedings of the 22nd Annual International Symposium on Computer Architecture, June 1995, pages 392-403.
Simultaneous Vs. Fine-Grain Multithreading Performance
Instruction throughput as a function of the number of threads. (a)-(c) show the throughput by thread priority for particular models, and (d) shows the total throughput for all threads for each of the six machine models. The lowest segment of each bar is the contribution of the highest priority thread to the total throughput.
Source: Simultaneous Multithreading: Maximizing On-Chip Parallelism, Dean Tullsen et al., Proceedings of the 22nd Annual International Symposium on Computer Architecture, June 1995, pages 392-403.
• Results for the multiprocessor (MP) vs. simultaneous multithreading (SM) comparisons. The multiprocessor always has one functional unit of each type per processor. In most cases the SM processor has the same total number of each FU type as the MP.
Simultaneous Multithreading (SM) Vs. Single-Chip Multiprocessing (MP)
Source: Simultaneous Multithreading: Maximizing On-Chip Parallelism, Dean Tullsen et al., Proceedings of the 22nd Annual International Symposium on Computer Architecture, June 1995, pages 392-403.
Impact of Level 1 Cache Sharing on SMT Performance
• Results for the simulated cache configurations, shown relative to the throughput (instructions per cycle) of the 64s.64p configuration.
• The caches are specified as:
  [total I cache size in KB][private or shared].[total D cache size in KB][private or shared]
  For instance, 64p.64s has eight private 8 KB I caches and a shared 64 KB data cache.
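A small hedged helper that decodes this naming scheme (hypothetical code, written only to illustrate the notation): with a total size shared across 8 hardware threads, 's' means one shared cache and 'p' means the total is split into eight private per-thread caches, so 64p corresponds to eight 8 KB caches.

```python
# Decode cache-configuration names like "64p.64s" (illustrative helper only).
def decode_cache_config(name, num_threads=8):
    icache, dcache = name.split(".")
    def describe(spec, kind):
        size_kb, mode = int(spec[:-1]), spec[-1]
        if mode == "s":
            return f"one shared {size_kb} KB {kind} cache"
        return (f"{num_threads} private {size_kb // num_threads} KB "
                f"{kind} caches ({size_kb} KB total)")
    return describe(icache, "I"), describe(dcache, "D")

print(decode_cache_config("64p.64s"))
# -> ('8 private 8 KB I caches (64 KB total)', 'one shared 64 KB D cache')
```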
Source: Simultaneous Multithreading: Maximizing On-Chip Parallelism, Dean Tullsen et al., Proceedings of the 22nd Annual International Symposium on Computer Architecture, June 1995, pages 392-403.
SMT-1
Best overall performance of the configurations considered is achieved by 64s.64s (64 KB shared instruction cache, 64 KB shared data cache).
64s.64p: 64 KB instruction cache shared, 64 KB data cache private (8 KB per thread).
The Impact of Increased Multithreading on Some Low-Level Metrics for the Base SMT Architecture
Source: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor, Dean Tullsen et al., Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996, pages 191-202.
SMT-2
Supporting more threads may lead to more demand on hardware resources (e.g. here the D-cache and I-cache miss rates increased substantially, and thus these resources need to be resized).
Possible SMT Instruction Fetch Policies
• Round Robin (RR):
  – Fetch instructions from Thread 1, then Thread 2, then Thread 3, etc. (e.g. RR.1.8: each cycle one thread fetches up to eight instructions; RR.2.4: each cycle two threads fetch up to four instructions each).
• BR-Count:
  – Give highest priority to those threads that are least likely to be on a wrong path, by counting branch instructions that are in the decode stage, the rename stage, and the instruction queues, favoring those with the fewest unresolved branches.
• MISS-Count:
  – Give priority to those threads that have the fewest outstanding data cache misses.
• ICOUNT:
  – Highest priority assigned to the thread with the lowest number of instructions in the static portion of the pipeline (decode, rename, and the instruction queues); see the sketch below.
• IQPOSN:
  – Give lowest priority to those threads with instructions closest to the head of either the integer or floating-point instruction queues (the oldest instruction is at the head of the queue).
Source: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor, Dean Tullsen et al., Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996, pages 191-202.
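A minimal sketch of how the ICOUNT.2.8 heuristic could be expressed (simplified, hypothetical code, not the simulator used in the paper): each cycle the fetch unit picks the two threads with the fewest instructions in the decode/rename/queue stages and lets each fetch up to eight instructions, biasing fetch toward threads that are moving instructions through the machine quickly.

```python
# Simplified ICOUNT.num_threads.num_insts fetch selection (illustrative only).
def icount_select(preissue_counts, num_threads=2, max_fetch=8):
    """preissue_counts: dict thread_id -> instructions in decode/rename/queues.
    Returns a list of (thread_id, fetch_budget) pairs for this cycle."""
    # Prefer threads with the fewest instructions in the pre-issue stages.
    chosen = sorted(preissue_counts, key=lambda t: preissue_counts[t])[:num_threads]
    return [(t, max_fetch) for t in chosen]

# Hypothetical snapshot of 4 hardware threads:
counts = {0: 14, 1: 3, 2: 9, 3: 5}
print(icount_select(counts))   # -> [(1, 8), (3, 8)]  threads 1 and 3 fetch this cycle
```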
Instruction Throughput For Round Robin Instruction Fetch Scheduling
Source: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor, Dean Tullsen et al., Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996, pages 191-202.
SMT-2
Best overall instruction throughput achieved using round robin RR.2.8 (in each cycle two threads each fetch a block of 8 instructions).
Source: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor, Dean Tullsen et al., Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996, pages 191-202.
SMT-2
Workload: SPEC92
All other fetch heuristics provide a speedup over round robin. Instruction Count (ICOUNT.2.8) provides the most improvement: 5.3 instructions/cycle vs. 2.5 for the unmodified superscalar.
ICOUNT.2.8
ICOUNT: Highest priority assigned to thread with the lowest number of instructions in static portion of pipeline (decode, rename, and the instruction queues).
Possible SMT Instruction Issue Policies
• OLDEST FIRST: Issue the oldest instructions (those deepest into the instruction queue); this is the default.
• OPT LAST and SPEC LAST: Issue optimistic and speculative instructions after all others have been issued.
• BRANCH FIRST: Issue branches as early as possible in order to identify mispredicted branches quickly.
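For completeness, here is the default OLDEST FIRST issue policy in hypothetical code (a sketch, not from the paper): ready instructions are drawn from the queue in age order until the issue bandwidth is used up.

```python
# Simplified OLDEST FIRST issue selection (illustrative only).
def issue_oldest_first(instruction_queue, issue_width=8):
    """instruction_queue: list of (age, ready, instr) tuples; larger age = older.
    Returns up to issue_width ready instructions, oldest first."""
    ready = [entry for entry in instruction_queue if entry[1]]
    ready.sort(key=lambda entry: entry[0], reverse=True)   # oldest first
    return [instr for _, _, instr in ready[:issue_width]]

queue = [(5, True, "add r1"), (9, False, "ld r2"), (7, True, "mul r3"), (2, True, "sub r4")]
print(issue_oldest_first(queue, issue_width=2))   # -> ['mul r3', 'add r1']
```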
Source: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor, Dean Tullsen et al., Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996, pages 191-202.
SMT-2
Instruction issue bandwidth is not a bottleneck in SMT as shown above
ICOUNT.2.8 Fetch policy used for all issue policies above