Page 1:

Graphical Processing Units and CUDA

Lecture for CPSC 5155

Edward Bosworth, Ph.D.

Computer Science Department

Columbus State University

Page 2:

The Graphics Coprocessor

• From the earliest VGA designs, the graphics unit has been built as a special-purpose processor, attached to the CPU by a high-speed I/O-style link.

• A GPU need not implement many of the CPU's instructions, which simplifies its design.

• A modern NVIDIA graphics system would include a high-performance dual-processor main CPU, a few GB of local memory, a high-end disk drive, and one or more graphics cards.

Page 3:


Graphics in the System

Page 4:

Slide credit: CS248 Lecture 14, Kurt Akeley, Fall 2007.

Why does graphics hardware exist?

• Special-purpose hardware tends to disappear over time:
  – Lisp machines and CAD workstations of the 1980s
  – CISC CPUs, e.g., the Intel iAPX 432 (circa 1982)

[Images: the iAPX 432 (www.dvorak.org/blog/) and Symbolics Lisp Machines, circa 1984 (www.abstractscience.freeserve.co.uk/symbolics/photos/)]

Page 5:


GPU Architectures

• Processing is highly data-parallel
  – GPUs are highly multithreaded
  – Thread switching is used to hide memory latency, with less reliance on multi-level caches
  – Graphics memory is wide and high-bandwidth
• Trend toward general-purpose GPUs
  – Heterogeneous CPU/GPU systems: CPU for sequential code, GPU for parallel code
• Programming languages/APIs
  – DirectX, OpenGL
  – C for Graphics (Cg), High Level Shader Language (HLSL)
  – Compute Unified Device Architecture (CUDA)

Page 6:

Slide credit: CS248 Lecture 14, Kurt Akeley, Fall 2007.

Why does graphics hardware exist?

Graphics acceleration has been around for 40 years. Why do GPUs remain? A confluence of four things:

• Performance differentiation
  – GPUs are much faster than CPUs at 3-D rendering tasks.
• Workload sufficiency
  – The accelerated 3-D rendering tasks make up a significant portion of the overall processing, so Amdahl's law doesn't limit the resulting performance increase.
• Strong market demand
  – Customer demand for 3-D graphics performance is strong, driven by the games market.
• Ubiquity
  – With the help of standardized APIs/architectures (OpenGL and Direct3D), GPUs have achieved ubiquity in the PC market.
  – Inertia now works in favor of continued graphics hardware.

Page 7:

GPU and GPGPU

• GPU stands for graphics processing unit.

• GPU development was originally driven by the demand for better computer-graphics performance.

• GPUs were originally designed as graphics-accelerator chips to help the CPU.

• General-Purpose GPU (GPGPU) programming refers to the now-common case where the GPU is used to accelerate other (non-graphical) calculations.

Page 8:

GPU Evolution (1)

• VGA (Video Graphics Array) controllers were originally a memory controller and display generator connected to DRAM.

• Variations in the 1990s added more functionality.

• Circa 1997, 3D accelerator functions appeared:
  – Triangle setup and rasterization
  – Texture mapping and shading (decals)

• The term "GPU" was coined circa 2000, when the typical graphics chip already did most of the standard graphics-pipeline operations.

Page 9:

GPU Evolution (2)

• Programmable processor cores replaced fixed dedicated logic.

• GPUs became massively parallel processors.

• Floating point, and more recently double precision, is supported.

• Hundreds of cores, thousands of threads…

• GPUs have recently become programmable in C/C++ variants such as CUDA, and in OpenCL…

Page 10:

Origin of CUDA

• The Compute Unified Device Architecture, developed by NVIDIA Corporation, arose from a series of experiments in the early 2000s.

• Graphics processors were becoming very fast.

• It was discovered that many numerical simulation problems could be forced into a form that could be adapted to execute on a graphics card.

• The difficulty was that the GPU had to be controlled using an API designed for graphics.

Page 11:

GPGPU and CUDA

• GPGPU stands for General-Purpose computation on a Graphics Processing Unit.

• As mentioned above, this style used the traditional graphics API and graphics pipeline in a way that was only accidentally useful.

• CUDA was developed intentionally to allow direct access to the graphics hardware, with programming in a variant of C/C++.

Page 12:

GPU Trends

• Implement OpenGL and DirectX

• New GPUs every 12-18 months

• Coming together of parallel computing and graphics in a new and exciting way

• Heterogeneous computing:

– Data parallelism on the GPU

– More coarse-grained parallelism on the (multi-core) CPU


Page 13:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

Parallel Computing on a GPU

• 8-series GPUs deliver 25 to 200+ GFLOPS on compiled parallel C applications
  – Available in laptops, desktops, and clusters
• GPU parallelism is doubling every year
• Programming model scales transparently
• Programmable in C with CUDA tools
• Multithreaded SPMD model uses application data parallelism and thread parallelism

[Images: GeForce 8800, Tesla S870, Tesla D870]

Page 14:

Example: NVIDIA Tesla

[Figure: a Tesla GPU; each streaming multiprocessor contains 8 streaming processors.]

Page 15:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE408, University of Illinois, Urbana-Champaign.

A Fixed-Function GPU Pipeline

[Figure: the CPU feeds the GPU host interface; work then flows through vertex control (backed by a vertex cache), VS/T&L, triangle setup, raster, shader (backed by a texture cache), ROP, and FBI into frame buffer memory.]

Page 16:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE408, University of Illinois, Urbana-Champaign.

Programmable Vertex and Pixel Processors

[Figure: an example of separate vertex and fragment processors in a programmable graphics pipeline. A 3D application or game sends 3D API commands through the 3D API (OpenGL or Direct3D); across the CPU-GPU boundary, the GPU front end receives the GPU command and data stream. Pre-transformed vertices arrive via a vertex index stream at the programmable vertex processor, which emits transformed vertices to primitive assembly. The assembled polygons, lines, and points pass to rasterization & interpolation, which produces a pixel location stream of rasterized pre-transformed fragments for the programmable fragment processor. Its transformed fragments go to the raster operations stage, which writes pixel updates to the framebuffer.]

Page 17:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE408, University of Illinois, Urbana-Champaign.

Unified Graphics Pipeline

[Figure: the G80 unified pipeline. The host feeds a data assembler and thread-issue units for vertex, geometry, and pixel threads (with setup/raster/ZCull for pixels). A single thread processor array of streaming-processor (SP) pairs, each pair sharing an L1 cache and a texture filter (TF) unit, executes all thread types; the SPs reach frame-buffer (FB) memory partitions through L2 caches.]

Page 18:

Slide credit: CS248 Lecture 14, Kurt Akeley, Fall 2007.

Multi-threading hides latency

struct {
    float x, y, z, w;
    float r, g, b, a;
} vertex;

[Figure: threads move between a ready-to-run pool and a blocked pool. During instruction fetch and execute, a memory reference (or a resulting data dependency) blocks the thread; when the memory data becomes available and the dependency is resolved, the thread returns to the ready-to-run pool. The processor stalls if no threads are ready to run, a possible result of large thread context (too many live registers).]

Page 19:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

Overview

• CUDA programming model: basic concepts and data types
• CUDA application programming interface: the basics
• Simple examples to illustrate basic concepts and functionality
• Performance features will be covered later

Page 20:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

CUDA – C with no shader limitations!

• An integrated host+device C application program
  – Serial or modestly parallel parts in host C code
  – Highly parallel parts in device SPMD kernel C code

Serial Code (host)
    . . .
Parallel Kernel (device):  KernelA<<<nBlk, nTid>>>(args);
Serial Code (host)
    . . .
Parallel Kernel (device):  KernelB<<<nBlk, nTid>>>(args);
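To make the pattern concrete, here is a minimal compilable sketch of the same structure, with serial host code alternating with two kernel launches. The kernel bodies and sizes are invented for illustration; the slide leaves them unspecified.

__global__ void KernelA(float *x) { x[blockIdx.x * blockDim.x + threadIdx.x] += 1.0f; }
__global__ void KernelB(float *x) { x[blockIdx.x * blockDim.x + threadIdx.x] *= 2.0f; }

int main() {
    const int nBlk = 8, nTid = 128;              // grid and block sizes (arbitrary)
    float *d_x;
    cudaMalloc((void**)&d_x, nBlk * nTid * sizeof(float));
    cudaMemset(d_x, 0, nBlk * nTid * sizeof(float)); // zero-fill in lieu of a real upload
    // ... serial code (host) ...
    KernelA<<<nBlk, nTid>>>(d_x);                // parallel kernel (device)
    // ... serial code (host) ...
    KernelB<<<nBlk, nTid>>>(d_x);                // parallel kernel (device)
    cudaDeviceSynchronize();                     // wait for device work to finish
    // ... serial code (host): copy results back, then free ...
    cudaFree(d_x);
    return 0;
}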

Page 21:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

CUDA Devices and Threads

• A compute device:
  – Is a coprocessor to the CPU, or host
  – Has its own DRAM (device memory)
  – Runs many threads in parallel
  – Is typically a GPU, but can also be another type of parallel processing device
• Data-parallel portions of an application are expressed as device kernels, which run on many threads
• Differences between GPU and CPU threads:
  – GPU threads are extremely lightweight, with very little creation overhead
  – A GPU needs thousands of threads for full efficiency; a multi-core CPU needs only a few

Page 22:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

Extended C

• Declspecs: global, device, shared, local, constant
• Keywords: threadIdx, blockIdx
• Intrinsics: __syncthreads
• Runtime API: memory, symbol, and execution management
• Function launch

__device__ float filter[N];

__global__ void convolve (float *image) {
    __shared__ float region[M];
    ...
    region[threadIdx.x] = image[i];
    __syncthreads();
    ...
    image[j] = result;
}

// Allocate GPU memory
float *myimage;
cudaMalloc((void**)&myimage, bytes);

// 100 blocks, 10 threads per block
convolve<<<100, 10>>>(myimage);
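The fragment above elides its surrounding details. As a self-contained sketch of __shared__, __syncthreads(), and the launch syntax, here is a small kernel in which each block reverses its own slice of an array; the kernel name and sizes are invented:

#define BLOCK 256

__global__ void reverseBlock(float *data) {
    __shared__ float tile[BLOCK];            // per-block shared memory
    int t = threadIdx.x;
    int base = blockIdx.x * BLOCK;
    tile[t] = data[base + t];                // stage this block's slice
    __syncthreads();                         // barrier: tile is fully loaded
    data[base + t] = tile[BLOCK - 1 - t];    // write back the mirrored element
}

// Launch example: 100 blocks, BLOCK threads per block
// reverseBlock<<<100, BLOCK>>>(d_data);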

Page 23:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

[Figure: the CUDA compilation flow. Integrated source (foo.cu), written in Extended C, goes through cudacc (EDG C/C++ front end and Open64 global optimizer), which splits it into CPU host code (foo.cpp), compiled by gcc or cl, and GPU assembly (foo.s), which OCG turns into G80 SASS (foo.sass).]

Mark Murphy, "NVIDIA's Experience with Open64," www.capsl.udel.edu/conferences/open64/2008/Papers/101.doc

Page 24:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

Arrays of Parallel Threads

• A CUDA kernel is executed by an array of threads
  – All threads run the same code (SPMD)
  – Each thread has an ID that it uses to compute memory addresses and make control decisions

[Figure: threads with threadID 0 through 7, each executing:]

float x = input[threadID];
float y = func(x);
output[threadID] = y;

Page 25:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

Thread Blocks: Scalable Cooperation

• Divide the monolithic thread array into multiple blocks (see the sketch below)
  – Threads within a block cooperate via shared memory, atomic operations, and barrier synchronization
  – Threads in different blocks cannot cooperate

[Figure: Thread Block 0, Thread Block 1, …, Thread Block N-1, each with threads 0 through 7 running the same body:]

float x = input[threadID];
float y = func(x);
output[threadID] = y;
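With multiple blocks, each thread typically forms a unique global index by combining its block ID with its thread ID. A minimal sketch (the index arithmetic is the standard CUDA idiom; the arithmetic body stands in for the slide's func):

__global__ void apply(float *input, float *output) {
    // Unique global index in 0 .. gridDim.x*blockDim.x - 1
    int threadID = blockIdx.x * blockDim.x + threadIdx.x;
    float x = input[threadID];
    float y = 2.0f * x + 1.0f;               // stand-in for func(x)
    output[threadID] = y;
}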

Page 26:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

CUDA Memory Model Overview

• Global memory
  – Main means of communicating R/W data between host and device
  – Contents visible to all threads
  – Long-latency access
• We will focus on global memory for now
  – Constant and texture memory will come later

[Figure: a grid with Block (0,0) and Block (1,0); each block has its own shared memory, each thread its own registers, and the host and all threads access global memory.]

Page 27:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

CUDA Device Memory Allocation

• cudaMalloc()
  – Allocates an object in device global memory
  – Requires two parameters: the address of a pointer to the allocated object, and the size of the allocated object
• cudaFree()
  – Frees an object from device global memory; its parameter is the pointer to the freed object

[Figure: the same grid/block/shared-memory/register diagram as on the previous slide.]
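A short usage sketch, assuming a 64-by-64 matrix of floats (the name Md and the size are illustrative; error checking is omitted):

float *Md;
int size = 64 * 64 * sizeof(float);          // a 64x64 single-precision matrix
cudaMalloc((void**)&Md, size);               // first arg: address of the pointer
// ... use Md in kernels ...
cudaFree(Md);                                // release device global memory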

Page 28:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

CUDA Host-Device Data Transfer

• cudaMemcpy()
  – Memory data transfer
  – Requires four parameters: pointer to destination, pointer to source, number of bytes copied, and type of transfer
  – Transfer types: host to host, host to device, device to host, device to device
• Asynchronous transfer

[Figure: the same memory-model diagram as on the previous slides.]
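A round-trip sketch using the same illustrative names as above; the direction constant is the fourth parameter:

float M[64 * 64];                            // host matrix
float *Md;                                   // device matrix
int size = sizeof(M);
cudaMalloc((void**)&Md, size);
cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);   // host to device
// ... run kernels on Md ...
cudaMemcpy(M, Md, size, cudaMemcpyDeviceToHost);   // device to host
cudaFree(Md);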

Page 29:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

CUDA Function Declarations

                                    Executed on the:    Only callable from the:
  __device__ float DeviceFunc()     device              device
  __global__ void KernelFunc()      device              host
  __host__ float HostFunc()         host                host

• __global__ defines a kernel function
  – Must return void
• __device__ and __host__ can be used together
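A brief sketch of each qualifier in use (all function names here are invented):

__device__ float Square(float x) { return x * x; }        // device code, device-callable

__host__ __device__ float PlusOne(float x) { return x + 1.0f; } // compiled for both

__global__ void KernelFunc(float *v) {                    // kernel: must return void
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    v[i] = Square(PlusOne(v[i]));    // __device__ helpers called from device code
}
// Host code launches the kernel: KernelFunc<<<blocks, threads>>>(d_v);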

Page 30:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

CUDA Function Declarations (cont.)

• __device__ functions cannot have their address taken
• For functions executed on the device:
  – No recursion
  – No static variable declarations inside the function
  – No variable number of arguments

Page 31:

Sample Problem: Matrix Multiply

• In this section, we take a simple problem from standard sequential computation and adapt it for optimal execution on a CUDA device.

• Let A, B, and C be N-by-N square matrices, with each index in the range [0, N-1].

• The original code uses a triple loop, so its time complexity is O(N³).

• Note the use of the variable SUM to avoid multiple references to C[I][J].

Page 32:

The Sequential Code

For I = 0 to (N – 1) Do
    For J = 0 to (N – 1) Do
        SUM = 0 ;
        For K = 0 to (N – 1) Do
            SUM = SUM + A[I][K] * B[K][J] ;
        End For
        C[I][J] = SUM ;
    End For
End For

Page 33:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign.

Memory Layout of a Matrix in C

[Figure: a 4×4 matrix M stored in row-major order. The rows M0,0 M1,0 M2,0 M3,0 / M0,1 M1,1 M2,1 M3,1 / M0,2 M1,2 M2,2 M3,2 / M0,3 M1,3 M2,3 M3,3 are laid out end-to-end as the linear sequence M0,0 M1,0 M2,0 M3,0 M0,1 M1,1 M2,1 M3,1 M0,2 M1,2 M2,2 M3,2 M0,3 M1,3 M2,3 M3,3.]

Page 34:

1D Representation of a 2D Array

• Assume a 2D array A[N][N] laid out in row-major order, as above.

• The array can be accessed either as a 2D array or as a 1D array.

• The element A[I][J] is referenced in one dimension as A[I*N + J].

• This transformation is exactly what a modern compiler does in handling the array access.
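A quick C check of the identity for one element (the choice N = 4 is arbitrary):

#include <assert.h>
#define N 4

int main(void) {
    int A[N][N];
    int *A1 = &A[0][0];               /* the same storage viewed as 1D */
    for (int I = 0; I < N; ++I)
        for (int J = 0; J < N; ++J)
            A[I][J] = 10 * I + J;
    assert(A1[2 * N + 3] == A[2][3]); /* element (2,3) lives at offset 2*N + 3 */
    return 0;
}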

Page 35:

Multiplication with 1D Arrays

For I = 0 to (N – 1) Do
    For J = 0 to (N – 1) Do
        SUM = 0 ;
        For K = 0 to (N – 1) Do
            SUM = SUM + A[I*N + K] * B[K*N + J] ;
        End For
        C[I*N + J] = SUM ;
    End For
End For
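The loop transcribes directly into compilable C (the function name is invented; the matrix and index names follow the pseudocode):

void MatrixMul1D(const float *A, const float *B, float *C, int n) {
    for (int I = 0; I < n; ++I)
        for (int J = 0; J < n; ++J) {
            float SUM = 0.0f;
            for (int K = 0; K < n; ++K)
                SUM += A[I * n + K] * B[K * n + J];
            C[I * n + J] = SUM;       /* C[I][J] in the 2D view */
        }
}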

Page 36:

Efficiency in Computing the Index

• Consider the statement SUM = SUM + A[I*N + K] * B[K*N + J].

• This involves two multiplications to generate the indices into the arrays A and B.

• In general, we want to avoid multiplication when there is a simpler approach that is obvious and easy to understand.

• We now evolve the more efficient algorithm.

Page 37:

Modifying the Index Calculation

This modification affects only the inner loop of the example code. The original code is:

For K = 0 to (N – 1) Do
    SUM = SUM + A[I*N + K] * B[K*N + J] ;
End For

We now modify that code as follows:

For K = 0 to (N – 1) Do
    L = I*N + K ;
    M = K*N + J ;
    SUM = SUM + A[L] * B[M] ;
End For

Page 38:

Sequence of the Indices

• Here we watch L and M as K is incremented.

For K = 0 to (N – 1) Do
    L = I*N + K ;
    M = K*N + J ;
    SUM = SUM + A[L] * B[M] ;
End For

For K = 0:   L = I*N       M = J
For K = 1:   L = I*N + 1   M = J + N
For K = 2:   L = I*N + 2   M = J + 2*N
For K = 3:   L = I*N + 3   M = J + 3*N

Page 39:

The Optimized Sequential Code

For I = 0 to (N – 1) Do
    For J = 0 to (N – 1) Do
        SUM = 0 ;
        L = I*N ;
        M = J ;
        For K = 0 to (N – 1) Do
            SUM = SUM + A[L] * B[M] ;
            L = L + 1 ;
            M = M + N ;
        End For
        C[I*N + J] = SUM ;
    End For
End For
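In C, the strength-reduced loop looks like this (the function name is invented; the behavior matches the pseudocode above):

void MatrixMulOpt(const float *A, const float *B, float *C, int n) {
    for (int I = 0; I < n; ++I)
        for (int J = 0; J < n; ++J) {
            float SUM = 0.0f;
            int L = I * n;            /* index into A; advances by 1 */
            int M = J;                /* index into B; advances by n */
            for (int K = 0; K < n; ++K) {
                SUM += A[L] * B[M];
                L = L + 1;
                M = M + n;
            }
            C[I * n + J] = SUM;
        }
}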

Page 40:

A Square Array of Processors

• Processor P[I][J] handles array element C[I][J]:

SUM = 0 ;
L = I*N ;
M = J ;
INJ = L + M ;    // This is I*N + J.
For K = 0 to (N – 1) Do
    SUM = SUM + A[L] * B[M] ;
    L = L + 1 ;
    M = M + N ;
End For
C[INJ] = SUM ;   // This is C[I][J].

Page 41:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2010. ECE 498AL, University of Illinois, Urbana-Champaign. Figure courtesy: NVIDIA.

Block IDs and Thread IDs

• Each thread uses IDs to decide what data to work on
  – Block ID: 1D or 2D
  – Thread ID: 1D, 2D, or 3D
• Simplifies memory addressing when processing multidimensional data
  – Image processing
  – Solving PDEs on volumes
  – …

[Figure 3.2: An example of CUDA thread organization. The host launches Kernel 1 on Grid 1 and Kernel 2 on Grid 2 of the device; Grid 1 contains Blocks (0,0), (1,0), (0,1), and (1,1). An expanded Block (1,1) shows threads with 3D IDs (0,0,0) through (3,1,0), plus (0,0,1), (1,0,1), (2,0,1), (3,0,1).]

Page 42:

© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009. ECE498AL, University of Illinois, Urbana-Champaign.

Revised Matrix Multiplication Kernel Using Multiple Blocks

__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // Calculate the row index of the Pd element (and of Md)
    int Row = blockIdx.y * TILE_WIDTH + threadIdx.y;
    // Calculate the column index of Pd (and of Nd)
    int Col = blockIdx.x * TILE_WIDTH + threadIdx.x;

    // Each thread computes one element of the block sub-matrix
    float Pvalue = 0;
    for (int k = 0; k < Width; ++k)
        Pvalue += Md[Row * Width + k] * Nd[k * Width + Col];

    Pd[Row * Width + Col] = Pvalue;
}
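A sketch of the host-side launch for this kernel: one thread per Pd element, arranged in TILE_WIDTH × TILE_WIDTH blocks. It assumes Width is a multiple of TILE_WIDTH and follows the allocation and transfer calls from the earlier slides; the wrapper name is invented and error checks are omitted.

#define TILE_WIDTH 16

void MatrixMulOnDevice(const float *M, const float *N, float *P, int Width) {
    int size = Width * Width * sizeof(float);
    float *Md, *Nd, *Pd;
    cudaMalloc((void**)&Md, size);
    cudaMalloc((void**)&Nd, size);
    cudaMalloc((void**)&Pd, size);
    cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
    cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);

    dim3 dimGrid(Width / TILE_WIDTH, Width / TILE_WIDTH);  // blocks per grid
    dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);                 // threads per block
    MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

    cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);
    cudaFree(Md); cudaFree(Nd); cudaFree(Pd);
}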