

CSE 1302

Lecture 21

Exception Handling and Parallel Programming

Richard Gesick

Exception Handling

• An exception is an indication of a problem that occurs during a program's execution.
• Exception handling enables applications to resolve exceptions.
• Exception handling enables clear, robust and more fault-tolerant programs.
• Exception handling helps improve a program's fault tolerance.


Exception Handling


ArithmeticException: A base class for exceptions that occur during arithmetic operations, such as DivideByZeroException and OverflowException.

ArrayTypeMismatchException: Thrown when an array cannot store a given element because the actual type of the element is incompatible with the actual type of the array.

DivideByZeroException: Thrown when an attempt is made to divide an integral value by zero.

IndexOutOfRangeException: Thrown when an attempt is made to index an array with an index that is less than zero or outside the bounds of the array.

InvalidCastException: Thrown when an explicit conversion from a base type to an interface or to a derived type fails at runtime.

NullReferenceException: Thrown when you attempt to reference an object whose value is null.

OutOfMemoryException: Thrown when an attempt to allocate memory using the new operator fails, indicating that the memory available to the Common Language Runtime has been exhausted.

OverflowException: Thrown when an arithmetic operation in a checked context overflows.

StackOverflowException: Thrown when the execution stack is exhausted by having too many pending method calls; this usually indicates very deep or infinite recursion.

TypeInitializationException: Thrown when a static constructor throws an exception and no compatible catch clause exists to catch it.
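To make a couple of these concrete, here is a minimal sketch (not part of the original slides; the array contents and values are arbitrary) that triggers and catches an IndexOutOfRangeException and an OverflowException:

using System;

class ExceptionTypesDemo
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3 };
        try
        {
            Console.WriteLine(numbers[5]);       // index is outside the bounds of the array
        }
        catch (IndexOutOfRangeException ex)
        {
            Console.WriteLine("Caught: " + ex.Message);
        }

        try
        {
            int big = int.MaxValue;
            Console.WriteLine(checked(big + 1)); // overflows in a checked context
        }
        catch (OverflowException ex)
        {
            Console.WriteLine("Caught: " + ex.Message);
        }
    }
}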

Exception Handling

Consider the following pseudocode:

Perform a task
If the preceding task did not execute correctly
    Perform error processing

Perform next task
If the preceding task did not execute correctly
    Perform error processing

In this pseudocode, we begin by performing a task; then we test whether that task executed correctly. If not, we perform error processing.


Exception Handling

int SafeDivision(int x, int y)
{
    try
    {
        return x / y;                 // may throw DivideByZeroException when y is 0
    }
    catch (System.DivideByZeroException)
    {
        System.Console.WriteLine("Division by zero attempted!");
        return 0;                     // fall back to a default value
    }
}
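A possible call site (the values here are just for illustration) showing both paths through SafeDivision:

// Hypothetical usage:
Console.WriteLine(SafeDivision(10, 2));   // prints 5
Console.WriteLine(SafeDivision(10, 0));   // prints "Division by zero attempted!" and then 0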


Exception Handling

Exception handling enables programmers to remove error-handling code from the “main line” of the program’s execution.

Programmers can decide to handle all exceptions, all exceptions of a certain type or all exceptions of related types.

Such flexibility reduces the likelihood that errors will be overlooked.
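For example, a sketch (the method name RunCalculation is invented here) that combines all three options, relying on the fact that DivideByZeroException and OverflowException both derive from ArithmeticException:

try
{
    RunCalculation();                                         // hypothetical method that might divide or overflow
}
catch (DivideByZeroException)
{
    Console.WriteLine("Division by zero.");                   // one specific exception type
}
catch (ArithmeticException)
{
    Console.WriteLine("Some other arithmetic problem.");      // related types, e.g. OverflowException
}
catch (Exception ex)
{
    Console.WriteLine("Any other exception: " + ex.Message);  // all exception types
}

Note that the more specific catch blocks must appear before the more general ones; otherwise the compiler reports the later clauses as unreachable.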


try Block

• A try block encloses code that might throw exceptions and code that is skipped when an exception occurs.

try
{
    open file
    perform work on file
    close file
}


Catch Block

When an exception occurs in a try block, a corresponding catch block catches the exception and handles it.

A try block must be immediately followed by at least one catch block or a finally block.

A catch block specifies an exception parameter representing the exception that the catch block can handle.

Optionally, you can include a catch block that does not specify an exception type to catch all exception types.


Catch Block

catch (IO.FileNotFoundException fnfe)
{
    handle file not found (using fnfe object)
}
catch (Exception e)
{
    handle other type of exception (using e object)
    close file
}
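A runnable version of that pattern might look like the following; the file name data.txt is only a placeholder:

using System;
using System.IO;

class CatchDemo
{
    static void Main()
    {
        try
        {
            string text = File.ReadAllText("data.txt");   // may throw FileNotFoundException
            Console.WriteLine(text);
        }
        catch (FileNotFoundException fnfe)
        {
            Console.WriteLine("File not found: " + fnfe.FileName);
        }
        catch (Exception e)
        {
            Console.WriteLine("Some other problem: " + e.Message);
        }
    }
}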


Exception Handling: Termination Model

When a method called in a program or the CLR detects a problem, the method or the CLR throws an exception.

The point at which an exception occurs is called the throw point.

If an exception occurs in a try block, program control immediately transfers to the first catch block matching the type of the thrown exception.

After the exception is handled, program control resumes after the last catch block.
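A small self-contained sketch of this flow (the class and method names are invented for illustration):

using System;

class TerminationModelDemo
{
    static void Main()
    {
        try
        {
            Console.WriteLine("before the call");
            ThrowingMethod();                     // the throw point is inside this call
            Console.WriteLine("never reached");   // skipped: control leaves the try block
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine("handled: " + ex.Message);   // first matching catch block
        }
        Console.WriteLine("resumes here");        // execution resumes after the last catch block
    }

    static void ThrowingMethod()
    {
        throw new InvalidOperationException("something went wrong");
    }
}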


finally Block

Programs frequently request and release resources dynamically. Operating systems typically prevent more than one program from manipulating a file at a time.

Therefore, the program should close the file (i.e., release the resource) so other programs can use it. If the file is not closed, a resource leak occurs.

The finally block is guaranteed to execute regardless of whether an exception occurs.


finally Block

•Local variables in a try block cannot be accessed in the corresponding finally block, so variables that must be accessed in both should be declared before the try block.
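A minimal sketch of both points, assuming a file named log.txt (a placeholder name): the FileStream variable is declared before the try block so that the finally block can close it.

using System;
using System.IO;

class FinallyDemo
{
    static void Main()
    {
        FileStream file = null;                  // declared before try so finally can see it
        try
        {
            file = File.OpenRead("log.txt");     // may throw FileNotFoundException
            Console.WriteLine("File length: " + file.Length);
        }
        catch (IOException ex)
        {
            Console.WriteLine("I/O problem: " + ex.Message);
        }
        finally
        {
            if (file != null)
            {
                file.Close();                    // release the file whether or not an exception occurred
            }
            Console.WriteLine("finally block executed");
        }
    }
}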


Reminder

• Do not place try blocks around every statement that might throw an exception.
• It's better to place one try block around a significant portion of code, and follow this try block with catch blocks that handle each possible exception.
• Then follow the catch blocks with a single finally block.
• Separate try blocks should be used when it is important to distinguish between multiple statements that can throw the same exception type.


Parallel Computing

•Parallel computing is a form of computation in which many operations are carried out simultaneously.

• Many personal computers and workstations have two or four cores, which enable them to execute multiple threads simultaneously. Computers in the near future are expected to have significantly more cores.


Parallel Computing

•To take advantage of the hardware of today and tomorrow, software developers can parallelize their code to distribute work across multiple processors.

• In the past, parallelization required low-level manipulation of threads and locks.


Task Parallel Library

• The Task Parallel Library (TPL) is a set of public types and APIs in the System.Threading and System.Threading.Tasks namespaces in the .NET Framework. It relies on a task scheduler that is integrated with the .NET ThreadPool.
• The purpose of the TPL is to make developers more productive by simplifying the process of adding parallelism and concurrency to applications.


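As a minimal sketch of TPL tasks (the method name and workload are invented here, and Task.Run requires .NET Framework 4.5 or later), two partial sums are computed concurrently and then combined:

using System;
using System.Threading.Tasks;

class TaskDemo
{
    static void Main()
    {
        // Task.Run queues work to the ThreadPool; the TPL's scheduler decides how it runs.
        Task<long> first  = Task.Run(() => SumRange(1, 500000));
        Task<long> second = Task.Run(() => SumRange(500001, 1000000));

        Task.WaitAll(first, second);                       // block until both tasks finish
        Console.WriteLine(first.Result + second.Result);   // combine the partial results
    }

    static long SumRange(int from, int to)
    {
        long sum = 0;
        for (int i = from; i <= to; i++) sum += i;
        return sum;
    }
}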

Task Parallel Library

• The TPL scales the degree of concurrency dynamically to use all available processors as efficiently as possible. Parallel code that is based on the TPL not only works on dual-core and quad-core computers, but also scales automatically, without recompilation, to many-core computers.


Task Parallel Library

When you use the TPL, writing a multithreaded for loop closely resembles writing a sequential for loop.

The following code automatically partitions the work into tasks, based on the number of processors on the computer.


The Parallel for and foreach loops

Parallel.For(startIndex, endIndex, (currentIndex) => DoSomeWork(currentIndex));

// Sequential version
foreach (var item in sourceCollection)
{
    Process(item);
}

// Parallel equivalent
Parallel.ForEach(sourceCollection, item => Process(item));
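A self-contained version you could paste into a console project (the array size and the DoSomeWork body are arbitrary choices made here so that a speedup is visible):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ParallelLoopDemo
{
    // Arbitrary CPU-bound work so the parallel speedup is measurable.
    static double DoSomeWork(int i)
    {
        double sum = 0;
        for (int k = 1; k < 20000; k++)
        {
            sum += Math.Sqrt(i + k);
        }
        return sum;
    }

    static void Main()
    {
        const int n = 10000;
        double[] results = new double[n];       // each index is written by exactly one iteration

        var timer = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
        {
            results[i] = DoSomeWork(i);
        }
        Console.WriteLine("Sequential: " + timer.ElapsedMilliseconds + " ms");

        timer.Restart();
        Parallel.For(0, n, i =>
        {
            results[i] = DoSomeWork(i);         // distinct indices, so no locking is needed
        });
        Console.WriteLine("Parallel:   " + timer.ElapsedMilliseconds + " ms");
    }
}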


Code segments

Use the .NET Parallel Extensions (Task Parallel Library), then see the sample/tutorial on how to get this working in the Visual Studio IDE.

The following example uses lambda expressions, but you can treat them as "magic syntax" for now. You could also use delegate syntax if you'd like.


Code segments

You should see increased performance (reduced time to complete) relative to the number of processors/cores you have on the machine.

The final example below uses a shared "results" collection, so there is significant blocking and reduced performance. How might we solve this?
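The shared-collection example itself is not reproduced in these notes, but one common remedy (a sketch, not necessarily the answer intended in class) is to replace the lock-protected collection with a thread-safe one such as ConcurrentBag, or to give each iteration its own slot in a pre-sized array so no synchronization is needed:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class SharedResultsDemo
{
    static int Process(int item)
    {
        return item * item;                     // stand-in for real work
    }

    static void Main()
    {
        int[] source = new int[100000];
        for (int i = 0; i < source.Length; i++) source[i] = i;

        // Option 1: a thread-safe collection, so no explicit lock is required around Add.
        var bag = new ConcurrentBag<int>();
        Parallel.ForEach(source, item => bag.Add(Process(item)));
        Console.WriteLine("ConcurrentBag count: " + bag.Count);

        // Option 2: each iteration owns one array slot, so iterations never contend.
        int[] results = new int[source.Length];
        Parallel.For(0, source.Length, i => results[i] = Process(source[i]));
        Console.WriteLine("Array results computed: " + results.Length);
    }
}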
