High Performance Embedded Computing © 2007 Elsevier Lecture 14: Real Time Concepts Embedded Computing Systems Mikko Lipasti Based on slides and textbook from Wayne Wolf
Page 1

High Performance Embedded Computing © 2007 Elsevier
Lecture 14: Real Time Concepts
Embedded Computing Systems, Mikko Lipasti
Based on slides and textbook from Wayne Wolf

Page 2

© 2006 Elsevier

Topics

Ch. 4 in textbook:
- Real-time scheduling.
- Scheduling for power/energy.
- Operating system mechanisms and overhead.

Page 3

Real-time scheduling terminology

- Process: unique execution of a program.
- Context switch: operating system switch from one process to another.
- Time quantum: time between OS interrupts.
- Schedule: sequence of process executions or context switches.
- Thread: process that shares address space with other threads.
- Task: a collection of processes.
- Subtask: one process in a task.

Page 4

Real-time scheduling algorithms

- Static scheduling algorithms determine the schedule off-line.
  - Constructive algorithms don't have a complete schedule until the end of the algorithm.
  - Iterative improvement algorithms build a schedule, then modify it.
- Dynamic scheduling algorithms build the schedule during system operation.
- Priority schedulers assign priorities to processes. Priorities may be static or dynamic.

Page 5

Timing requirements

- Real-time systems have timing requirements.
  - Hard: missing a deadline causes system failure.
  - Soft: missing a deadline does not cause failure.
- Deadline: time at which the computation must finish.
- Release time: first time that the computation may start.
- Period (T): interval between deadlines.
- Relative deadline: release time to deadline.

Page 6

Timing behavior

- Initiation time: time when the process actually starts executing.
- Completion time: time when the process finishes.
- Response time = completion time - release time.
- Execution time (C): amount of time required to run the process on the CPU.

Page 7

Utilization

- Total execution time C required to execute processes 1..n is the sum of the Ci of the individual processes.
- Given available time t, utilization U = C/t.
- Generally expressed as a percentage.
- The CPU can't deliver more than 100% utilization.
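The definition above can be computed directly; the process times and time window below are made-up numbers for illustration:

```python
def utilization(exec_times, available_time):
    """U = C / t, where C is the sum of the execution times of processes 1..n."""
    total_c = sum(exec_times)
    return total_c / available_time

# Hypothetical example: three processes needing 2, 3, and 4 ms in a 12 ms window.
u = utilization([2, 3, 4], 12)
print(f"U = {u:.2f} = {u:.0%}")  # U = 0.75 = 75%
```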

Page 8

Static scheduling algorithms

- Often take advantage of data dependencies.
- Resource dependencies come from the implementation.
- As-soon-as-possible (ASAP): schedule each process as soon as data dependencies allow.
- As-late-as-possible (ALAP): schedule each process as late as data dependencies and deadlines allow.
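A minimal sketch of ASAP and ALAP start times on a small dependency graph. The task graph, execution times, and deadline are invented, and the dictionary's insertion order is assumed to be a valid topological order:

```python
# Hypothetical task graph: name -> (execution time, list of predecessors).
tasks = {
    "A": (2, []),
    "B": (3, ["A"]),
    "C": (1, ["A"]),
    "D": (2, ["B", "C"]),
}

def asap(tasks):
    """Earliest start: each process starts as soon as all predecessors finish."""
    start = {}
    for name in tasks:  # insertion order assumed topological
        c, preds = tasks[name]
        start[name] = max((start[p] + tasks[p][0] for p in preds), default=0)
    return start

def alap(tasks, deadline):
    """Latest start: each process starts as late as successors and the deadline allow."""
    succs = {n: [m for m in tasks if n in tasks[m][1]] for n in tasks}
    start = {}
    for name in reversed(list(tasks)):  # reverse topological order
        c, _ = tasks[name]
        finish = min((start[s] for s in succs[name]), default=deadline)
        start[name] = finish - c
    return start

print(asap(tasks))     # {'A': 0, 'B': 2, 'C': 2, 'D': 5}
print(alap(tasks, 8))  # D starts at 6, B at 3, C at 5, A at 1
```

Comparing the two schedules per task gives its slack: here A has one unit of slack (ASAP start 0, ALAP start 1), while B and D have none.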

Page 9

List scheduling

A common form of constructive scheduler.
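One way to sketch a constructive list scheduler: keep a priority-ordered ready list and fill one time slot at a time. The graph, the priorities, and the single execution unit below are all hypothetical:

```python
import heapq

def list_schedule(tasks, priority, n_units):
    """tasks: name -> list of predecessor names (a DAG).
    Each task occupies one time slot; at most n_units tasks run per slot.
    Returns name -> assigned slot."""
    remaining = {n: set(ps) for n, ps in tasks.items()}
    slot_of = {}
    ready = [(-priority[n], n) for n, ps in remaining.items() if not ps]
    heapq.heapify(ready)
    in_ready = {n for _, n in ready}
    t = 0
    while len(slot_of) < len(tasks):
        done_now = set()
        for _ in range(min(n_units, len(ready))):
            _, n = heapq.heappop(ready)  # highest priority first
            slot_of[n] = t
            done_now.add(n)
        for n in tasks:  # tasks whose predecessors just finished become ready
            if n not in slot_of and n not in in_ready:
                remaining[n] -= done_now
                if not remaining[n]:
                    heapq.heappush(ready, (-priority[n], n))
                    in_ready.add(n)
        t += 1
    return slot_of

tasks = {"A": [], "B": [], "C": ["A"], "D": ["A", "B"]}
prio = {"A": 3, "B": 2, "C": 1, "D": 2}
print(list_schedule(tasks, prio, n_units=1))
```

The schedule is built constructively, one slot at a time, and is not complete until the loop ends, which is exactly the property the slide names.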

Page 10

Priority-driven scheduling

- Each process has a priority.
- Processes may be ready or waiting.
- The highest-priority ready process runs in the current quantum.
- Priorities may be static or dynamic.

Page 11

Rate-monotonic scheduling

- Liu and Layland: proved properties of static-priority scheduling.
- Assumptions:
  - No data dependencies between processes.
  - Process periods may have arbitrary relationships.
  - Ideal (zero) context-switching time.
  - Release time of a process is the start of its period.
  - Process execution time is fixed.

Page 12

Critical instant

- The critical instant is the release time that produces a process's worst-case response time; under the Liu and Layland assumptions, it occurs when a process is released simultaneously with all higher-priority processes.

Page 13

Critical instant analysis

- Process 1 has the shorter period T1; process 2 has the longer period T2.
- If process 2 has the higher priority, then both deadlines can be met only if the schedulability condition C1 + C2 <= T1 holds.
- Utilization is U = C1/T1 + C2/T2; for n processes under rate-monotonic priorities the least upper bound is U = n(2^(1/n) - 1).
- As n grows large, utilization approaches ln 2, about 69%.
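The utilization bound is easy to evaluate in code; the task set below is made up:

```python
def rm_schedulable(tasks):
    """Liu-Layland sufficient test: U <= n(2^(1/n) - 1).
    tasks: list of (C, T) pairs."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

# Hypothetical task set: (execution time, period).
u, bound, ok = rm_schedulable([(1, 4), (2, 6), (1, 8)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {ok}")
```

Note the test is sufficient but not necessary: a task set that exceeds the bound may still be schedulable, which exact response-time analysis would reveal.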

Page 14

Earliest-deadline-first (EDF) scheduling

- Liu and Layland: dynamic-priority algorithm.
- The process closest to its deadline has the highest priority.
- Relative deadline D.
- Process set (with deadlines equal to periods) must satisfy U = C1/T1 + ... + Cn/Tn <= 1.
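A toy discrete-time simulator illustrates the EDF rule. The task parameters are invented, and each task's deadline is assumed equal to its period:

```python
def edf_simulate(tasks, horizon):
    """tasks: list of (C, T) with deadline = period.
    Returns True if no deadline is missed up to `horizon` time units."""
    jobs = []  # each job is [remaining work, absolute deadline]
    for t in range(horizon):
        for c, T in tasks:
            if t % T == 0:
                jobs.append([c, t + T])        # release a new job
        if any(t >= d and rem > 0 for rem, d in jobs):
            return False                       # a deadline was missed
        jobs = [j for j in jobs if j[0] > 0]   # drop completed jobs
        if jobs:
            jobs.sort(key=lambda j: j[1])      # earliest deadline first
            jobs[0][0] -= 1                    # run it for one time unit
    return True

print(edf_simulate([(1, 4), (2, 6), (1, 8)], horizon=24))  # True: U < 1
print(edf_simulate([(3, 4), (3, 6)], horizon=24))          # False: U = 1.25 > 1
```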

Page 15

Least-laxity-first (LLF) scheduling

- Laxity or slack: difference between remaining computation time and time until the deadline.
- The process with the smallest laxity has the highest priority.
- Unlike EDF, takes computation time into account in addition to the deadline.
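The difference from EDF shows up when two jobs are ready at once; the numbers below are contrived so that the two policies pick differently:

```python
def laxity(job, now):
    """Laxity = (time until deadline) - (remaining computation time)."""
    remaining, deadline = job
    return (deadline - now) - remaining

# Two hypothetical ready jobs at time 0: (remaining C, absolute deadline).
jobs = {"P1": (1, 5), "P2": (4, 6)}

edf_pick = min(jobs, key=lambda j: jobs[j][1])          # earliest deadline
llf_pick = min(jobs, key=lambda j: laxity(jobs[j], 0))  # smallest laxity
print(edf_pick, llf_pick)  # P1 P2
```

EDF prefers P1 (deadline 5 before 6), but P2 has laxity 2 versus P1's laxity 4, so LLF runs P2 first: its large remaining work leaves it closer to missing its deadline.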

Page 16

Priority inversion

- RMS and EDF assume no dependencies or outside resources.
- When processes use external resources, scheduling must take those resources into account.
- Priority inversion: external resources can make a low-priority process continue to execute as if it had higher priority.

Page 17

Priority inversion example

- Classic pattern: a low-priority process holds a resource needed by a high-priority process, while a medium-priority process preempts the low-priority one, indirectly delaying the high-priority process.

Page 18

Priority inheritance protocols

- Sha et al.: basic priority inheritance protocol, priority ceiling protocol.
- Basic inheritance: a process in a critical section executes at the highest priority of any process that shares that critical section. Can deadlock.
- Priority ceiling protocol: each semaphore has its own priority ceiling; the priority required to obtain a semaphore depends on the priorities of other locked semaphores.
- Schedulability (with Bi the worst-case blocking time of process i): C1/T1 + ... + Ci/Ti + Bi/Ti <= i(2^(1/i) - 1) for every process i.
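The blocking-aware test can be sketched as below, using the per-process form of the Sha et al. bound with Bi as the worst-case blocking time; the task values are invented:

```python
def pcp_schedulable(tasks):
    """Sufficient test with blocking: for tasks sorted by period, each process i
    must satisfy sum_{k<=i} C_k/T_k + B_i/T_i <= i(2^(1/i) - 1).
    tasks: list of (C, T, B), B = worst-case blocking time."""
    tasks = sorted(tasks, key=lambda x: x[1])  # rate-monotonic order
    for i in range(1, len(tasks) + 1):
        u = sum(c / t for c, t, _ in tasks[:i])
        b = tasks[i - 1][2] / tasks[i - 1][1]
        if u + b > i * (2 ** (1 / i) - 1):
            return False
    return True

print(pcp_schedulable([(1, 4, 1), (2, 8, 0)]))  # True: blocking fits the bound
print(pcp_schedulable([(2, 4, 1), (2, 6, 0)]))  # False: second process fails
```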

Page 19

Scheduling for dynamic voltage scaling

- Dynamic voltage scaling (DVS): change the processor voltage to save power.
- Power consumption goes down as V^2; performance goes down as V.
- Must make sure that each process still meets its deadline.
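Taking the slide's scalings at face value (power ~ V^2, speed ~ V), a quick calculation shows the trade-off: halving the voltage quarters power, doubles run time, and halves energy. Units and numbers are normalized and purely illustrative:

```python
def dvs_energy(cycles, v, v_nom=1.0, f_nom=1.0):
    """Energy and run time for a task of `cycles` at voltage v (normalized)."""
    f = f_nom * (v / v_nom)        # performance scales as V
    power = (v / v_nom) ** 2       # power scales as V^2
    time = cycles / f
    return power * time, time

print(dvs_energy(100, 1.0))  # (100.0, 100.0)
print(dvs_energy(100, 0.5))  # (50.0, 200.0): half the energy, twice the time
```

The longer run time is exactly why the deadline check matters: the voltage may only be lowered to the point where the process still finishes by its deadline.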

Page 20

Yao et al. DVS for real-time

- The intensity of an interval defines a lower bound on the average speed required to create a feasible schedule.
- The interval that maximizes the intensity is the critical interval.
- The speed of the optimal schedule equals the intensity of the critical interval.
- Average rate heuristic: run at the sum of the densities Ci/(di - ai) of the currently active jobs.
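A brute-force sketch of finding the critical interval: the intensity of [a, b] is the work of the jobs whose [arrival, deadline] windows lie inside it, divided by the interval length. The job set is invented:

```python
def intensity(jobs, a, b):
    """jobs: list of (arrival, deadline, C).
    Counts jobs whose whole [arrival, deadline] window fits in [a, b]."""
    work = sum(c for arr, d, c in jobs if a <= arr and d <= b)
    return work / (b - a)

jobs = [(0, 4, 2), (1, 3, 1.5), (2, 10, 3)]

# Only interval endpoints at job arrivals/deadlines need to be checked.
points = sorted({p for arr, d, _ in jobs for p in (arr, d)})
crit = max(((a, b) for a in points for b in points if b > a),
           key=lambda ab: intensity(jobs, *ab))
print(crit, intensity(jobs, *crit))  # (0, 4) 0.875
```

Here the critical interval is [0, 4]: the first two jobs must complete inside it, so no feasible schedule can average less than 0.875 of full speed there.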

Page 21

DVS with discrete voltages

Ishihara and Yasuura: when only a finite set of discrete voltage levels is available, two voltage levels are sufficient to minimize energy.
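The flavor of the result can be seen in the arithmetic: to finish exactly at the deadline, split the work between the level just below the ideal speed and the level just above. The speeds and cycle counts below are made up:

```python
def two_level_split(cycles, deadline, s_lo, s_hi):
    """Time spent at each of two discrete speeds so that
    t_lo + t_hi = deadline and s_lo*t_lo + s_hi*t_hi = cycles."""
    t_hi = (cycles - s_lo * deadline) / (s_hi - s_lo)
    t_lo = deadline - t_hi
    return t_lo, t_hi

# Ideal speed would be 150/100 = 1.5, between the available levels 1.0 and 2.0.
t_lo, t_hi = two_level_split(cycles=150, deadline=100, s_lo=1.0, s_hi=2.0)
print(t_lo, t_hi)  # 50.0 50.0
```

Rounding up to the single higher level instead would finish early and waste energy; the two-level split uses exactly the available time.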

Page 22

Procrastination scheduling

- Family of algorithms that maximizes the lengths of idle periods.
- The CPU can be turned off during idle periods, further reducing energy consumption.
- Jejurikar et al.: power consumption P = P_AC + P_DC + P_on.
- Minimum breakeven time t_th = E_sd / P_idle.
- Guarantees deadlines if:
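The breakeven rule translates directly into code: shutting down only pays off when the idle period exceeds t_th = E_sd / P_idle, so that the shutdown/wakeup energy is amortized. The energy and power figures are invented:

```python
def breakeven_time(e_shutdown, p_idle):
    """t_th = E_sd / P_idle: minimum idle length for shutdown to save energy."""
    return e_shutdown / p_idle

def worth_shutting_down(idle_len, e_shutdown, p_idle):
    return idle_len > breakeven_time(e_shutdown, p_idle)

t_th = breakeven_time(e_shutdown=30.0, p_idle=2.0)
print(t_th, worth_shutting_down(40.0, 30.0, 2.0))  # 15.0 True
```

Procrastination scheduling delays ready work (as far as deadlines allow) precisely to merge short idle gaps into periods longer than t_th.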

Page 23

Performance estimation

- Multiple processes interfere in the cache.
- Single-process performance evaluation cannot take into account the effects of a dynamic schedule.
- Kirk and Strosnider: segment the cache; allow processes to lock themselves into a segment.
- Mueller: use software methods to partition the cache.

Page 24

Cache modeling and scheduling

- Li and Wolf: each process has a stable footprint in the cache.
- Two-state model:
  - Process is in the cache.
  - Process is not in the cache.
- Characterize execution time in each state off-line.
- Use CPU time measurements along with cache state to estimate process performance at each quantum.
- Kastner and Thiesing: scheduling algorithm takes cache state into account.

Page 25

General-purpose vs. real-time OS

- Schedulers have very different goals in real-time and general-purpose operating systems:
  - A real-time scheduler must meet deadlines.
  - A general-purpose scheduler tries to distribute time equally among processes.
- Early real-time operating systems:
  - Hunter/Ready OS for microcontrollers was developed in the early 1980s.
  - Mach ran on VAX, etc., and provided real-time characteristics on large platforms.

Page 26

Memory management

- Memory management allows the RTOS to run outside applications.
- Cell phones run downloaded, user-installed programs.
- Memory management helps the RTOS manage a large virtual address space.
- Flash may be used as a paging device.

Page 27

Windows CE memory management

- Flat 32-bit address space.
- Top 2 GB for the kernel; statically mapped.
- Bottom 2 GB for user processes.

Page 28

WinCE user memory space

- 64 slots of 32 MB each.
- Slot 0 is the currently running process.
- Slots 1-33 are the processes; 32 processes max.
- Remaining slots hold the object store, memory-mapped files, and resource mappings.

Slot layout:
- Slot 0: current process
- Slot 1: DLLs
- Slot 2: process
- Slot 3: process
- ...
- Slots 33-62: object store, memory-mapped files
- Slot 63: resource mappings

Page 29

Mechanisms for real-time operation

- Two key mechanisms for real time: the interrupt handler and the scheduler.
- The interrupt handler is part of the priority system. It also introduces overhead.
- The scheduler determines the ability to meet deadlines.

Page 30

Interrupt handling in RTOSs

- Interrupts have priorities set in hardware; these supersede process priorities.
- We want to spend as little time as possible at hardware priority to avoid interfering with the scheduler.
- Two layers of processing:
  - The interrupt service routine (ISR) is dispatched by hardware.
  - The interrupt service thread (IST) is a process.
- Spend as little time as possible in the ISR (hardware priorities); do most of the work in the IST (scheduler priorities).
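The two-layer split can be mimicked in ordinary code: a deliberately tiny "ISR" that only records the event, and an IST that does the real processing at normal scheduling priority. This is a Python threading stand-in, not real interrupt handling, and all names and device IDs are made up:

```python
import queue
import threading

events = queue.Queue()

def isr(device_id):
    """Stand-in ISR: keep it minimal -- just record what happened."""
    events.put(device_id)

def ist(results):
    """Stand-in interrupt service thread: an ordinary schedulable thread
    that does the heavy processing outside 'hardware priority'."""
    while True:
        dev = events.get()
        if dev is None:
            break                        # shutdown sentinel
        results.append(f"handled device {dev}")

results = []
t = threading.Thread(target=ist, args=(results,))
t.start()
for dev in (3, 7):
    isr(dev)                             # pretend hardware dispatched these
events.put(None)
t.join()
print(results)  # ['handled device 3', 'handled device 7']
```

The design point carries over directly: the less work done in `isr`, the less time the system spends masking other interrupts and bypassing the scheduler.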

Page 31

Windows CE interrupts

Two types of ISRs:
- Static ISRs are built into the kernel; one-way communication to the IST.
- Installable ISRs can be dynamically loaded; they use shared memory to communicate with the IST.

Page 32

Static ISR

- Built into the kernel.
- On SHx and MIPS, must be written in assembler; limited register availability.
- One-way communication from ISR to IST. Can share a buffer, but its location must be predefined.
- Nested ISR support depends on the CPU and the OEM's initialization.
- Stack is provided by the kernel.

Page 33

Installable ISR

- Can be dynamically loaded into the kernel; loads a C DLL.
- Can use shared memory for communication.
- ISRs are processed in the order they were installed.
- Limited stack size.

Page 34

WinCE 4.x interrupts

[Figure: WinCE 4.x interrupt flow. A device interrupt is dispatched through the kernel/OAL to the ISR; the ISR sets an event for the IST and interrupts are re-enabled (all except the current interrupt ID until IST processing completes, then all); the IST then performs the device processing.]

Page 35

Interprocess communication

IPC often used for large-scale communication in general-purpose systems.

Mailboxes are specialized memories, used for small, fast transfers.

Multimedia systems can be supported by quality-of-service (QoS) oriented interprocess communication services.

Page 36

Power management

Advanced Configuration and Power Interface (ACPI) standard defines power management states:
- G3: mechanical off.
- G2: soft off.
- G1: sleeping.
- G0: working.
- Legacy state.

Page 37

Summary

Ch. 4 in textbook:
- Real-time scheduling.
- Scheduling for power/energy.
- Operating system mechanisms and overhead.