Real Time Operating System Murtadha Al-Sabbagh
Outline
1. Definition
2. Characteristics
3. Components
   A. Scheduler
   B. Objects
      • Tasks
      • Semaphores
      • Message Queues
   C. Services
      • Interrupt Management
Overview
A real-time operating system (RTOS) is a program that
1. schedules execution in a timely manner,
2. manages system resources, and
3. provides a consistent foundation for developing application code.
Application code designed on an RTOS can be quite diverse, ranging, for example, from a digital stopwatch to a much more complex aircraft navigation application.
Definition
Need for an OS in Real-Time Systems

Standalone applications:
1. Often no OS is involved.
2. Microcontroller-based embedded systems.

Some real-time applications are huge and complex:
1. Multiple threads
2. Complicated synchronization requirements
3. File system / network support
4. OS primitives reduce software design time
Reliability: depending on the application, the system might need to operate for long periods without human intervention.
Predictability: the completion of operating system calls occurs within known timeframes.
Performance: the embedded system must perform fast enough to fulfill its timing requirements.
Compactness: application design constraints and cost constraints help determine how compact an embedded system can be.
Scalability: the RTOS must be able to scale up or down to meet application-specific requirements.
Key Characteristics of an RTOS
Most RTOS kernels contain the following components:
Scheduler – contained within each kernel, it follows a set of algorithms that determine which task executes when.
Objects – special kernel constructs that help developers create applications for real-time embedded systems.
Services – operations that the kernel performs on an object, or general operations such as timing, interrupt handling, and resource management.
RTOS Components
The Scheduler
The scheduler is at the heart of every kernel. A scheduler provides the algorithms needed to determine which task executes when.
Important scheduler topics:
1. Context switching
2. Dispatcher
3. Scheduling algorithms
The Scheduler
The Context Switch
TCB
Context Switching Operation
TCBs (task control blocks) are system data structures that the kernel uses to maintain task-specific information.
The kernel saves task 1’s context information in its TCB.
It loads task 2’s context information from its TCB, which becomes the current thread of execution.
The context of task 1 is frozen while task 2 executes, but if the scheduler needs to run task 1 again, task 1 continues from where it left off just before the context switch.
The Context Switch
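The save/load steps above can be sketched in C. This is a minimal illustration, assuming a toy CPU with only two registers (a program counter and a stack pointer); the struct fields and function names are illustrative, not from any specific RTOS.

```c
#include <assert.h>

/* Task control block (TCB): holds task-specific information,
 * including the task's saved register context. */
typedef struct {
    int task_id;
    int pc;        /* saved program counter   */
    int sp;        /* saved stack pointer     */
    int priority;  /* other task information  */
} tcb_t;

/* Simulated CPU registers of the toy processor. */
static int cpu_pc, cpu_sp;

/* Save the outgoing task's context into its TCB, then load the
 * incoming task's context from its TCB, making it the current
 * thread of execution. */
void context_switch(tcb_t *out, tcb_t *in)
{
    out->pc = cpu_pc;   /* freeze the outgoing task's context      */
    out->sp = cpu_sp;
    cpu_pc = in->pc;    /* the incoming task resumes where it left off */
    cpu_sp = in->sp;
}
```

On a real processor the saved context includes all general-purpose registers and status flags, and the save/restore is done in assembly; the principle is the same.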
The dispatcher is the part of the scheduler that performs context switching and changes the flow of execution. At any time an RTOS is running, the flow of execution, also known as flow of control, is passing through one of three areas: through an application task, through an ISR, or through the kernel. When a task or ISR makes a system call, the flow of control passes to the kernel to execute one of the system routines provided by the kernel.
The Dispatcher
Preemptive priority-based scheduling: the task that gets to run at any point is the highest-priority task among all tasks ready to run in the system.
Round-robin scheduling: pure round-robin scheduling cannot satisfy real-time system requirements because tasks perform work of varying degrees of importance. Instead, preemptive priority-based scheduling can be augmented with round-robin scheduling, which time-slices among ready tasks of equal priority.
Scheduling Algorithms
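The selection rule for preemptive priority-based scheduling can be sketched as a simple loop over a task table. This is a sketch, not any particular kernel's implementation; the `task_t` fields and `pick_next` name are illustrative, and a real kernel would also time-slice among equal-priority ready tasks.

```c
#include <assert.h>

typedef struct {
    int id;
    int priority;  /* larger number = higher priority */
    int ready;     /* nonzero if the task is ready to run */
} task_t;

/* Return the index of the highest-priority ready task,
 * or -1 if no task is ready. */
int pick_next(const task_t *tasks, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = i;
    }
    return best;
}
```

Preemption means this decision is re-evaluated whenever a task becomes ready, so a newly ready high-priority task immediately displaces a lower-priority one.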
Objects
We will discuss the following objects: tasks, semaphores, message queues, and pipes.
Objects
A task or a process is an independent thread of execution that can compete with other concurrent tasks for processor execution time.
*Tasks
Task States
Ready state – the task is ready to run but cannot, because a higher-priority task is executing.
Waiting state – the task has requested a resource that is not available, has requested to wait until some event occurs, or has delayed itself for some duration.
Running state – the task is the highest-priority task and is running.
Task States
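The three states and their legal transitions form a small state machine, sketched below. The enum and function names are illustrative; real kernels often add further states (e.g. suspended), but the core triangle is this one.

```c
#include <assert.h>

typedef enum { STATE_READY, STATE_RUNNING, STATE_WAITING } task_state_t;

/* Return nonzero if the transition is legal in the basic model:
 * a task is dispatched from ready, and can only block while running;
 * a waiting task becomes ready (not running) when its resource arrives. */
int transition_ok(task_state_t from, task_state_t to)
{
    switch (from) {
    case STATE_READY:   return to == STATE_RUNNING;  /* dispatched by scheduler */
    case STATE_RUNNING: return to == STATE_READY     /* preempted               */
                            || to == STATE_WAITING;  /* blocked on a resource   */
    case STATE_WAITING: return to == STATE_READY;    /* resource became available */
    }
    return 0;
}
```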
A semaphore is a kernel object that one or more threads of execution can acquire or release for the purposes of synchronization or mutual exclusion.
*Semaphores
Semaphore types:
1. Binary
2. Counting
3. Mutex
Semaphore Types
A binary semaphore can have a value of either 0 or 1. When a binary semaphore’s value is 0, the semaphore is considered unavailable (or empty); when the value is 1, the binary semaphore is considered available (or full). Note that when a binary semaphore is first created, it can be initialized to either available or unavailable (1 or 0, respectively).
Binary Semaphores
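A binary semaphore's behavior can be sketched in a few lines of C. This is an illustration, not a real kernel API: `acquire` here returns 0 instead of blocking, where a real kernel would move the calling task to the semaphore's task-waiting list.

```c
#include <assert.h>

typedef struct { int value; } bin_sem_t;  /* value is 0 or 1 */

/* Create the semaphore as available (1) or unavailable (0). */
void bin_sem_init(bin_sem_t *s, int available) { s->value = available ? 1 : 0; }

/* Take the token if available; otherwise the caller would block. */
int bin_sem_acquire(bin_sem_t *s)
{
    if (s->value == 1) { s->value = 0; return 1; }
    return 0;  /* unavailable: a real kernel would block the task here */
}

/* Give the token back, making the semaphore available again. */
void bin_sem_release(bin_sem_t *s) { s->value = 1; }
```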
A counting semaphore uses a count to allow it to be acquired or released multiple times. When creating a counting semaphore, assign the semaphore a count that denotes the number of semaphore tokens it has initially.
Counting Semaphores
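A counting semaphore generalizes the binary case to a pool of tokens. The sketch below uses the same illustrative non-blocking convention as the binary example (names are not from any specific RTOS).

```c
#include <assert.h>

typedef struct { int count; } cnt_sem_t;

/* Create the semaphore with its initial number of tokens. */
void cnt_sem_init(cnt_sem_t *s, int tokens) { s->count = tokens; }

/* Take one token if any remain; otherwise the caller would block. */
int cnt_sem_acquire(cnt_sem_t *s)
{
    if (s->count > 0) { s->count--; return 1; }
    return 0;  /* no tokens: a real kernel would block the task here */
}

/* Return one token to the pool. */
void cnt_sem_release(cnt_sem_t *s) { s->count++; }
```

A binary semaphore is simply the special case where the count is capped at one.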
A mutual exclusion (mutex) semaphore is a special binary semaphore that supports ownership, recursive access, task deletion safety, and one or more protocols for avoiding problems inherent to mutual exclusion.
Mutex
Ownership of a mutex is gained when a task first locks the mutex by acquiring it. Conversely, a task loses ownership of the mutex when it unlocks it by releasing it. When a task owns the mutex, it is not possible for any other task to lock or unlock that mutex. Contrast this concept with the binary semaphore, which can be released by any task, even a task that did not originally acquire the semaphore.
This makes it useful for providing mutual exclusion.
Mutex Ownership
Some mutex implementations also have built-in task deletion safety. Premature task deletion is avoided by using task deletion locks when a task locks and unlocks a mutex. Enabling this capability within a mutex ensures that while a task owns the mutex, the task cannot be deleted. Typically protection from premature deletion is enabled by setting the appropriate initialization options when creating the mutex.
Task Deletion Safety
Recursive locking allows the task that owns the mutex to acquire it multiple times while it is in the locked state. Depending on the implementation, recursion within a mutex can be automatically built into the mutex, or it might need to be enabled explicitly when the mutex is first created.
This type of mutex is most useful when a task requiring exclusive access to a shared resource calls one or more routines that also require access to the same resource. A recursive mutex allows nested attempts to lock the mutex to succeed, rather than cause deadlock.
Recursive Locking
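Ownership and recursive locking can be sketched together: only the owner may unlock, and repeated locks by the owner just increase a nesting count. This is an illustrative model, not a real kernel's mutex (which would also block contending tasks and may add priority-inheritance protocols).

```c
#include <assert.h>

typedef struct {
    int owner;       /* task id of the owner; 0 means unowned */
    int lock_count;  /* recursion depth; 0 means unlocked     */
} rmutex_t;

/* Lock: succeed if free or if the caller already owns the mutex. */
int rmutex_lock(rmutex_t *m, int task_id)
{
    if (m->lock_count == 0) { m->owner = task_id; m->lock_count = 1; return 1; }
    if (m->owner == task_id) { m->lock_count++; return 1; }  /* recursive lock */
    return 0;  /* owned by another task: the caller would block */
}

/* Unlock: only the owner may release; ownership ends at depth zero. */
int rmutex_unlock(rmutex_t *m, int task_id)
{
    if (m->lock_count == 0 || m->owner != task_id) return 0;  /* not the owner */
    if (--m->lock_count == 0) m->owner = 0;
    return 1;
}
```

Note how this differs from the binary semaphore above: a task that is not the owner can neither lock nor unlock the mutex.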
Binary Semaphore Operation
Mutex Operation
A message queue is a buffer-like object through which tasks and ISRs send and receive messages to communicate and synchronize with data.
It temporarily holds messages from a sender until the intended receiver is ready to read them.
*Message Queues
Message Queues
A queue control block (QCB) consists of a name, a unique ID, memory buffers, a queue length, a maximum message length, and one or more task-waiting lists.
Queue Control Block
The message queue itself consists of a number of elements, each of which can hold a single message. The elements holding the first and last messages are called the head and tail, respectively. Some elements of the queue may be empty (not containing a message). The total number of elements (empty or not) in the queue is the total length of the queue. The developer specifies the queue length when the queue is created.
Message Queues
Message queues can hold, for example:
• a temperature value from a sensor
• a text message to print to an LCD
• a keyboard event
• a data packet to send over the network
Message Queue Usage
Message Queue States
When a message queue is first created, the FSM is in the empty state. If a task attempts to receive messages from this message queue while the queue is empty, the task blocks and, if it chooses to, is held on the message queue's task-waiting list.
In this scenario, if another task sends a message to the message queue, the message is delivered directly to the blocked task. The blocked task is then removed from the task-waiting list and moved to either the ready or the running state. The message queue in this case remains empty because it has successfully delivered the message.
Message Queue States
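The head/tail structure described above is a ring buffer, sketched below for `int` messages. The names and the fixed `QUEUE_LEN` are illustrative; as in the earlier sketches, the functions return 0 where a real kernel would block the task on the queue's task-waiting list.

```c
#include <assert.h>

#define QUEUE_LEN 4  /* queue length, chosen at creation time */

typedef struct {
    int buf[QUEUE_LEN];
    int head;   /* element holding the oldest message  */
    int tail;   /* next free element                   */
    int count;  /* number of messages currently queued */
} msgq_t;

void msgq_init(msgq_t *q) { q->head = q->tail = q->count = 0; }

/* Append a message at the tail; fails when the queue is full. */
int msgq_send(msgq_t *q, int msg)
{
    if (q->count == QUEUE_LEN) return 0;  /* full: sender would block */
    q->buf[q->tail] = msg;
    q->tail = (q->tail + 1) % QUEUE_LEN;
    q->count++;
    return 1;
}

/* Remove the message at the head; fails when the queue is empty. */
int msgq_receive(msgq_t *q, int *msg)
{
    if (q->count == 0) return 0;  /* empty: receiver would block */
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % QUEUE_LEN;
    q->count--;
    return 1;
}
```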
Pipes are kernel objects that provide unstructured data exchange and facilitate synchronization among tasks. In a traditional implementation, a pipe is a unidirectional data exchange facility.
*Pipes
Pipe Control Block
Pipe States
Pipe Control Block And States
Unlike a message queue, a pipe does not store multiple messages. Instead, the data it stores is not structured, but consists of a stream of bytes. Also, the data in a pipe cannot be prioritized; the data flow is strictly first-in, first-out (FIFO).
Pipes also cannot broadcast: a read consumes the stored data, so the same data cannot be delivered to multiple receivers.
Pipes support the powerful select operation, and message queues do not.
Select on a pipe: the select operation allows a task to block and wait for a specified condition to occur on one or more pipes.
Pipes And message queues comparison
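The select operation can be illustrated with the POSIX `pipe()` and `select()` calls, which are an analogue of the RTOS pipe described above (this is a POSIX sketch, not a specific RTOS API). The helper below blocks until the pipe becomes readable or a timeout expires.

```c
#include <assert.h>
#include <sys/select.h>
#include <unistd.h>

/* Wait until fd is readable or timeout_ms elapses.
 * Returns 1 if data is available, 0 on timeout. */
int wait_readable(int fd, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv) > 0;
}
```

With several pipes, the task would add each descriptor to the `fd_set` and wake when any one of them has data, which is exactly the capability message queues in this model lack.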
Services
Along with objects, most kernels provide services that help developers create applications for real-time embedded systems. These services comprise sets of API calls that can be used to perform operations on kernel objects or can be used in general to facilitate timer management, interrupt handling, device I/O, and memory management. Again, other services might be provided; these services are those most commonly found in RTOS kernels.
Services
An RTOS provides services such as:
• Interrupt handling
• Device I/O
• Memory management
• Other services:
  1. TCP/IP protocol stack
  2. File system component
  3. Remote procedure call component
Services in RTOS
An interrupt (exception) is any event that disrupts the normal execution of the processor and forces it into executing special instructions in a privileged state.
Interrupts raised by internal events are called synchronous interrupts.
Interrupts raised by external events are called asynchronous interrupts. In general, these external events are associated with hardware signals.
*Interrupts
Most embedded designs have more than one source of external interrupts, and these multiple interrupt sources are prioritized, so a programmable interrupt controller (PIC) is used to manage them.
The PIC is implementation-dependent. It can appear in a variety of forms and is sometimes given different names; however, all serve the same purpose and provide two main functions:
1. Prioritizing multiple interrupt sources so that, at any time, the highest-priority interrupt is presented to the core CPU for processing.
2. Offloading from the core CPU the work of determining an interrupt's exact source.
Programmable Interrupt Controllers
Source            | Priority | Vector Address | IRQ | Max Freq. | Description
Airbag Sensor     | Highest  | 14h            | 8   | N/A       | Deploys airbag
Brake Sensor      | High     | 18h            | 7   | N/A       | Engages the braking system
Fuel Level Sensor | Med      | 1Bh            | 6   | 20 Hz     | Detects the level of gasoline
Real-Time Clock   | Low      | 1Dh            | 5   | 100 Hz    | Clock runs at 10 ms ticks
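The PIC's first job, prioritizing pending interrupt sources, can be sketched as a scan over a pending-request bitmask. The bitmask layout and priority table here are illustrative (they echo the IRQ numbers in the example above), not a description of any particular controller.

```c
#include <assert.h>

#define NUM_IRQS 16

/* Given a bitmask where bit i set means IRQ i is pending, and a
 * per-IRQ priority table (larger number = higher priority), return
 * the pending IRQ the PIC should present to the CPU, or -1 if none. */
int highest_pending(unsigned pending, const int prio[NUM_IRQS])
{
    int best = -1;
    for (int irq = 0; irq < NUM_IRQS; irq++) {
        if ((pending >> irq) & 1u) {
            if (best < 0 || prio[irq] > prio[best])
                best = irq;
        }
    }
    return best;
}
```

A hardware PIC does this with a priority encoder rather than a loop, and also supplies the CPU with the winning interrupt's vector address.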
The highest interrupt priority level is usually reserved for system resets, other significant events, or errors that warrant resetting the overall system.
The next two priority levels reflect a set of errors and special execution conditions internal to the processor. A synchronous interrupt is generated and acknowledged only at certain states of the internal processor cycle. The sources of these errors are rooted in either the instructions or the data being processed.
Typically, the lowest priority levels are asynchronous interrupts external to the core processor. External interrupts (except NMIs) are the only exceptions that can be disabled by software.
General Interrupt Priorities
From an application point of view, all exceptions have processing priority over operating system objects, including tasks, queues, and semaphores. This reflects a general priority framework observed in most embedded computing architectures.
The embedded systems programmer, when designing and implementing an ISR, should be aware of the interrupt frequency of each device that can assert an interrupt.
An ISR executing with interrupts disabled can cause the system to miss interrupts if the ISR takes too long.
Interrupt Timing
Thanks~_^