Page 1: RTOS

Point 1: Deliverables of RTOS

Deepak Malik

You can use Real-Time Operating Systems (RTOS) in time- and mission-critical systems, such as missile and satellite launch systems. Before selecting an RTOS for your application, you must address critical factors such as scheduling, resource access control, and communication between components and subsystems of real-time systems.

This Reference Point describes the features of RTOS. It also explains resource availability, CPU utilization, virtual memory management, security, interprocess and real-time communication, and task management.

Understanding Resource Availability

An operating system has resources that more than one process may use simultaneously, which is why an operating system requires proper resource management. You must implement policies for the mutually exclusive allocation of resources to processes.

A resource conflict arises when more than one process requires the same resource. Other processes that require the resource contend with the process that is already using the resource. The operating system has a scheduler that limits the number of processes that share a particular resource.

A process that requires a resource for a specific time interval requests it from the operating system. If the system accepts the request, the process tries to lock the resource for the required time interval by using a lock request. If the lock request fails, the system blocks the process and removes it from the ready process queue. The process remains blocked until the scheduler grants it the resource for the requested time interval. When the scheduler grants the resource, the process is unblocked and moves back to the ready process queue.

The scheduler uses an access control protocol to:

Schedule the required resources.

Decide what conditions should be met before it grants access to a resource.

Determine the time intervals to grant access to a resource.

For example, three processes, P1, P2, and P3, have priority decreasing from P1 to P3. At time 0, P3 executes. The system allocates the resource to P3 because no other request for it is present. Then P1 starts and sends a request for the resource that is already allocated to P3. P1 must wait although it has a higher priority. This situation is called priority inversion. While P3 holds the resource, process P2 is activated; its priority is higher than that of P3 but lower than that of P1. Because P2 does not require the resource, it preempts P3 and further lengthens the priority inversion period.


Figure 1-1-1 shows priority inversion:

Figure 1-1-1: Priority Inversion

Algorithms for Resource Allocation

The operating system must control priority inversion. To solve resource conflicts, an RTOS provides various algorithms, such as:

Non pre-emptive Critical Section Protocol (NCSP)

Priority Inheritance Protocol (PIP)

Basic Priority Ceiling Protocol (BPCP)

Ceiling Priority Protocol (CPP)

NCSP

According to NCSP, a process can be suspended or blocked at most once. Once the scheduler allocates a resource to a process, the resource is never pre-empted for any other process. While a process holds a resource, it executes at a priority higher than that of all other processes that are waiting for the same resource.

The duration for which a process occupies a resource is called the critical time of the resource. The maximum time for which a system can block a process is equal to the sum of the critical times of all lower-priority processes. For example, consider three processes, P1, P2, and P3, where P1 has the highest priority and P3 the lowest. The P3 process executes first and the requisite resource is allocated to it. After some time, process P1 becomes ready and requests the resource that is allocated to P3. According to this algorithm, P1, which has a higher priority, must wait for the resource until P3 releases it.

Figure 1-1-2 shows the application of the NCSP algorithm:

Figure 1-1-2: Application of NCSP

NCSP delays the execution of a higher-priority process, which must wait for a resource that is already allocated to a lower-priority process.

PIP

PIP does not require prior knowledge of the resource requirements of processes. The priority at which a process executes is called its current priority, which can differ from its assigned priority. When the priority assigned to one process is temporarily given to another, it is called the inherited priority of the inheriting process.

PIP solves resource-allocation problems according to the following rules:

Priority inheritance: When the scheduler blocks a process that is sending a request for a resource, the process that currently holds the resource inherits the priority of the blocked process. The process that holds the resource is executed at the inherited priority until it releases the resource.

Scheduling: The scheduler arranges processes according to current priorities.

Allocation: Whenever a process requests a free resource, the operating system allocates the resource to the requesting process for the duration of its use. If the requested resource is not free, the operating system denies the request and blocks the process.


For example, there are three processes, P1, P2, and P3. Suppose that P1 has the highest priority and P3 has the lowest priority. The system starts executing P3 first and a resource is allocated to P3. After some time, P1 becomes ready and it requests the resource that is allocated to P3. According to PIP, P3, which has a lower priority, inherits the priority of P1. P2 cannot pre-empt P3 because the priority of P2 is lower than the inherited priority of process P3.

Figure 1-1-3 shows the application of PIP:

Figure 1-1-3: Application of PIP

This protocol ensures that the duration of priority inversion is less than the length of the critical section of the inheriting process.

BPCP

BPCP is an enhancement over PIP in terms of the blocking of higher-priority processes. This protocol assumes that the resource requirements of a process are known in advance, before the system executes the process.

BPCP solves the resource-allocation problems according to the following rules:

Scheduling: Assigns priorities to processes at the time of their release. Resources are allocated to processes based on priority level.

Allocation: Rejects the request and blocks the process if the resource is already allocated to another process. If the resource is free and the current priority ceiling of the system is lower than the priority of the requesting process, the resource is allocated to the process; otherwise, the request is denied and the process is blocked.

According to the previous rules, at any instant T, if the priority of a process P is higher than the current ceiling, P does not require any resource that is in use at that instant. In addition, processes with priorities equal to or higher than that of P do not require the resource. These rules avoid deadlock, because process P does not request any resource that is in use, and no blocked process can inherit a priority higher than that of P.

CPP

CPP, also called the Stack Based Priority Ceiling Protocol (SBPCP), executes a process at its assigned priority when the process does not access any resource. While a process holds a resource, it executes at that resource's ceiling priority, which is the highest priority among the processes that use the resource.

CPP solves resource-allocation problems according to the following rules:

Scheduling: Prevents a process from executing unless the priority assigned to it is higher than the current ceiling of the system. Unblocked processes are scheduled according to assigned priorities.

Allocation: Allots the resource to a process whenever requested.

Updating the current ceiling: Revises the ceiling whenever a resource is allocated or freed.

According to the previous rules, once execution starts, the chance of blocking a process decreases and deadlocks never occur.

Memory Management in RTOS

An OS requires various memory-management modules to meet the requirements of all processes that are running.

Table 1-1-1 describes the various functions that you use to access memory in a real-time system:

Table 1-1-1: Memory Access Functions

Function Description

Getbuf() Allocates the memory buffer from the memory pool and returns the pointer to the allocated buffer. If no memory buffers are available, it blocks the task that made the request.


Reqbuf() Allocates the memory buffer from the memory pool and returns the pointer to the allocated buffer. If no memory buffers are available, the function returns the NULL pointer.

Relbuf() Frees the memory buffer that is allocated to the process.

When you load a process into memory, the system divides the code and data parts of the process into various memory segments. Each memory segment stores the corresponding data.

The various segments into which memory space is divided are:

Command line arguments: Store the command line arguments, if any, that are passed to the program.

Stack: Stores the context when the running process branches to another process.

Shared memory: Various programs in a process can access the required content. Shared memory is updated accordingly.

Heap: Stores data that has greater lifetime than local variables.

Uninitialized data: Stores uninitialized data variables that the process requires.

Initialized data: Stores initialized data variables in the process.

Figure 1-1-4 shows the general memory model of a process with various memory segments:


Figure 1-1-4: Memory Model of a Process

Partitioned Memory Management

Partitioned memory is a memory-allocation technique that implements multiprogramming in a real-time system. Memory is divided into various memory partitions. Whenever a process requires memory to execute, the OS checks for the available memory space. The system partitions the available memory space into smaller memory fragments and allocates them to processes. This memory-management technique is implemented in the IBM SYSTEM/360 OS and the IBM SYSTEM/370 OS.

To allocate memory:

1. The RTOS receives the request.

2. The RTOS checks for available memory space.

3. If no free memory space is available, the RTOS generates the corresponding handler to hold the process until other processes complete and free up the memory.

4. If free memory space is available, the RTOS compares the amount of required memory space with the amount of free available memory space.

5. If the required memory space is less than the available memory space, the RTOS allocates the required space and updates the amount of free memory space.

6. If the required memory space is equal to the available memory space, the RTOS allocates the space and marks the free memory space as empty, because no memory is then available for any other process.

7. If the required memory space is more than the available memory space, the RTOS searches for another free memory area by repeating the previous steps.

When the process execution stops, the process returns the memory to the RTOS, which is called deallocation of memory.

To deallocate memory:

1. The RTOS receives the request.

2. The RTOS marks the status of the allocated memory partition as empty.

3. The RTOS checks for free areas that are adjacent to the deallocated partition of memory.

4. If the RTOS does not find any free area, it marks the deallocated area as free.

5. If the RTOS finds a free area that is adjacent to the deallocated area, it merges the two and marks the final area as free.

Figure 1-1-5 shows the memory allocation and memory deallocation algorithms in partitioned memory management:

Figure 1-1-5: Partitioned Memory Management

The Onion Skin Swapping Algorithm

The Onion Skin Swapping Algorithm (OSSA) is a memory-management method used by the Compatible Time Sharing System (CTSS). The OSSA allocates the required amount of memory to a process. It initially allocates memory according to the program size, but it can allocate additional memory to the process during execution. At any time, the process can use memory up to a maximum limit. Whenever another process starts execution, programs and data that belong to the current process must be swapped out. Swapping out makes memory available to the new process from the memory that is already allocated to the current process. To reduce swapping overhead, only enough memory to load the new process is swapped out.

Figure 1-1-6 shows how to implement OSSA:

Figure 1-1-6: OSSA in Action

This figure contains five memory blocks, shown as rectangles 1, 2, 3, 4, and 5. Rectangle 1 contains Process A. Rectangle 2 shows the entry of Process B, which swaps all of Process A out of the memory space. When Process C enters the memory in rectangle 3, it swaps out only a part of Process B's memory. The entry of Process D in rectangle 4 swaps out Process C completely, along with part of Process B. Rectangle 5 shows the memory status when all the other processes complete their execution and only Process B remains in the memory space.

Demand Paged Memory Management

An RTOS such as the Multiplexed Information and Computing Service (MULTICS) uses the Demand Paged Memory Management (DPMM) technique to support the page-segmented algorithm. In DPMM, a paging drum holds the pages in the user address space, outside the main memory. User files are held on moving-arm disks. MULTICS uses a storage hierarchy that combines high-cost, high-performance and low-cost, low-performance devices to achieve maximum performance.

Figure 1-1-7 shows the storage hierarchy that the MULTICS RTOS implements:


Figure 1-1-7: The Storage Hierarchy

To decide the memory hierarchy for DPMM, an OS implements the following rules:

1. The system moves the page to the memory block from the drum or the disk when a page fault occurs. A page fault is a condition in which a requested page is not available in the main memory.

2. If the memory block is already full, a memory page is moved from the memory block to the drum. If the drum already contains the copy of a page, it is overwritten.

3. If the drum is also full, the system moves the page from the drum to disks to create free space for the page in the memory block.

The Least Recently Used (LRU) algorithm moves pages from the memory block and the drum. The algorithm removes a memory page based on the time when any process last accessed the page.

Scheduling in RTOS

Scheduling is the process of allocating and deallocating resources and memory to processes. A scheduler allocates resources to tasks and processes. You must have proper scheduling to maximize resource and memory usage without access conflicts.

Figure 1-1-8 shows the scheduler for an RTOS:


Figure 1-1-8: RTOS Scheduler

Real-Time Scheduling

Various algorithms, such as time-driven and round-robin scheduling, implement real-time scheduling.

Time-Driven Scheduling

The time-driven scheduling algorithm depends on time. The algorithm decides the allocation of resources to processes before the execution of the first process starts. Resources are allocated to a process for a predetermined time interval and then to other processes after the time limit expires.

The scheduler implements a hardware timer that allocates and deallocates the resources at periodic time intervals. You must feed the timing schedule that the RTOS decides into the hardware timer.

Round Robin Scheduling

You can use the round-robin scheduling algorithm for time-sharing applications that the RTOS queues in a First In First Out (FIFO) queue. Each application is allocated a time slot. If a process does not complete within its time slot, it is placed at the end of the queue to wait for its next turn. If the queue contains n processes, each gets a 1/n share of the processor time.

Note Another scheduling technique is weighted round-robin scheduling, which assigns a weight to each process. The weight assigned to a process determines the duration of processor time that is allocated to the process. The length of the round, or the sum of the time slices, is the sum of the weights of all the ready processes.

Scheduling Periodic Tasks

A periodic task is the set of processes that has the same parameters, such as execution time and the period of processes. You can schedule periodic tasks through priority-driven algorithms. This scheduling approach allocates priorities to processes. The major advantage is that resources are never left idle. Whenever a resource becomes free, processes with the highest priority are selected and the requisite resource is allocated to them. This approach is also known as greedy scheduling because the algorithm makes optimal decisions locally.

Scheduling Aperiodic Tasks

An aperiodic taskis the set of processes that has different sets of parameters every time you execute them. Algorithms for scheduling these tasks try to complete these processes as early as possible to avoid missing a deadline of periodic processes. The algorithm that schedules aperiodic tasks is called the bandwidth-preserving algorithm. According to this algorithm, aperiodic processes are executed on more than one server. The servers emulate many periodic tasks and are easy to implement.

Task Assignment Algorithms

A task is a set of related processes. Task assignment is the process of selecting processors to execute every task. If the processes are further subdivided into submodules, you must select a processor for every subtask. Task assignment problems occur for various reasons, such as communication costs, the placement of resources, and the cost of resource access.

For allocation of tasks, you can apply various algorithms, such as the:

Rate Monotonic First Fit (RMFF)

Rate Monotonic Small Task (RMST)

Rate Monotonic General Task (RMGT)

The RMFF Algorithm

The RMFF algorithm sorts tasks in a nondecreasing order of periods. Tasks are assigned to the processor so that the total utilization of the processor through the tasks is less than or equal to the schedulable utilization.

The ratio of the number of processors that the RMFF algorithm requires to the minimum number of processors needed for the tasks ranges from 2.0 to 2.23.


The RMST Algorithm

You can apply the RMST algorithm to assign tasks to processors rate monotonically. Tasks are first sorted in nondecreasing order of their Xi parameters.

The Xi parameter for a task is calculated as:

Xi = log2(pi) - floor(log2(pi))

Here, pi is the period of task i.

You can assign tasks to the processor by using first fit allocation. The schedulable utilization of the RMST algorithm allows maximum utilization of tasks.

You can calculate it as:

URMST = (m - 2)(1 - umax) + 1 - ln 2

Here:

URMST is the schedulable utilization of the RMST algorithm.

umax is the maximum utilization among the periodic tasks in the system.

m is the number of processors in the system.

The RMGT Algorithm

The RMGT algorithm divides all periodic tasks into two subsets according to processor utilization. It creates one subset with tasks whose utilization is less than or equal to 1/3. The RMST algorithm first assigns this subset of tasks to processors. Tasks with utilization greater than 1/3 are then assigned on a first-fit basis to processors that already have some tasks assigned to them.

The maximum utilization of tasks in case of the RMGT algorithm is calculated as:

URMGT = 0.5(m - (5/2)ln 2 + 1/3) = 0.5(m - 1.4)

Here, URMGT is the schedulable utilization of the RMGT algorithm and m is the number of processors in the system.

Understanding Inter-Process Communication in RTOS


In a multitasking environment, several processes cooperate to execute an application. This cooperation requires communication between independent processes. The Inter-Process Communication (IPC) mechanisms provide this communication. An RTOS provides many IPC mechanisms, such as semaphores, message queues, and pipes.

Note The IPC is also called Inter-Task Communication (ITC).

Shared Memory

Shared memory is the memory region that many processes access for fast addition and retrieval of data. A process accesses shared memory in the same manner as it accesses normal memory.

When you create shared memory, you must generate a unique key to identify it. Any process can access shared memory by referring to its key.

Table 1-1-2 describes the various functions that access shared memory:

Table 1-1-2: Shared Memory Access Functions

Function Description

Shmget() Creates a shared memory segment for a given key and returns the segment identifier.

Shmat() Returns a pointer for accessing the shared memory segment that is associated with an identifier.

Shmdt() Detaches the process from a shared memory segment.

Shmctl() Provides the control features of shared memory.

Shared memory is useful in applications where the volume of data to be transferred is high. For effective use of shared memory, you must implement the locking feature to maintain data integrity.

To implement the locking feature on shared memory, the process:

1. Executes the code that is written outside the shared memory.

2. Makes a request for a lock for shared memory.

3. Accesses shared memory.

4. Releases the lock for the shared memory.


5. Continues its execution of the code that is written outside the shared memory.

Figure 1-1-9 shows the IPC mechanism through shared memory:

Figure 1-1-9: IPC Using Shared Memory

Semaphores

A semaphore is a form of IPC, which ensures that only one process can access shared memory or shared resources. It provides the locking operation on a memory segment or resource. Almost all RTOSs use semaphores to keep a process waiting for a memory segment or resource, and to signal the process when the memory segment or resource is free. A semaphore also provides synchronization between processes.

Table 1-1-3 describes the various functions that enable processes to access semaphores:

Table 1-1-3: Semaphore Access Functions

Function Description

Semget() Creates the semaphore and returns its ID.

Semop() Performs various operations, such as lock and unlock.

Semctl() Provides the control features of a semaphore.

To implement semaphores:

1. Initialize semaphore structures.

2. Create and initialize the semaphore through functions.


3. Attach the semaphore to shared memory.

4. Perform the operations on shared memory.

5. Destroy the semaphore.

6. Destroy the shared memory segment.

Semaphores are implemented differently to perform various functions. The various implementations are:

Mutex semaphore or mutex: Handles the priority inversion problem automatically.

Resource semaphore or resource: Performs data sharing. If more than one process uses the data, resource semaphores maintain data integrity.

Counting semaphore: Counts the processes that are using the resource. Processes can use counting semaphores multiple times. When the semaphore value reaches zero (0), further requests for the resource are blocked.

Message Queues

An RTOS implements and controls message queues for IPC. After creating a message queue, a process can read a message from and place a message in the queue. Message queues are slower than shared memory.

Table 1-1-4 describes the various functions that enable processes to access message queues:

Table 1-1-4: Message Queue Access Functions

Function Description

Msgget() Creates a new message queue, or opens an existing one, and returns its identifier.

Msgctl() Performs various operations on the message queue.

Msgsnd() Adds messages to the message queue.

Msgrcv() Retrieves messages from the message queue.

Pipes

Pipes are a half-duplex FIFO method of IPC. The flow of information in a pipe is unidirectional. Several processes may write to and read from a pipe. You can use pipes between related processes that share a common ancestor, but not for data transfer between independent processes.

Table 1-1-5 describes various functions that processes use to access pipes:

Table 1-1-5: Pipe Access Functions

Function Description

Pipe() Creates a communication pipe and returns two identifiers.

Popen() Runs a command and opens a pipe to its standard input or output, returning a stream.

Pclose() Closes a stream that Popen() opened and waits for the command to finish.

A process can use normal file functions, such as fread and fwrite, to read from and write to a pipe.

Figure 1-1-10 shows IPC through a pipe:

Figure 1-1-10: IPC through a Pipe

FIFO

A FIFO is also called a named pipe. A FIFO provides an additional advantage, because you can use it for IPC between unrelated processes. The communicating processes may or may not have a common ancestor.

Understanding Security in RTOS

Often you must connect your real-time system to a network. For example, your cellular phone is a real-time system that is connected to a Global System for Mobile (GSM) network. A connected real-time system receives instructions and control parameters from the network. In this situation, an intruder can introduce improper instructions and inappropriate parameters to invade system security.


Figure 1-1-11 shows a scenario where an RTOS is attacked by creating a duplicate handle:

Figure 1-1-11: RTOS Intrusion

Protecting Memory

You must protect memory to prevent the flow of information from one process to another, unauthorized process. An RTOS follows various techniques to protect its memory resources. According to one such technique, each process in the system is allocated its own memory heap, and further allocation of memory is stopped. This method prevents memory exhaustion by other processes.

Another technique to save the memory contents is the use of the Store_Encrypted and Read_Encrypted commands along with an encryption key that is unique to a process. This technique is useful while dealing with sensitive data. Processes that communicate with memory use this technique to prevent an attack from an outside program.

You can also protect memory by partitioning memory into various segments by using the Memory Management Unit (MMU) hardware. This hardware ensures that each memory area is used only by its corresponding process.

Another technique to protect memory is by allocating a fixed size of memory to each process. This technique is called hard currency. If a bug attempts to enter a particular address space, the memory budget of the address space, which is the amount of memory that is allocated to the process, is exhausted and the bug can request no further RTOS services.

The attack on memory is shown in Figure 1-1-12:

Figure 1-1-12: Memory Protection

Device Security

The MMU facilitates the proper functioning of a device that uses an RTOS. The MMU is an integral part of most 32- and 64-bit processors. The MMU associates certain pages in virtual memory with a program or a task. A task can access only the physical memory that is mapped to its virtual address space. The MMU prevents a process from using the address space that is allocated to another process.

The MMU also decides the debugging time for a process. More critical processes get more debugging time.

The MMU ensures system security because it puts different programs into different address spaces, isolating each. The MMU provides a hardware mechanism that can establish multiple address spaces and detect a program's attempts to read, write, or execute outside of its assigned address space.

A process uses memory to store various instructions and data. Unavailability of memory may cause the device to malfunction. The device may not be able to store the messages that it requires to complete an operation.

The time limit that is assigned to each process for usage of resources prevents it from consuming the resources beyond the assigned limit. In a multiple address space model, each task can access only its own address space.


The device protection mechanism is shown in Figure 1-1-13:

Figure 1-1-13: Device Protection

Point 2: Introducing the VxWorks Real Time Operating System

Abhiram Mishra

VxWorks, developed by Wind River Systems, is a Real Time Operating System (RTOS) used to develop real-time applications for embedded systems.

VxWorks is available for Windows and UNIX platforms. It provides a Graphical User Interface (GUI) for application development and uses the features of the host operating system, such as Windows and UNIX, to perform non-critical tasks. It provides high performance for critical tasks, such as scheduling, memory allocation, interrupt handling, and timer implementation.

This Reference Point discusses the VxWorks RTOS. It also discusses the features of VxWorks used in scheduling, intertask communication, and networking.

The VxWorks Architecture

The VxWorks RTOS runs on a client-server architecture. The main components of the VxWorks architecture are:

Microkernel: Used to limit the kernel size. It is necessary to limit the kernel size because of the memory constraints in embedded systems.

Note The microkernel is also known as wind.

VxWorks libraries: Provide the functions necessary to use the kernel services.

Modules: Enable VxWorks to accommodate simple and complex embedded systems. Modules, also known as Board Support Packages (BSPs), are available as device drivers. They can be added to the architecture when required.

Figure 1-2-1 depicts the VxWorks architecture:


Figure 1-2-1: VxWorks Architecture

In the VxWorks architecture, a BSP connects a device with VxWorks and provides a set of Application Programming Interface (API) functions to communicate with a specific device. The wind microkernel communicates with hardware devices when requested by the application.

The wind microkernel is optimized for fast context switches between tasks. It provides many intertask mechanisms to support communication between the independent tasks of an application. The wind microkernel shares its memory space with application tasks.

Although the wind microkernel does not itself support virtual memory, you can enable support by using an optional component known as VxVMI. This kind of memory protection prevents a faulty application task from overwriting data in the system address space and crashing the system.

Scheduling in VxWorks

Each independent executing program in a real-time application is known as a task. All tasks in VxWorks share the same memory space but have different threads of control.

Task States

A task passes through many states from the start to the end of its execution. These states determine whether the task is currently active, is waiting for resources, or has terminated.

The kernel maintains the state of each executing task. VxWorks defines four task states. Table 1-2-1 describes the task states in VxWorks:

Table 1-2-1: Task States in VxWorks

Task State Description

READY Implies that the task is ready to run but does not currently have the processor resource.

PEND Implies that the task is blocked and is waiting for a resource.

DELAY Implies that the task has made a call to sleep for some time.

SUSPEND Implies that the task is not available for execution.

The scheduler manages the sequence of tasks according to a scheduling algorithm. The scheduler switches the tasks and moves them between states.

A task can call library functions to suspend its execution, resume its execution, and sleep for a specified duration. This can lead to a task state transition, which may occur because of events such as interrupts, unavailability of resources, or calls to VxWorks library functions. Table 1-2-2 describes the state transitions and the events that cause these transitions:

Table 1-2-2: State Transitions

Old State New State Event Description

READY PEND semTake, msgQReceive Occurs either when a semaphore is not available or when a message queue is empty.

READY DELAY taskDelay Occurs when a ready task calls the taskDelay function.

READY SUSPEND taskSuspend Occurs when taskSuspend is called on a ready task.

PEND READY semGive, msgQSend Occurs either when the semaphore required by the task is released or when a message is sent by another task to an empty message queue.

PEND SUSPEND taskSuspend Occurs when taskSuspend is called on a pended task.

DELAY READY Expired delay Occurs when the delay specified in the taskDelay function expires.

DELAY SUSPEND taskSuspend Occurs when taskSuspend is called on a delayed task.

SUSPEND READY taskResume, taskActivate Occurs when taskResume or taskActivate is called on a suspended task and the task is ready to run.

SUSPEND PEND taskResume Occurs when taskResume is called on a suspended task that requires a resource unavailable at that time.

SUSPEND DELAY taskDelay Occurs when taskDelay is applied to a suspended task.
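The transitions in Table 1-2-2 form a small state machine. The following is a minimal sketch in portable C of a check against that table; the enum and function names are illustrative assumptions, not VxWorks APIs:

```c
#include <stdbool.h>

/* Hypothetical enum mirroring the four wind task states in Table 1-2-1.
 * The real kernel represents task state internally; this is only a model. */
typedef enum { STATE_READY, STATE_PEND, STATE_DELAY, STATE_SUSPEND } TaskState;

/* Returns true if the transition appears in Table 1-2-2. */
bool transition_allowed(TaskState from, TaskState to)
{
    switch (from) {
    case STATE_READY:
        return to == STATE_PEND || to == STATE_DELAY || to == STATE_SUSPEND;
    case STATE_PEND:
        return to == STATE_READY || to == STATE_SUSPEND;
    case STATE_DELAY:
        return to == STATE_READY || to == STATE_SUSPEND;
    case STATE_SUSPEND:
        return to == STATE_READY || to == STATE_PEND || to == STATE_DELAY;
    }
    return false;
}
```

Note that every state can reach SUSPEND or be reached from it, while PEND and DELAY never transition directly into each other.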

When a task is created using the taskInit function, it first enters the SUSPEND state. The task is activated using the taskActivate function. The taskSpawn function creates a task using taskInit and immediately calls taskActivate to set the task state to READY.
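The two-step creation sequence can be modelled in portable C. This is a sketch of the behaviour described above, not the real VxWorks API; all type, field, and function names here are assumptions:

```c
#include <string.h>

typedef enum { T_READY, T_PEND, T_DELAY, T_SUSPEND } TState;

typedef struct {
    char   name[16];
    int    priority;   /* 0 (highest) .. 255 (lowest) */
    TState state;
} Task;

/* Models taskInit: the new task starts in the SUSPEND state. */
void task_init(Task *t, const char *name, int priority)
{
    strncpy(t->name, name, sizeof t->name - 1);
    t->name[sizeof t->name - 1] = '\0';
    t->priority = priority;
    t->state = T_SUSPEND;
}

/* Models taskActivate: moves the task to READY. */
void task_activate(Task *t)
{
    t->state = T_READY;
}

/* Models taskSpawn: taskInit followed immediately by taskActivate. */
void task_spawn(Task *t, const char *name, int priority)
{
    task_init(t, name, priority);
    task_activate(t);
}
```

The split into taskInit and taskActivate lets an application set up a task (for example, adjust its options) before it becomes eligible to run, while taskSpawn is the common one-call convenience.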

Context Switch

The wind microkernel maintains information about a task in a Task Control Block (TCB). Each task has its own TCB. The information stored in the TCB is known as the context of a task. When the scheduler decides to switch the currently executing task, a context switch occurs. When a context switch occurs, the task context is saved in the TCB. As a result, when the task is scheduled to run again, it begins execution from the same point where it was switched out.

The TCB stores the following information about the task context:

Program counter

Processor register values

Task stack

Task input and output stream assignments

Delay timer

Time slice timer

Kernel control structures

Signal handlers

Debugging and statistical information
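The list above can be summarized as a structure. The following is a simplified C sketch of such a control block; the real wind TCB holds far more, and these field names are assumptions for illustration only:

```c
/* Simplified sketch of a Task Control Block holding the context items
 * listed above. Field names and types are illustrative assumptions. */
typedef struct {
    void          *pc;              /* program counter at switch-out */
    unsigned long  regs[16];        /* processor register values */
    void          *stack_base;      /* task stack */
    int            in_fd, out_fd;   /* task I/O stream assignments */
    unsigned long  delay_ticks;     /* delay timer */
    unsigned long  slice_ticks;     /* time slice timer */
    void          *kernel_ctrl;     /* kernel control structures */
    void         (*sig_handler)(int); /* signal handler */
    unsigned long  switch_count;    /* debugging/statistical information */
} TcbSketch;
```

Because the entire context lives in this block, saving it on a switch-out and restoring it later is enough for the task to continue exactly where it left off.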

Scheduling Algorithms

The scheduler, which is part of the wind microkernel, switches tasks between the running state and the other states according to a scheduling algorithm. The default algorithm is pre-emptive priority scheduling. Alternatively, you can enable the round robin scheduling algorithm by calling the kernelTimeSlice library function. Both algorithms perform context switches between tasks.

Pre-emptive Priority Scheduling

The wind microkernel scheduler uses the pre-emptive priority algorithm by default. In this algorithm, each task is assigned a priority, and the scheduler selects the task with the highest priority for execution. Because the algorithm is pre-emptive, if a task with a higher priority than the currently executing task becomes ready, the currently executing task is switched out and the higher-priority task begins executing.

Each task is assigned a priority level when it is created. You can change this priority during program execution by using the taskPrioritySet library function. VxWorks defines 256 priority levels, ranging from 0 to 255; level 0 is the highest priority and level 255 is the lowest.
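Note the inverted numbering: a numerically lower value means a higher priority. A minimal sketch of the selection rule in portable C (the array standing in for the kernel's ready queue is a hypothetical simplification):

```c
/* Illustrative model of pre-emptive priority selection: among READY
 * tasks, pick the one with the numerically lowest priority value,
 * since 0 is the highest VxWorks priority. Names are assumptions. */
typedef struct {
    int priority;   /* 0 (highest) .. 255 (lowest) */
    int ready;      /* nonzero if the task is in the READY state */
} PTask;

/* Returns the index of the task to run, or -1 if no task is ready. */
int pick_highest_priority(const PTask *tasks, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best < 0 || tasks[i].priority < tasks[best].priority)
            best = i;
    }
    return best;
}
```

Running this check whenever a task changes state is what makes the scheduler pre-emptive: a newly readied task with a lower priority number immediately displaces the current one.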

Round Robin Scheduling

The round robin scheduling algorithm allocates equal processor time to a group of tasks with the same priority level. Each task is allotted a time slice, the duration for which it can use the processor before a context switch occurs. The scheduler maintains a queue of these tasks, and the task at the head of the queue is selected to run. When a task's time slice expires, the task is moved to the end of the queue and its time slice counter is reset.

The TCB for each task stores the time slice value for the task. This value indicates the time left before the task is switched out. The kernel updates this counter on each system clock tick and invokes the scheduler to perform a context switch when the time slice expires.

The round robin scheduling algorithm is also pre-emptive. If a higher-priority task arrives, the currently executing task is switched out and its remaining time slice value is saved. When the task is next scheduled, it resumes execution with the same time slice value.
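The difference between slice expiry (counter reset, task rotated to the tail) and pre-emption (counter preserved) can be sketched in portable C. This is a model of the behaviour described above, with assumed names and an assumed slice length, not VxWorks code:

```c
/* Hypothetical slice length in clock ticks; in VxWorks the length is
 * set by the argument to kernelTimeSlice. */
#define SLICE_TICKS 4

typedef struct {
    int remaining;   /* ticks left in the current time slice */
} RRTask;

/* One clock tick for the running task. Returns 1 if the slice expired,
 * meaning the task rotates to the tail of the queue with a fresh slice. */
int rr_tick(RRTask *t)
{
    if (--t->remaining <= 0) {
        t->remaining = SLICE_TICKS;  /* expiry: reset for the next turn */
        return 1;                    /* move task to end of ready queue */
    }
    return 0;                        /* keep running */
}

/* Pre-emption by a higher-priority task: the remaining slice is left
 * untouched, so the task later resumes with the same value. */
int rr_preempt(const RRTask *t)
{
    return t->remaining;
}
```

Preserving the remainder on pre-emption keeps the time sharing fair: a task that loses the CPU to a higher-priority task is not also penalized by losing the unused portion of its slice.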