
Debunking priority

June 28th, 2013

Before I start, a disclaimer:

For the purposes of this article I'm limiting the discussion to event-driven systems on priority-based, pre-emptive operating systems, on single processors.

I’m using the word task to mean ‘unit of execution’ or ‘unit of schedulability’, in preference to thread or process. I’m ignoring the different OS memory models.

There seems to be a fair amount of misunderstanding about the concept of priority in concurrent programming:

  • Priority means one task is more 'important' than another.
  • Priority allows one task to pre-empt another (True, but why is this important?)
  • Priority means one task runs more often than another.
  • Changing priorities will fix race conditions between tasks.

Here are some other viewpoints on the subject.

(http://www.codinghorror.com/blog/2006/08/thread-priorities-are-evil.html)

(http://stackoverflow.com/questions/95649/when-should-i-consider-changing-thread-priority)

The general consensus seems to be that arbitrarily adjusting a task's priority is 'bad'. However, there's not a lot of concrete guidance on when you should adjust a task's priority.

Task priority should be thought of as a measure of the ‘determinism of latency of response’ for that task. That is, the higher the priority of a task (relative to its peers) the more predictable (deterministic) its latency of response is. 
To understand this let’s consider an example system with a number of tasks, each with a different priority. All of the tasks may pend on some shared (protected) resource or event. 
In scenario 1, only the highest priority task is available to run. When it pends on the resource/event it gets it 'immediately' – that is, with the minimum possible latency (there will always be some processing overhead).
In scenario 2, only the lowest priority task is available to run. When it pends on the resource/event it also gains access with the smallest possible delay. In other words, in this scenario its latency of response is exactly the same as the highest priority task!
However, in scenario 3, things change. In this scenario let's have all our tasks available to run and attempting to access/respond to the resource/event. In this case, the highest priority task (by definition) gets access first. Its latency of response is the same (give or take) as when there are no other tasks running. That is, it has the most predictable latency (it's almost constant).
However, the lowest priority task must wait until all other pending tasks have finished. Its latency is: minimum + task1 processing time + task2 processing time + task3 processing time +… 
So, for the low priority task the latency is anywhere from the minimum up to some (possibly unpredictable) maximum. In fact, if we’re not careful, our highest priority task may be ready to access again before the lowest priority task has even had its first access – so-called ‘task starvation’.

[Figure: A task's priority affects its worst-case latency – the higher the priority, the more predictable the latency becomes.]


If all your tasks run at the same priority you effectively have no priorities at all. Most pre-emptive kernels therefore provide algorithms such as time-slicing between equal-priority tasks to ensure every task gets a 'fair share'.

So, why might I want to adjust my tasks’ priorities? Let’s take a common embedded system example: a pipe-and-filter processing ‘chain’.

The basic premise has a task pending on input events/signals from the environment. These are passed through a chain of filter tasks, via buffer 'pipes'. The pipes are there to cope with the differences in processing speed of each filter task and the (quite likely) 'bursty' nature of event arrival.

In a system with fast-arriving, or very transient, events we may wish to increase the priority of the front-end of the filter chain to avoid losing events.

Increasing the priority of the back-end of the filter chain favours throughput over event detection.

In each case the pipes must be sized to accommodate the amount of data being stored between filters. Ideally, we want to avoid the buffers becoming flooded (in which case the filter chain runs at the speed of the slowest filter).
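
As a concrete sketch of one stage in such a chain, here is a filter task written against POSIX message queues standing in for the pipes. The names (filter_task, process_sample, /pipe_in, /pipe_out) are illustrative assumptions, not from any particular design, and the queues are assumed to have been created at start-up with mq_msgsize equal to sizeof(int32_t):

#include <fcntl.h>
#include <mqueue.h>
#include <stdint.h>

/* Placeholder for the real signal-processing work of this filter */
static int32_t process_sample(int32_t in) { return in * 2; }

void *filter_task(void *arg)
{
    /* The 'pipes' are message queues, sized to absorb bursts */
    mqd_t in_pipe  = mq_open("/pipe_in",  O_RDONLY);
    mqd_t out_pipe = mq_open("/pipe_out", O_WRONLY);
    int32_t sample;

    for (;;) {
        /* Pend on the upstream pipe; blocks while it is empty */
        if (mq_receive(in_pipe, (char *)&sample, sizeof sample, NULL) < 0)
            continue;
        sample = process_sample(sample);
        /* Post to the downstream pipe; blocks only if it is full */
        mq_send(out_pipe, (const char *)&sample, sizeof sample, 0);
    }
    return NULL;
}

Raising the priority of the task running the front-end stage tightens its latency to arriving events; raising the back-end instead favours draining the pipes.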

[Figure: Adjusting task priorities to achieve system performance requirements]

However, all is not necessarily rosy. Your carefully-tuned system can be disrupted by introducing (either explicitly or through third-party or library code you have no control over) code with its own tasks.

[Figure: Introducing new code (explicitly or implicitly) can disrupt system performance]

The introduction of (in this case) another medium-priority task may skew the latency predictability of our original medium-priority task. For example, what happens if the new task runs for a significant period of time? It cannot be pre-empted by our filter task. If we are unlucky (and we so often are!) this can cause our system to stop meeting its performance requirements – even though there is no change in the original code! (I'll leave it as an exercise for the reader to consider the impact of this for reusable code…)

Finally, a huge caveat for multi-processor systems: priority only has meaning if the number of tasks exceeds the number of processors. Consider the extreme case where each task has its own processor. Each task, being the only task waiting to execute, will execute all the time. Therefore it is effectively always at the highest priority (on that processor).

If your design assigns multiple tasks to multiple processors then you must appreciate (and account for) the fact that priorities only have meaning on each individual processor. Priority is no longer a system-wide determinant.

Embedded Systems Conference – Silicon Valley

April 22nd, 2010

As I'm sure you're well aware, the ash cloud from the Eyjafjallajokull volcano (which began erupting in mid-March) brought much of European airspace to a standstill over last weekend and into this week. Now that UK airspace has been reopened, it appears I can resume plans for my visit to the Embedded Systems Conference in San Jose next week.

The sessions I’m directly involved with are:

Examining ARM’s Cortex Microcontroller Software Interface Standard – CMSIS
Date/Time: Tuesday (April 27, 2010)   3:15pm — 4:15pm
In this session I shall be examining ARM's Cortex Microcontroller Software Interface Standard (CMSIS – typically pronounced C-M-Sis). CMSIS aims to provide a framework of reusable components for software developers, silicon vendors and compiler vendors to utilize. Aspects such as power-up boot management and interrupt control have been abstracted and defined by ARM for their Cortex-M family of microcontroller cores. I'll spend time explaining the CMSIS framework, examining the code from a compiler and chip perspective, and finish off by discussing the pros and cons I see in such an approach.

Understanding Mutual Exclusion: The Semaphores vs. the Mutex
Date/Time: Wednesday (April 28, 2010)   12:00pm — 12:50pm
Location (room): ESC Theater 2
This presentation is based around much of the material in some of my previous postings (see RTOS related blog postings). One of the things I like about ESC events is that some of the sessions are free. So if you happen to be attending just the exhibition, you can still attend this session.

The State of Embedded
Date/Time: Wednesday (April 28, 2010)   4:00pm — 4:50pm
Location (room): ESC Theater 1
Finally, I have been invited to participate in an informal session along with Jack Ganssle and Dan Saks (moderated by Rich Nass of Embedded.com and EE Times) to discuss some of the latest trends in embedded systems. We did a similar session at ESC UK back in October 2009, which was very well received. It will be interesting to see how being the token European with a US audience differs from the UK event. This is also a free session.

If you’re at the event look me up and say hello.

Task Synchronisation – Part 2: Multiple Tasks and RTOS APIs

November 16th, 2009
First off, apologies for the delay in this follow-up to the previous post Task Synchronisation; it has been a mad couple of weeks with a combination of vacation and work.
In the previous post I looked at the foundations of task synchronisation, demonstrating that there is a range of synchronisation models (bilateral/unilateral, persistent/non-persistent, etc.). In this post I shall look at multi-task synchronisation and then investigate specific RTOS APIs.
MULTIPLE TASKS WAITING
So far we have only dealt with the simple case of two task synchronisation. We now address the case where multiple tasks are waiting at the synchronisation point:

We now have two tasks, Task2 and Task3, blocked waiting on the SO. Task1 now reaches the synchronisation point and signals the event. Again we could have either bilateral or unilateral sync, though typically Task1 doesn't block if no other task is waiting (the unilateral model).
The first question is: how many of the waiting tasks synchronise? All the waiting tasks, or only the first one? (And how do we define which task is the first one?).
In many systems we need the ability to broadcast a condition to all waiting tasks (defined as one-to-many or 1:M synchronisation). This is often referred to as a 'Barrier'. Barriers may be non-persistent (only those waiting at that time get readied; subsequent tasks block), or persistent (all tasks continue past the sync point until the synchronisation is removed with a manual reset).
Alternatively, only one waiting task is allowed past the synchronisation point, the rest remaining blocked. With single-task synchronisation there must be a policy regarding which task is synchronised. For a real-time system this is normally based on ordering the waiting queue by task priority. However, as a matter of choice, most RTOSs will also support queuing based on arrival order (FIFO).
MULTIPLE CONDITION SYNCHRONISATION
In the design of real-time systems it is common for a task to synchronise on a number of different conditions. This may involve a conjunction (AND) or disjunction (OR) of events. For example, a motor fault condition may be specified as:
  • low oil pressure AND > 30sec after start-up
Or a gas system nanoautomate alarm condition as:
  • Isobutane present in outlet line OR
  • Isobutane present in front flush OR
  • Isobutane present in rear flush
RTOS SYNCHRONISATION OBJECTS
Given the array of synchronisation requirements and options, how do modern RTOSs support synchronisation? The majority of RTOSs support the following:
  • Event Flags / Event Flag Groups
  • Semaphores as Signals
The Semaphore as a Signal is not to be confused with C standard library signals or UNIX signals. A signal is, in UNIX terminology, a notification sent to a process that will interrupt that process. In order to catch a signal a process must have a signal handler. This is similar to the behaviour of a software interrupt.

EVENT FLAGS
Typically an Event Flag is implemented as a Manual-reset, Persistent, Unilateral synchronisation object. It is by far the simplest idea and mechanism, and is best suited to one-to-many (1:M) or many-to-one (M:1) synchronisation. An API may consist of the following calls:

  • Set – sets the flag, readying any waiting tasks; can be called from an ISR
  • Clear – clears the flag; once cleared, any arriving task blocks
  • Wait – non-consuming pend on a flag
However, event flags are normally bound together to form Event Flag Groups (EFG). Having support for groups of flags allows tasks to wait either conjunctively or disjunctively. A typical implementation will bind an event flag group to a word in memory (e.g. 32 bits = 32 flags). To support groups, the API is extended to include a further argument, and the wait includes the specification of conjunctive or disjunctive synchronisation (a sketch follows the list below).
  • Set(group, bitmask)
  • Clear(group, bitmask)
  • Wait(group, bitmask, AND_OR, timeout)
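
By way of illustration, here is a hedged sketch of a task waiting conjunctively for the motor fault condition described earlier, written against this generic API. The EFG type and the efg_* names are hypothetical, not those of any particular RTOS:

#include <stdint.h>

#define EVT_LOW_OIL   (1u << 0)    /* low oil pressure */
#define EVT_PAST_30S  (1u << 1)    /* more than 30s after start-up */

typedef struct efg EFG;            /* opaque event flag group */
enum { EFG_ALL, EFG_ANY };         /* conjunctive (AND) / disjunctive (OR) */

extern void     efg_set(EFG *g, uint32_t mask);    /* callable from an ISR */
extern void     efg_clear(EFG *g, uint32_t mask);
extern uint32_t efg_wait(EFG *g, uint32_t mask, int mode, uint32_t timeout);

void motor_monitor_task(EFG *g)
{
    for (;;) {
        /* Conjunctive wait: block until BOTH flags are set */
        efg_wait(g, EVT_LOW_OIL | EVT_PAST_30S, EFG_ALL, 0 /* forever */);
        /* ... raise the motor fault alarm ... */
        efg_clear(g, EVT_LOW_OIL);  /* manual reset before re-arming */
    }
}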

RTOSs differ in the implementation of Event Flag Groups, with some only supporting M:1 synchronisation and not supporting 1:M or M:M synchronisation. In this case each event flag group is bound to a specific task (i.e. EFGs cannot stand as independent resources), altering the API to:

  • Set(task_id, bitmask)
  • Clear(task_id, bitmask)
  • Wait(bitmask, AND_OR, timeout, &events)
ISSUES WITH EVENT FLAGS

A surprising number of RTOSs do not support the concept of Event Flags, and thus offer no support for any form of disjunctive or conjunctive synchronisation. The usual argument for not supporting them is that it can be difficult (if not impossible) to make timing deterministic (especially for disjunction). Timing typically takes an O(N) form, where N is the number of waiting tasks.

In addition, I am not aware of any that support event flag groups being able to do combination logic; e.g. A or B or (C and D).

Some examples of event flag support from commercial RTOSs are:

  • VxWorks

    • 32 events in an event field
    • Each task has its own events field that can be filled by having tasks and/or ISRs sending events to the task.
  • ThreadX
    • 32 event flags in a group
    • Each event flag group is a public resource
  • Nucleus PLUS
    • 32 event flags in a group
    • Each event flag group is a public resource
  • uC/OS-III
    • 8, 16 or 32 event flags in a group (compile time configured)
    • Each event flag group is a public resource
SEMAPHORES AS SIGNALS
The generic concept of a signal is for synchronisation between two tasks, with a simple API of:
  • signal
  • wait – timeout option
However, the term ‘signal’ is now more commonly used in the context of C and UNIX programming, where it refers to an asynchronous communication mechanism.
Most RTOSs, then, do not support the concept of a signal directly, instead directing the programmer to use the semaphore as a synchronisation object (the Semaphore as a Signal pattern). When using a semaphore as a signal, the SO takes the form of an Auto-reset Persistent Unilateral synchronisation object.
The semaphore was originally designed to support mutual exclusion in multi-tasking systems, and pretty much all modern RTOSs support the semaphore. This is thoroughly covered in a previous posting (Mutex vs. Semaphore).
The fact that semaphores do not check that the releaser of the semaphore was actually the taker allows the semaphore to be used for unidirectional synchronisation. This works by creating the semaphore with a count of zero. When the semaphore count is zero, any task waiting ('pending' in our example) will block. When a signal is sent (SemPost), the count is incremented by one. If a task is waiting, it will consume the count token and continue (decrementing the count back to zero). If there is more than one task waiting, then only the first will be signalled – the others remain blocked (ordered either by priority or FIFO).
Note that the Pend and Post calls are not used as a pair in the same task. In the example, assuming Thread1 calls the OSSemPend function it will block. When Thread2 later calls the OSSemPost function the unilateral synchronisation takes place and both tasks are ready to run (with the higher priority task actually running).

The Semaphore as a Signal pattern is regularly used to synchronise tasks with ISRs triggered by external events.  This mechanism is favoured since the ISR will never block when posting to the semaphore (thus avoiding the potential to ‘stall’ the system).
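
A sketch of that ISR-to-task pattern in uC/OS-II style, matching the OSSemPend/OSSemPost calls mentioned above (prototypes abbreviated; exact signatures vary between RTOS versions):

#include "ucos_ii.h"                    /* uC/OS-II API, assumed */

OS_EVENT *data_ready;                   /* created with OSSemCreate(0) */

void uart_rx_isr(void)
{
    /* ... read the received byte from the hardware ... */
    OSSemPost(data_ready);              /* never blocks, so safe in an ISR */
}

void uart_task(void *pdata)
{
    INT8U err;
    for (;;) {
        OSSemPend(data_ready, 0, &err); /* 0 = wait forever */
        /* ... process the data: one pass per posted 'token' ... */
    }
}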

An RTOS may support counting semaphores (where the count increments each time it is signalled) and/or binary semaphores (where the count is either zero or one – signalled or unsignalled).  The choice of semaphore can have important implications for the behaviour of the application.

If we have sporadic interrupts, then the ISR may signal the semaphore multiple times before the task waits on it. Does the application need to know only that the event has occurred, or does it need to know the actual number of times an event has occurred? Either is valid, depending on the system requirements. Note that most RTOSs use the counting semaphore as a signal and thus will count the number of events.

Unfortunately, using the semaphore as a synchronisation primitive can be problematic in that it makes debugging harder and increases the potential to miss 'accidental release' type problems, as an OSSemPost on its own (i.e. not paired with an OSSemPend) is considered legal code.

As an example, VxWorks does not support signals, but does support both the binary semaphore and the counting semaphore. Either can be used for synchronisation if created EMPTY (0). The following calls can be used by tasks to use the semaphore as a synchronisation object.

  • semGive()
    • Binary – giving a “full” semaphore has no effect
    • Counting – increments count by one
  • semTake()
    • will block if count is zero
  • semFlush()
    • all waiting tasks are unblocked; semaphore left at zero
The semFlush call is interesting, in that it allows all waiting tasks past the synchronisation point, thus supporting a 1:M barrier synchronisation.
BILATERAL SYNCHRONIZATION
As already stated, one limitation of bilateral synchronisation (aka the Rendezvous) is that it cannot be used for ISR-to-task synchronisation. Because of this, bilateral synchronisation is rarely supported by commercial RTOSs. Notable, though, is the ITRON Project, which creates standards for real-time operating systems used in embedded systems (µITRON4.0 Specification). Like the Ada programming language it supports the concept of the Rendezvous for bilateral synchronisation (it actually uses the term Rendezvous Ports).
For those RTOSs that don’t support Bilateral Synchronization, this can be simulated/implemented using a pair of semaphores.

Irrespective of which task reaches the synchronisation point first, it will have to wait until the other task arrives.
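
A minimal sketch of that simulation, using two POSIX counting semaphores, each created with a count of zero:

#include <semaphore.h>

static sem_t a_arrived, b_arrived;   /* sem_init(&x, 0, 0) at start-up */

void taskA_sync_point(void)
{
    sem_post(&a_arrived);            /* announce arrival of Task A */
    sem_wait(&b_arrived);            /* block until Task B arrives */
}

void taskB_sync_point(void)
{
    sem_post(&b_arrived);
    sem_wait(&a_arrived);
}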
MUTUAL EXCLUSION AND SYNCHRONISATION
So far we have considered general synchronisation between two or more tasks. There is one further synchronisation use case we need to examine. Take, for example, the C code for a simple stack shown below. These routines use the variable count to keep track of the stack size. If the stack is empty then pop returns STACK_EMPTY and the caller must check for it and take appropriate error handling actions.
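
A minimal sketch of such a stack, consistent with the description above (names are illustrative):

#define STACK_SIZE  32
#define STACK_EMPTY (-1)

static int stack[STACK_SIZE];
static int count = 0;               /* number of elements held */

void push(int value)
{
    stack[count++] = value;         /* bounds checking omitted */
}

int pop(void)
{
    if (count == 0)
        return STACK_EMPTY;         /* caller must handle this */
    return stack[--count];
}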

Suppose the stack routines are protected by a mutex, and that instead of returning STACK_EMPTY we want pop to wait (synchronise) until the stack contains data. If the waiting task simply blocked while owning the mutex, no other task would be able to enter the critical region and push an element onto the stack. Thus the waiting task could never be signalled that the stack is no longer empty – a deadlock.

A Condition Variable is used in conjunction with a mutex to let a task wait for a certain condition to become true (e.g. the stack becoming not empty). A condition variable (also called a condition object), therefore, is a unilateral, auto-reset, non-counting synchronisation object.
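
In POSIX terms the pattern looks like the sketch below (reusing the stack and count from the earlier sketch). The crucial detail is that pthread_cond_wait atomically releases the mutex while the task waits, which is exactly what prevents the deadlock described above:

#include <pthread.h>

static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

void push(int value)
{
    pthread_mutex_lock(&lock);
    stack[count++] = value;
    pthread_cond_signal(&not_empty);           /* wake one waiting pop() */
    pthread_mutex_unlock(&lock);
}

int pop(void)
{
    pthread_mutex_lock(&lock);
    while (count == 0)                         /* loop guards against spurious wake-ups */
        pthread_cond_wait(&not_empty, &lock);  /* releases lock while blocked */
    int value = stack[--count];
    pthread_mutex_unlock(&lock);
    return value;
}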

SUMMARY
When designing software for embedded systems that relies on a particular Real-Time Operating System, it is essential to understand that the behaviour of synchronisation primitives differs from RTOS to RTOS. Many questions need answering, such as:

  • Does the RTOS support only unilateral synchronisation, or does it include primitives for bilateral synchronisation?
  • If multiple tasks are waiting then how many are readied?
    • If only one, then how is it selected (priority / FIFO)?
  • Is the signal persistent or non-persistent?
  • Is the synchronisation object manual-reset or auto-reset?
  • Does the RTOS support multiple event Conjunction and/or Disjunction?
Finally, where mutual exclusion is required (as in most systems) does the RTOS support the concept of condition objects?
In summary, note that many RTOSs are very weak in terms of supporting different types of synchronisation, most relying on using the counting semaphore as a general synchronisation object.

Task Synchronisation

October 15th, 2009
Synchronisation is an everyday event, both in the real world and in computer programs. For example, meeting a friend for a coffee requires synchronisation, in that both parties need to arrive within a given timeframe to make the event worthwhile (sometimes referred to as a rendezvous – though that tends to have more romantic implications). Alternatively, receiving a PO via fax is a form of synchronisation: the company waiting on the PO will not (or cannot) start working on the project until this event occurs. Finally, in an automated robot manufacturing system, the movement and work done by each robot must be synchronised with each other and with the actual production line.
In the field of embedded systems software there are also many requirements for synchronisation within a program. In a multi-tasking system using an RTOS, examples are:
  • Asynchronous device driver where we are dealing with slow devices. We don’t necessarily want tasks blocked waiting on the device.
  • At system start-up, many RTOSs start tasks as active (ready to run). We may have an ordering dependency for execution (e.g. initialisation of global resources) where all tasks must wait for a given condition (the concept of a barrier which can be very important in multi-processor systems).
  • Having a managed task abort notification, rather than deleting tasks (which can lead to resource issues). Similar in concept to the UNIX/Linux kill signal. Also used to manage task pools.
The definition of synchronisation found on dictionary.com is:
synchronisation
noun
  1. the relation that exists when things occur at the same time; “the drug produces an increased synchrony of the brain waves” [syn: synchronism] [ant: asynchronism] 
  2. an adjustment that causes something to occur or recur in unison [syn: synchronization]
  3. coordinating by causing to indicate the same time; “the synchronization of their watches was an important preliminary” [syn: synchronization]
So synchronisation is about making sure events happen at the same time (as with a common clock in communications systems), as opposed to asynchronous, which means not happening at the same time. Unfortunately, as we shall see, most texts related to RTOS descriptions misuse or misunderstand this term.
It should be noted that synchronisation and mutual exclusion often get lumped together and confused. Mutual exclusion is about making sure things don't happen at the same time, whereas synchronisation is about making sure they do.

In regard to task synchronisation there are four classes of interaction we need to address:

  1. one-to-one – only two tasks synchronising
  2. one-to-many
  3. many-to-one
  4. many-to-many

Initially we address the condition where only two tasks are synchronising.

ONE-TO-ONE : SINGLE CONDITION
In the simplest case of synchronisation, we have two tasks (Task1 and Task2) that need to synchronise their execution.
  • Task2 runs until it reaches the synchronisation point as defined by an RTOS synchronisation object (SO), at which point it waits for Task1
  • Task1 now runs, reaches the synchronisation point, and signals Task2 via the SO. Both tasks are now ready to run.
  • The higher priority task now continues execution and the lower priority task is made ready (If the tasks are of equal priority typically Task1 will continue as this avoids a context switch).
We can now say that Task1 and Task2 have synchronized their threads of execution.

However, what should the behaviour be if Task1 arrives first? In terms of the dictionary definition of synchronisation, Task1 should wait for Task2. Unfortunately, with most RTOSs this is not the case and Task1 will continue execution without blocking. This means we need to further refine our definition of synchronisation to include the concepts of:
  • Symmetric or Bilateral Synchronisation
  • Asymmetric or Unilateral Synchronisation
BILATERAL SYNCHRONISATION
Bilateral synchronisation is the condition where whichever task arrives first waits for the other one. This is often called the Rendezvous (as supported by the Ada programming language). Surprisingly, this is rarely supported by the majority of RTOSs. One limitation of bilateral synchronisation is that it cannot be used for ISR-to-task synchronisation (as an ISR must never block).
UNILATERAL SYNCHRONISATION
Unilateral Synchronisation, which feels like a paradox, is the case where:
  • if Task2 arrives at the SO first it will wait for Task1, and then synchronise with Task1 when it arrives
  • if Task1 arrives at the SO first it will not wait for Task2 – thus unilateral synchronisation.

When reading literature associated with RTOSs and synchronisation, this is the most commonly described model (e.g. Li, 2003).
This, however, brings us yet another dilemma: what happens when Task2 now reaches the synchronisation point (assuming Task1 has already passed it)? Does Task2 block and wait for Task1, or does it see that Task1 has already visited the SO and continue? To put it another way, is the notification of synchronisation to the synch object from Task1 persistent or non-persistent?
NON-PERSISTENT UNILATERAL SYNCHRONISATION OBJECT
In a non-persistent model, the fact that Task1 has already passed the synchronisation point is not remembered, therefore Task2 blocks until Task1 signals again. Due to how most RTOSs actually support unilateral synchronisation (discussed later), this, like bilateral synchronisation, is also an uncommon model. Interestingly, Win32 supports this model through the PulseEvent call. If no tasks are waiting when PulseEvent is called then the event is discarded.
PERSISTENT UNILATERAL SYNCHRONISATION OBJECT
The key to this model is that the fact that Task1 has already signalled the SO (passed the synchronisation point) is remembered. When Task2 arrives at the synchronisation point it doesn't block, but continues (this particular use case is actually asynchronous event notification).
However, we have yet another dichotomy: does Task2 consume the signal Task1 has set (auto-reset), or does Task1 clear the signal (manual-reset) at some later time?

In the consuming model, if Task2 now arrives back at the SO and waits, it will block until again signalled by Task1. In the manual-reset model, Task2 will continue to pass the synchronisation point until Task1 (or indeed Task2) explicitly clears the signal (normally with a separate API call).
Finally, in the consuming model, what happens if Task1 signals the SO more than once before Task2 arrives at the synchronisation point, so that the original signal has not been consumed? In one model there is no effect: the signal remains set and is consumed when Task2 arrives (the binary model). The alternative is that a count is kept of the number of signals set by Task1. Each time Task2 waits on the SO this count is decremented, and Task2 will only block if the count is zero (the counting model).

So we can classify RTOS synchronisation along the following dimensions: bilateral vs. unilateral; persistent vs. non-persistent; auto-reset vs. manual-reset; and binary vs. counting.

In the next posting I shall look at synchronisation involving more than two tasks, and then follow that by examining some actual RTOSs and their support for synchronisation.

Mutex vs. Semaphores – Part 3 (final part): Mutual Exclusion Problems

October 5th, 2009

As hopefully you can see from the previous posting, the mutex is a significantly safer mechanism to use for implementing mutual exclusion around shared resources. Nevertheless, there are still a couple of problems that use of the mutex (in preference to the semaphore) will not solve. These are:

  • Circular deadlock
  • Non-cooperation

Circular Deadlock
Circular deadlock, often referred to as the 'deadly embrace' problem, is a condition where two or more tasks develop a circular dependency of mutual exclusion. Simply put, one task is blocked waiting on a mutex owned by another task. That other task is also blocked waiting on a mutex held by the first task.

So how can this happen? Take as an example a small control system. The system is made up of three tasks, a low priority Control task, a medium priority System Identification (SI) task and a high priority Alarm task. There is an analogue input shared by the Control and the SI tasks, which is protected by a mutex. There is also an analogue output protected by a different mutex.

The Control task waits for mutexes ADC and DAC:

mutex_lock (ADC);
mutex_lock (DAC);
/* critical section */
mutex_unlock (ADC);
mutex_unlock (DAC);

The SI Task waits for mutexes DAC and ADC – note the opposite locking order:

mutex_lock (DAC);
mutex_lock (ADC);
/* critical section */
mutex_unlock (DAC);
mutex_unlock (ADC);

Unfortunately, under certain timing conditions, this can lead to deadlock. In this example the Control task has locked the ADC, but before locking the DAC it is pre-empted by the higher priority SI task. The SI task then locks the DAC and tries to lock the ADC. The SI task is now blocked as the ADC is already owned by the Control task. The Control task now runs and tries to lock the DAC. It is blocked as the DAC is held by the SI task. Neither task can continue until the other's mutex is unlocked, and neither mutex can be unlocked until either task runs – classic deadlock.

For circular deadlock to occur the following conditions must all be true:

  • A thread has exclusive use of resources (Mutual exclusion)
  • A thread can hold on to a resource(s) whilst waiting for another resource (Hold and wait)
  • A circular dependency of thread and resources is set up (Circular waiting)
  • A thread never releases a resource until it is completely finished with it (No resource preemption)

These conditions can be addressed in a number of ways. For example, a design policy may stipulate that if a task needs to lock more than one mutex it must either lock all or none.
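
Another standard remedy, not developed further in the text above, is to break the 'circular waiting' condition directly by imposing a single global locking order that every task must follow. Reusing the calls from the example:

/* Both tasks now acquire the mutexes in the same (ADC before DAC)
   order, so a circular wait can never form. */
mutex_lock (ADC);
mutex_lock (DAC);
/* critical section */
mutex_unlock (DAC);
mutex_unlock (ADC);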

Priority Ceiling Protocol

With the Priority Ceiling Protocol (PCP) method each mutex has a defined priority ceiling, set to that of the highest priority task which uses the mutex. Any task using a mutex executes at its own priority – until a second task attempts to acquire the mutex. At this point the holding task has its priority raised to the ceiling value, preventing suspension and thus eliminating the 'hold and wait' condition.

In the deadlock example shown before, the significant point is when the SI task tries to lock the DAC. Previously that succeeded and led to cyclic deadlock. However, with a PCP mutex, both the ADC and DAC mutexes will have a ceiling priority equal to the SI task's priority. When the SI task tries to lock the DAC, the run-time system will detect that the SI task's priority is not higher than the ceiling priority of the locked mutex ADC. The run-time system suspends the SI task without locking the DAC mutex. The Control task now inherits the priority of the SI task and resumes execution.

Non-cooperation

The last, but most important, aspect of mutual exclusion covered in these ramblings relies on one founding principle: we have to rely on all tasks to access critical regions using the mutual exclusion primitives. Unfortunately this is dependent on the design of the software and cannot be detected by the run-time system. This final problem was addressed by Tony Hoare with a construct called the Monitor.

The Monitor

The monitor is a mechanism not typically supplied by the RTOS, but something the programmer tends to build (a notable exception is Ada95's protected object mechanism). A monitor simply encapsulates the shared resource and the locking mechanism into a single construct (e.g. a C++ object that encapsulates the mutex mechanism). Access to the shared resource, then, is through a controlled interface which cannot be bypassed (i.e. the application never explicitly calls on the mutex, but calls access functions instead).
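
A minimal sketch of the idea in C, assuming a POSIX mutex: the lock is private to the module, so callers can only reach the shared data through the access functions:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* hidden from callers */
static int shared_value;                                  /* the protected resource */

void monitor_write(int v)
{
    pthread_mutex_lock(&lock);
    shared_value = v;
    pthread_mutex_unlock(&lock);
}

int monitor_read(void)
{
    pthread_mutex_lock(&lock);
    int v = shared_value;
    pthread_mutex_unlock(&lock);
    return v;
}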

Finishing Off…
The goal of these initial postings is to demonstrate that common terms used in the real-time programming community are open to ambiguity and interpretation. Hopefully you should now be clear about the core differences between the Binary Semaphore, the General (counting) Semaphore and the Mutex.

The underlying difference between the Semaphores and the Mutex is the Principle of Ownership. Given the principle of ownership a particular implementation of a mutex may support Recursion, Priority inheritance and Death Detection.

ENDNOTE
An aspect of the mutex I haven't covered here is that many operating systems support the concept of a condition variable. A condition variable allows a task to wait on a synchronisation primitive within a critical region. The whole area of synchronisation patterns (e.g. the semaphore as a signal) within the context of RTOSs will be the subject of my next posting.

Mutex vs. Semaphores – Part 2: The Mutex

September 11th, 2009

In Part 1 of this series we looked at the history of the binary and counting semaphore, and then went on to discuss some of the associated problem areas. In this posting I aim to show how a different RTOS construct, the mutex, may overcome some, if not all, of these weaknesses.

To address the problems associated with the semaphore, a new concept was developed during the late 1980s. I have struggled to find its first clear definition, but the major use of the term mutex (another neologism, based around MUTual EXclusion) appears to have been driven through the development of the common programming specification for UNIX-based systems. In 1990 this was formalised by the IEEE as standard IEEE Std 1003.1, commonly known as POSIX.

The mutex is similar in principle to the binary semaphore with one significant difference: the principle of ownership. Ownership is the simple concept that when a task locks (acquires) a mutex, only it can unlock (release) it. If a task tries to unlock a mutex it hasn't locked (and thus doesn't own) then an error condition is encountered and, most importantly, the mutex is not unlocked. If the mutual exclusion object doesn't have ownership then, irrespective of what it is called, it is not a mutex.

The concept of ownership enables mutex implementations to address the problems discussed in part 1:

  1. Accidental release
  2. Recursive deadlock
  3. Task-Death deadlock
  4. Priority inversion
  5. Semaphore as a signal


Accidental Release
As already stated, ownership stops accidental release of a mutex: a check is made on release and an error is raised if the current task is not the owner.

Recursive Deadlock
Due to ownership, a mutex can support relocking of the same mutex by the owning task as long as it is released the same number of times.

Priority Inversion
With ownership this problem can be addressed using one of the following priority inheritance protocols:

  • [Basic] Priority Inheritance Protocol
  • Priority Ceiling Protocol

The Basic Priority Inheritance Protocol enables a low-priority task to inherit a higher-priority task's priority if the higher-priority task becomes blocked waiting on a mutex currently owned by the low-priority task. The low-priority task can now run and unlock the mutex – at which point it is returned to its original priority.

The details of the Priority Inheritance Protocol and Priority Ceiling Protocol (a slight variant) will be covered in part 3 of this series.

Death Detection
If a task terminates for any reason, the RTOS can detect whether that task currently owns a mutex and signal waiting tasks of this condition. In terms of what happens to the waiting tasks, there are various models, but two dominate:

  • All tasks readied with error condition;
  • Only one task readied; this task is responsible for ensuring integrity of critical region.

When all tasks are readied, they must assume the critical region is in an undefined state. In this model no task has ownership of the mutex; the mutex itself is in an undefined state (and cannot be locked) and must be reinitialised.

When only one task is readied, ownership of the mutex is passed from the terminated task to the readied task. This task is now responsible for ensuring the integrity of the critical region, and can unlock the mutex as normal.

Mutual Exclusion / Synchronisation
Because of ownership, a mutex cannot be used for synchronisation: the lock and unlock calls must be paired in the same task. This makes the code cleaner by not confusing the issues of mutual exclusion with synchronisation.

Caveat
A specific operating system's mutex implementation may or may not support the following:

  • Recursion
  • Priority Inheritance
  • Death Detection

Review of some APIs
It should be noted that many Real-Time Operating Systems (or more correctly Real-Time Kernels) do not support the concept of the mutex, only supporting the Counting Semaphore (e.g. MicroC/OS-II). [ CORRECTION: The later versions of uC/OS-II do support the mutex, only the original version did not].

In this section we shall briefly examine three different implementations. I have chosen these as they represent the broad spectrum of APIs offered (Footnote 1):

  • VxWorks Version 5.4
  • POSIX Threads (pThreads) – IEEE Std 1003.1, 2004 Edition
  • Microsoft Windows Win32 – Not .NET

VxWorks from Wind River Systems is among the leading commercial Real-Time Operating Systems used in embedded systems today. POSIX Threads is a widely supported standard, which has become more widely used with the growth of Embedded Linux. Finally, Microsoft Windows' common programming API, Win32, is examined. Windows CE, targeted at embedded development, supports this API.

However, before addressing the APIs in detail we need to introduce the concept of a Release Order Policy. In Dijkstra's original work the concept of task priorities was not part of the problem domain. It was therefore assumed that if more than one task was waiting on a held semaphore, the next task to acquire the semaphore would be chosen on a First-Come-First-Served (First-In-First-Out; FIFO) policy. However, once tasks have priorities, the policy may be:

  • FIFO            – waiting tasks ordered by arrival time
  • Priority        – waiting tasks ordered by priority
  • Undefined    – implementation doesn't specify

VxWorks v5.4
VxWorks supports the Binary Semaphore, the Counting Semaphore and the Mutex (called the Mutual-Exclusion Semaphore in VxWorks terminology). They all support a common API for acquiring (semTake) and releasing (semGive) the particular semaphore. For all semaphore types, waiting tasks can be queued by priority or FIFO and can have a timeout specified.

The Binary Semaphore has, as expected, no support for recursion or inheritance, and the taker and giver do not have to be the same task. Some additional points of interest: giving the semaphore again when it is already full has no effect; it can be used as a signal (thus can be created empty); and it supports the idea of a broadcast release (wake up all waiting tasks rather than just the first). The Counting Semaphore, as expected, is the same as the Binary Semaphore with the ability to define an initial count.

The Mutual-Exclusion Semaphore is the VxWorks mutex. Only the owning task may successfully call semGive. The VxWorks mutex also has the ability to support both priority inheritance (basic priority inheritance protocol) and deletion safety.

POSIX
POSIX is an acronym for Portable Operating System Interface (the X has no meaning). The current POSIX standard is formally defined by IEEE Std 1003.1, 2004 Edition. The mutex is part of the core POSIX Threads (pThreads) specification (historically referred to as IEEE Std 1003.1c-1995).
POSIX also supports both semaphores and priority-inheritance mutexes as part of what are called Feature Groups. Support for these Feature Groups is optional, but when an implementation claims that a feature is provided, all of its constituent parts must be provided and must comply with the specification. There are two main Feature Groups of interest: the Realtime Group and the Realtime Threads Group.

The semaphore is not part of the core standard but is supported as part of the Realtime Feature Group. The Realtime Semaphore is an implementation of the Counting semaphore.

The default POSIX mutex is non-recursive and has no priority inheritance support or death detection.
However, the Pthreads standard allows for non-portable extensions (as long as they are tagged with "_np"). A high proportion of programmers using POSIX threads are programming for Linux. Linux supports four different mutex types through non-portable extensions:

  • Fast mutex                  – non-recursive and will deadlock [default]
  • Error checking mutex – non-recursive but will report error
  • Recursive mutex        – as the name implies
  • Adaptive mutex         – extra fast for multi-processor systems

These are extremely well covered by Chris Simmonds in his posting Mutex mutandis: understanding mutex types and attributes.

Finally the Realtime Threads Feature Group adds mutex support for both priority inheritance and priority ceiling protocols.
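
As a sketch, both the mutex type and the protocol are selected through mutex attributes; PTHREAD_PRIO_PROTECT would select the priority ceiling protocol instead of inheritance:

#include <pthread.h>

void make_recursive_pi_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);   /* recursive type */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);  /* priority inheritance */
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
}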

Win32 API
Microsoft Windows' common API is referred to as Win32. This API supports three different primitives:

  • Semaphore            – The counting semaphore
  • Critical Section     – Mutex between threads in the same process; Recursive, no timeout, queuing order undefined
  • Mutex                    – As per critical sections, but can be used by threads in different processes; Recursive, timeout, queuing order undefined

The XP/Win32 mutex API does not support priority inheritance in application code, however the WinCE/Win32 API does!

Win32 mutexes do have built-in death detection; if a thread terminates when holding a mutex, then that mutex is said to be abandoned. The mutex is released (with WAIT_ABANDONED error code) and a waiting thread will take ownership. Note that Critical sections do not have any form of death detection.
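
A sketch of what this looks like from the waiting thread's side, using the standard Win32 calls:

#include <windows.h>

static HANDLE m;   /* created with CreateMutex(NULL, FALSE, NULL) at start-up */

void worker(void)
{
    DWORD rc = WaitForSingleObject(m, INFINITE);
    if (rc == WAIT_ABANDONED) {
        /* The previous owner terminated while holding the mutex. This
           thread now owns it, but the protected data may be in an
           inconsistent state and should be validated or reinitialised. */
    }
    /* ... critical region ... */
    ReleaseMutex(m);
}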

Critical Sections have no timeout ability, whereas mutexes do. However, Critical Sections do support a separate non-blocking call, TryEnterCriticalSection. A major weakness of the Win32 API is that the queuing model is undefined (i.e. neither Priority nor FIFO). According to Microsoft this is done to improve performance.

So, what can we gather from this? First and foremost, the term mutex is less well defined than the semaphore. Secondly, the actual implementations vary massively from RTOS to RTOS. I urge you to go back and look at your favourite RTOS and work out what support, if any, you have for the mutex. I'd love to hear from people regarding mutual exclusion support (both semaphores and mutexes) in their RTOS of choice. If you'd like to contact me do so at nsc(at)acm.org.

Finally, Part 3 will look at a couple of problems the mutex doesn't solve, and how these can be overcome. As part of that it will review the Basic Priority Inheritance Protocol and the Priority Ceiling Protocol.

At a later date I will also address the use of, and problems associated with, the semaphore being used for task synchronisation.

ENDNOTES

  1. Please I do not want to get into the “that’s not a real-time OS” debate here – let’s save that for another day!
  2. A number of people pointed out that Michael Barr (former editor of Embedded Systems Programming, now president of Netrino) has a good article about the differences between mutexes & semaphores at the following location: http://www.netrino.com/node/202. I urge you to read his posting as well.
  3. Apologies about not having the atom feed sorted – this should all be working now

Mutex vs. Semaphores – Part 1: Semaphores

September 7th, 2009

It never ceases to amaze me how often I see postings in newsgroups, etc. asking the difference between a semaphore and a mutex. Probably what baffles me more is that over 90% of the time the responses given are either incorrect or miss the key differences. The most often quoted response is that of "The Toilet Example (c) Copyright 2005, Niclas Winquist". This summarises the differences as:

  • A mutex is really a semaphore with value 1

No, no and no again. Unfortunately this kind of talk leads to all sorts of confusion and misunderstanding (not to mention companies like Wind River Systems redefining a mutex as a "Mutual-Exclusion Semaphore" – now where is that wall to bang my head against?).

Firstly we need to clarify some terms, and this is best done by revisiting the roots of the semaphore. Back in 1965, Edsger Dijkstra, a Dutch computer scientist, introduced the concept of a binary semaphore into modern programming to address possible race conditions in concurrent programs. His very simple idea was to use a pair of function calls to the operating system to indicate entering and leaving a critical region. This was achieved through the acquisition and release of an operating system resource called a semaphore. In his original work, Dijkstra used the notation of P & V, from the Dutch words Prolagen (P), a neologism meaning 'to try and lower', and Verhogen (V), 'to raise, to increase'.

With this model the first task arriving at the P(S) [where S is the semaphore] call gains access to the critical region. If a context switch happens while that task is in the critical region, and another task also calls on P(S), then that second task (and any subsequent tasks) will be blocked from entering the critical region by being put in a waiting state by the operating system. At a later point the first task is rescheduled and calls V(S) to indicate it has left the critical region. The second task will now be allowed access to the critical region.

A variant of Dijkstra's semaphore was put forward by another Dutchman, Dr. Carel S. Scholten. In his proposal the semaphore can have an initial value (or count) greater than one. This enables the building of programs where more than one resource is being managed in a given critical region. For example, a counting semaphore could be used to manage the parking spaces in a robotic parking system. The initial count would be set to the number of free parking places. Each time a place is used the count is decremented. If the count reaches zero then the next task trying to acquire the semaphore would be blocked (i.e. it must wait until a parking space is available). Upon releasing the semaphore (a car leaving the parking system) the count is incremented by one.
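
In POSIX terms the parking example might be sketched as follows, with sem_wait playing the role of P and sem_post of V:

#include <semaphore.h>

#define NUM_SPACES 10

static sem_t spaces;     /* sem_init(&spaces, 0, NUM_SPACES) at start-up */

void car_arrives(void)
{
    sem_wait(&spaces);   /* blocks while the car park is full */
    /* ... direct the car to a free space ... */
}

void car_leaves(void)
{
    sem_post(&spaces);   /* one more space available */
}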

Scholten’s semaphore is referred to as the General or Counting Semaphore, Dijkstra’s being known as the Binary Semaphore.

Pretty much all modern Real-Time Operating Systems (RTOS) support the semaphore. For the majority, the actual implementation is based around the counting semaphore concept. Programmers using these RTOSs may use an initial count of 1 (one) to approximate to the binary semaphore. One of the most notable exceptions is probably the leading commercial RTOS VxWorks from Wind River Systems. This has two separate APIs for semaphore creation, one for the Binary semaphore (semBCreate) and another for the Counting semaphore (semCCreate).

Hopefully we now have a clear understanding of the difference between the binary semaphore and the counting semaphore. Before moving onto the mutex we need to understand the inherent dangers associated with using the semaphore. These include:

  • Accidental release
  • Recursive deadlock
  • Task-Death deadlock
  • Priority inversion
  • Semaphore as a signal

All these problems occur at run-time and can be very difficult to reproduce, which makes diagnosis and technical support hard.

Accidental release

This problem arises mainly due to a bug fix, product enhancement or cut-and-paste mistake. In this case, through a simple programming mistake, a semaphore isn’t correctly acquired but is then released.

When the counting semaphore is being used as a binary semaphore (initial count of 1 – the most common case) this then allows two tasks into the critical region. Each time the buggy code is executed the count is incremented and yet another task can enter. This is an inherent weakness of using the counting semaphore as a binary semaphore.
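
A sketch of how such a bug typically creeps in (POSIX names; the hypothetical error path with its unmatched release is the point, not the API):

#include <semaphore.h>

static sem_t guard;      /* counting semaphore, initialised with count 1 */

void update_shared_data(int input_ok)
{
    if (!input_ok) {
        sem_post(&guard);   /* BUG: pasted clean-up on a path that never
                               called sem_wait(); the count is now 2 and
                               two tasks can enter the critical region */
        return;
    }
    sem_wait(&guard);
    /* ... critical region ... */
    sem_post(&guard);
}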

Deadlock

Deadlock occurs when tasks are blocked waiting on some condition that can never become true, e.g. waiting to acquire a semaphore that never becomes free. There are three possible deadlock situations associated with the semaphore:

  • Recursive Deadlock
  • Deadlock through Death
  • Cyclic Deadlock (Deadly Embrace)

Here we shall address the first two, but shall return to the cyclic deadlock in a later posting.

Recursive Deadlock

Recursive deadlock can occur if a task tries to lock a semaphore it has already locked. This can typically occur in libraries or recursive functions; for example, the simple locking of malloc being called twice within the framework of a library. An example of this appeared in the MySQL database bug reporting system: Bug #24745 InnoDB semaphore wait timeout/crash – deadlock waiting for itself
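
A hedged sketch of the malloc-style scenario (hypothetical names; a binary semaphore with initial count 1 guards the heap):

#include <semaphore.h>
#include <stdlib.h>

static sem_t heap_sem;             /* binary semaphore, initial count 1 */

void *locked_alloc(size_t n)
{
    sem_wait(&heap_sem);           /* blocks forever if the caller
                                      already holds heap_sem */
    void *p = malloc(n);
    sem_post(&heap_sem);
    return p;
}

void library_init(void)
{
    sem_wait(&heap_sem);                  /* take the lock ...            */
    void *scratch = locked_alloc(64);     /* ... then deadlock right here */
    /* never reached */
    sem_post(&heap_sem);
}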

Deadlock through Task Death

What if a task that is holding a semaphore dies or is terminated? If you can't detect this condition then all tasks waiting (or that wait in the future) will never acquire the semaphore, and will deadlock. To partially address this, it is common for the function call that acquires the semaphore to offer an optional timeout value.
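
With POSIX semaphores, for example, the wait can be bounded using sem_timedwait, which takes an absolute CLOCK_REALTIME deadline:

#include <errno.h>
#include <semaphore.h>
#include <time.h>

int acquire_with_timeout(sem_t *s, int seconds)
{
    struct timespec deadline;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += seconds;            /* give up after 'seconds' */

    if (sem_timedwait(s, &deadline) != 0 && errno == ETIMEDOUT) {
        /* owner probably dead or stuck: recover rather than deadlock */
        return -1;
    }
    return 0;
}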

Priority Inversion

The majority of RTOSs use a priority-driven pre-emptive scheduling scheme. In this scheme each task has its own assigned priority. The pre-emptive scheme ensures that a higher priority task will force a lower priority task to release the processor so it can run. This is a core concept to building real-time systems using an RTOS. Priority inversion is the case where a high priority task becomes blocked for an indefinite period by a low priority task. As an example:

  • An embedded system contains an “information bus”
  • Sequential access to the bus is protected with a semaphore.
  • A bus management task runs frequently with a high priority to move certain kinds of data in and out of the information bus.
  • A meteorological data gathering task runs as an infrequent, low priority task, using the information bus to publish its data. When publishing its data, it acquires the semaphore, writes to the bus, and releases the semaphore.
  • The system also contains a communications task which runs with medium priority.
  • Very infrequently it is possible for an interrupt to occur that causes the (medium priority) communications task to be scheduled while the (high priority) information bus task is blocked waiting for the (low priority) meteorological data task.
  • In this case, the long-running communications task, having higher priority than the meteorological task, prevents it from running, consequently preventing the blocked information bus task from running.
  • After some time has passed, a watchdog timer goes off, notices that the data bus task has not been executed for some time, concludes that something has gone drastically wrong, and initiates a total system reset.

This actual sequence of events happened on NASA JPL's Mars Pathfinder spacecraft, and was widely reported.

Semaphore as a Signal

Unfortunately, the term synchronization is often misused in the context of mutual exclusion. Synchronization is, by definition, "to occur at the same time; be simultaneous". Synchronization between tasks is where, typically, one task waits to be notified by another task before it can continue execution (the unilateral rendezvous). A variant of this, where either task may wait, is called the bidirectional rendezvous. This is quite different to mutual exclusion, which is a protection mechanism. However, the misuse has arisen because the counting semaphore can be used for unidirectional synchronization. For this to work, the semaphore is created with a count of 0 (zero).

Note that the P and V calls are not used as a pair in the same task. In the example, assuming Task1 calls P(S) it will block. When Task2 later calls V(S) the unilateral synchronization takes place and both tasks are ready to run (with the higher priority task actually running). Unfortunately, "misusing" the semaphore as a synchronization primitive can be problematic in that it makes debugging harder and increases the potential to miss "accidental release" type problems, as a V(S) on its own (i.e. not paired with a P(S)) is now considered legal code.

In the next posting I shall look at how the mutex addresses most of the weaknesses of the semaphore.
