Debunking priority

Before I start, a disclaimer:

For the purposes of this article I’m limiting the discussion to event-driven systems on priority-based, pre-emptive operating systems, on single processors.

I’m using the word task to mean ‘unit of execution’ or ‘unit of schedulability’, in preference to thread or process. I’m ignoring the different OS memory models.

There seems to be a fair amount of misunderstanding about the concept of priority in concurrent programming:

  • Priority means one task is more ’important’ than another.
  • Priority allows one task to pre-empt another (True, but why is this important?)
  • Priority means one task runs more often than another.
  • Changing priorities will fix race conditions between tasks.

Here are some other viewpoints on the subject.

(https://www.codinghorror.com/blog/2006/08/thread-priorities-are-evil.html)

(https://stackoverflow.com/questions/95649/when-should-i-consider-changing-thread-priority)

The general consensus seems to be that arbitrarily adjusting a task’s priority is ‘bad’. However, there’s not a lot of useful, concrete information on when, or why, you should adjust a task’s priority.

Task priority should be thought of as a measure of the ‘determinism of latency of response’ for that task. That is, the higher the priority of a task (relative to its peers) the more predictable (deterministic) its latency of response is. 
To understand this let’s consider an example system with a number of tasks, each with a different priority. All of the tasks may pend on some shared (protected) resource or event. 
In scenario 1, only the highest priority task is available to run. When it pends on the resource/event it gets it ‘immediately’ – that is, with the minimum possible latency (there will always be some processing overhead).
In scenario 2, only the lowest priority task is available to run. When it pends on the resource/event it also gains access with the smallest possible delay. In other words, in this scenario its latency of response is exactly the same as the highest priority task! 
However, in scenario 3, things change. In this scenario all our tasks are available to run and all attempt to access/respond to the resource/event. In this case, the highest priority task (by definition) gets access first. Its latency of response is the same (give or take) as when no other tasks are running. That is, it has the most predictable latency (it’s almost constant).
However, the lowest priority task must wait until all other pending tasks have finished. Its latency is: minimum + task1 processing time + task2 processing time + task3 processing time +… 
So, for the low priority task the latency is anywhere from the minimum up to some (possibly unpredictable) maximum. In fact, if we’re not careful, our highest priority task may be ready to access again before the lowest priority task has even had its first access – so-called ‘task starvation’.

[Figure: A task’s priority will affect its worst-case latency – the higher the priority the more predictable the latency becomes.]
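As a rough illustration of scenario 3, here is a minimal sketch assuming a FreeRTOS-style kernel. The task names, stack sizes, priority values and the 10 ms ‘processing time’ are all illustrative assumptions, not a definitive design:

```c
/* Scenario 3 sketch: three tasks at different priorities, all pending on the
 * same shared (protected) resource.  Names, priorities and timings are
 * illustrative only. */
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t resource;      /* the shared, protected resource */

static void worker(void *params)
{
    (void)params;
    for (;;)
    {
        /* All three tasks block here.  Tasks waiting on the resource are held
         * in priority order, so when it is released the highest-priority
         * waiter gets it next - its latency is near-constant.  The lowest-
         * priority waiter must wait for everyone above it, and if the
         * high-priority task turns around quickly enough it may never get in
         * at all (task starvation). */
        xSemaphoreTake(resource, portMAX_DELAY);
        vTaskDelay(pdMS_TO_TICKS(10));  /* simulate this task's processing time */
        xSemaphoreGive(resource);
    }
}

int main(void)
{
    resource = xSemaphoreCreateMutex();

    xTaskCreate(worker, "high",   configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 3, NULL);
    xTaskCreate(worker, "medium", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 2, NULL);
    xTaskCreate(worker, "low",    configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);

    vTaskStartScheduler();              /* never returns if the kernel starts */
    for (;;) { }
}
```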


If all your tasks run at the same priority you effectively have no priorities at all. Most pre-emptive kernels fall back on schemes such as time-slicing between equal-priority tasks to ensure every task gets a ‘fair share’.

So, why might I want to adjust my tasks’ priorities? Let’s take a common embedded system example: a pipe-and-filter processing ‘chain’.

The basic premise has a task pending on input events/signals from the environment. These are passed through a chain of filter tasks, via buffer ‘pipes’. The pipes are there to cope with the differences in processing speed between the filter tasks and with the (quite likely) ‘bursty’ nature of event arrival.

In a system with fast-arriving, or very transient, events we may wish to increase the priority of the front-end of the filter chain to avoid losing events.

Increasing the priority of the back-end of the filter chain favours throughput over event detection.

In each case the pipes must be sized to accommodate the amount of data being stored between filters. Ideally, we want to avoid the buffers becoming flooded (in which case the filter chain runs at the speed of the slowest filter).

[Figure: Adjusting task priorities to achieve system performance requirements]
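To make this concrete, here is a minimal sketch of a two-stage filter chain, again assuming a FreeRTOS-style kernel. The queue length, priority values, the Event type, and the waitForInputEvent()/processEvent() hooks are all hypothetical assumptions for illustration:

```c
/* Two-stage pipe-and-filter sketch: a higher-priority front-end captures
 * events and a lower-priority back-end drains the 'pipe' (a queue). */
#include <stdbool.h>
#include <stdint.h>

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

typedef struct { uint32_t raw; } Event;

/* Hypothetical environment/driver hooks - not part of any real API. */
extern bool waitForInputEvent(Event *out);
extern void processEvent(const Event *ev);

static QueueHandle_t pipe1;             /* front-end -> back-end 'pipe' */

/* Front-end filter: pends on incoming events and pushes them into the pipe.
 * Giving this task the higher priority reduces the chance of losing
 * fast-arriving or transient events. */
static void frontEndFilter(void *params)
{
    (void)params;
    Event ev;
    for (;;)
    {
        if (waitForInputEvent(&ev))
        {
            xQueueSend(pipe1, &ev, 0);  /* don't block; drop if the pipe floods */
        }
    }
}

/* Back-end filter: drains the pipe at its own rate.  Raising its priority
 * instead would favour throughput over event capture. */
static void backEndFilter(void *params)
{
    (void)params;
    Event ev;
    for (;;)
    {
        xQueueReceive(pipe1, &ev, portMAX_DELAY);
        processEvent(&ev);
    }
}

int main(void)
{
    /* Size the pipe for the expected burst of events between the two filters. */
    pipe1 = xQueueCreate(32, sizeof(Event));

    xTaskCreate(frontEndFilter, "front", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 2, NULL);
    xTaskCreate(backEndFilter,  "back",  configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL);

    vTaskStartScheduler();
    for (;;) { }
}
```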

However, all is not necessarily rosy. Your carefully-tuned system can be disrupted by introducing code with its own tasks (either explicitly, or through third-party or library code you have no control over).

[Figure: Introducing new code (explicitly or implicitly) can disrupt system performance]

The introduction of (in this case) another medium-priority task may slew the latency predictability of our original medium-priority task. For example, what happens if the new task runs for a significant period of time? It cannot be pre-empted by our filter task. If we are unlucky (and we so often are!) this can cause our system to stop meeting its performance requirements – even though there is no change in the original code! (I’ll leave it as an exercise for the reader to consider the impact of this for reusable code…)
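As a hedged sketch of how this can happen, imagine a library initialisation routine that quietly creates its own task at the same priority as our medium-priority filter task. Here thirdPartyInit() and doExpensiveHousekeeping() are hypothetical names, and the priority value is assumed to match the filter task in the sketch above:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical long-running, CPU-bound work - not a real API. */
extern void doExpensiveHousekeeping(void);

static void libraryHousekeeping(void *params)
{
    (void)params;
    for (;;)
    {
        /* While this runs, our equal-priority filter task cannot pre-empt it;
         * at best the two time-slice with each other.  The filter's latency
         * (and hence the behaviour of the whole chain) is slewed, even though
         * none of the original code has changed. */
        doExpensiveHousekeeping();
    }
}

void thirdPartyInit(void)
{
    /* Same numeric priority as our existing medium-priority filter task. */
    xTaskCreate(libraryHousekeeping, "lib", configMINIMAL_STACK_SIZE, NULL,
                tskIDLE_PRIORITY + 2, NULL);
}
```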

Finally, a huge caveat for multi-processor systems: priority only has meaning if the number of tasks exceeds the number of processors. Consider the extreme case where each task has its own processor. Each task, being the only task waiting to execute on its processor, will execute all the time. It is therefore, in effect, always the highest-priority task on that processor.

If your design assigns multiple tasks to multiple processors then you must appreciate (and account for) the fact that priorities only have meaning on each individual processor. Priority is no longer a system-wide determinant.

Glennan Carnie

Glennan is an embedded systems and software engineer with over 20 years’ experience, mostly in high-integrity systems for the defence and aerospace industry. He specialises in C++, UML, software modelling, Systems Engineering and process development.