A fresh look at kernel ticks, part 2: Frequency

January 28, 2015

Read part 1: The tick handler is not the scheduler

Many RTOS kernels offer developers lots of flexibility in setting up the kernels’ periodic tick interrupts. Unfortunately, this flexibility sometimes leads to confusion. One configurable aspect of the ticks that seems to be at the root of many questions is frequency. I’ll attempt to dispel a common myth relating to frequency and explain the basic tradeoffs involved in establishing a system’s tick rate.

Perhaps due to restrictions that exist in other kernels or on specific hardware platforms, there seems to be a widely held belief that µC/OS-II and µC/OS-III limit the range of tick frequencies available to application code. The kernels themselves, however, should be able to support any tick frequency that’s viable on a given MCU. I’ve seen applications running with tick rates well below 100 Hz and, at the other end of the spectrum, well over 1 kHz.

If the kernels don’t have any special influence over a system’s tick frequency, what factors should you consider when setting this parameter? Aside from the limitations imposed by the peripheral that will produce the ticks, your main concerns should be overhead and resolution. With a relatively high frequency, you’ll be able to establish delays in smaller increments than would otherwise be possible, but you’ll pay for this ability with increased overhead in the form of CPU time spent processing the ticks. A lower frequency would lead to reduced tick-processing time but would also, of course, limit the resolution of your system’s delays. In a system with ticks occurring once every 10 ms, for example, the kernel would not be able to provide a delay as small as 1 ms.
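To make the resolution point concrete, consider the arithmetic behind converting a requested delay into ticks. The short C sketch below uses a hypothetical MS_TO_TICKS() macro and an assumed 100 Hz tick rate; with 10 ms ticks, a requested 1 ms delay truncates to zero ticks, so the smallest usable delay is one full tick period.

```c
/* Illustrative only: a hypothetical helper for converting milliseconds to
   kernel ticks, assuming a 100 Hz (10 ms) tick rate.                        */
#define TICK_RATE_HZ     100u
#define MS_TO_TICKS(ms)  (((ms) * TICK_RATE_HZ) / 1000u)

/* MS_TO_TICKS(50) == 5  -> a 50 ms delay maps cleanly onto 5 ticks.         */
/* MS_TO_TICKS(1)  == 0  -> a 1 ms delay cannot be expressed; the smallest   */
/*                          non-zero delay is one tick, i.e. 10 ms.          */
```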

To strike the right balance between overhead and resolution, you’ll need to consider your hardware platform’s capabilities and your application’s timing needs. Using µC/OS-II or µC/OS-III as an example, on a 32-bit processor running at 300 MHz, the overhead required for either kernel to process 1,000 ticks per second would likely not exceed 1 percent of the CPU’s cycles. However, a 16-bit MCU with a 24 MHz clock could be a different story. Likewise, an application that only uses time delays to poll for button presses probably wouldn’t experience any issues with a 50 ms tick resolution, but such a setting could be unacceptable for a task with tighter deadlines.
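The numbers behind that 1 percent figure are easy to check. The snippet below works through the arithmetic with an assumed cost of 3,000 cycles per tick; that figure is purely illustrative, and the only reliable number is one measured on your own target.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed figures for illustration; measure the tick ISR on your own target. */
    const double cpu_hz          = 300.0e6;   /* 300 MHz, 32-bit CPU               */
    const double tick_rate_hz    = 1000.0;    /* 1,000 ticks per second            */
    const double cycles_per_tick = 3000.0;    /* hypothetical cost of one tick     */

    double overhead_pct = (cycles_per_tick * tick_rate_hz) / cpu_hz * 100.0;
    printf("Tick overhead: %.2f%% of the CPU\n", overhead_pct);   /* 1.00%         */
    return 0;
}
```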

In regard to this final point, it’s important to note that ticks might not be the best solution for all the delays in your system. If, for example, you want to read from an A/D converter every 500 µs, then the best approach would likely be to make your converter interrupt-driven and to trigger the conversions with a timer (one not associated with the tick interrupt). In other words, the tick-based functions are intended to be used for gross delays – as might be required by, for example, a status task responsible for outputting a message approximately every 10 ms – and you should turn to dedicated, hardware timers when more-accurate delays are required. I’ll provide further details relating to this topic in Part 3, where I explain another sometimes confusing aspect of kernel ticks: priority.
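To illustrate the dedicated-timer approach, here is a minimal sketch in which a timer ISR signals an A/D task through a semaphore instead of relying on tick-based delays. It follows µC/OS-III conventions as I’ve described them, but BSP_TimerClearInt(), BSP_ADC_Read(), and the 500 µs timer configuration itself are hypothetical board-support placeholders, not part of the kernel.

```c
#include  "os.h"                                 /* uC/OS-III kernel services                */

void  BSP_TimerClearInt (void);                  /* Hypothetical BSP routines (see above)    */
void  BSP_ADC_Read      (void);

static  OS_SEM  AdcSem;                          /* Signaled once per 500 us timer period    */

/* ISR for a dedicated hardware timer (not the tick timer) configured by the
   BSP to fire every 500 us.                                                  */
void  App_TimerISR (void)
{
    OS_ERR  err;


    OSIntEnter();
    BSP_TimerClearInt();                         /* Acknowledge the timer interrupt          */
    OSSemPost(&AdcSem, OS_OPT_POST_1, &err);     /* Wake the A/D task                        */
    OSIntExit();
}

/* Task that services the converter at the timer's rate, independent of the
   kernel's tick frequency.                                                   */
void  App_AdcTask (void *p_arg)
{
    OS_ERR  err;


    (void)p_arg;
    OSSemCreate(&AdcSem, "ADC Sem", 0u, &err);

    while (1) {
        OSSemPend(&AdcSem, 0u, OS_OPT_PEND_BLOCKING, (CPU_TS *)0, &err);
        if (err == OS_ERR_NONE) {
            BSP_ADC_Read();                      /* Trigger/read the conversion              */
        }
    }
}
```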

Matt Gordon is a senior applications engineer at Micrium. He began his career developing device drivers, kernel ports, and example applications for Micrium’s customers. Drawing on this experience, Matt has written multiple articles on embedded software. He also heads Micrium’s training program and regularly teaches classes on real-time kernels and other embedded software components. Matt holds a bachelor’s degree in computer engineering from the Georgia Institute of Technology.

Matt Gordon, Micrium