Choosing a processor is a multifaceted process
February 01, 2013
Selecting an embedded processor used to be a pretty straightforward task. Of course, this was back in “the old days,” when the focus was on a limited set of functions, user interface and connectivity didn’t matter too much, and power consumption wasn’t such an overarching issue. In today’s realm of converged processing, where a single device can perform control, signal processing, and application-level tasks, there’s a lot more to consider (Figure 1). While there are too many aspects of the processor selection process to detail here, let’s examine some of the more prominent areas that system designers must consider.
Processor performance
System designers reflexively note the processing speed of a device as a major indicator of its performance. This is not a bad start, but it’s an incomplete assessment. It is important to evaluate not only the number of instructions a processor can perform each second, but also the number of operations accomplished in each core clock cycle and the efficiency of its computation units. And it is no longer uncommon to employ processors with multiple cores as a way of greatly extending the computational capabilities of the device (especially in the case of homogeneous cores) or of clearly demarcating the control processing from the signal processing activity (often with heterogeneous cores).
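To see why clock rate alone can mislead, consider a back-of-the-envelope comparison in C. The two devices and their figures below are invented purely for illustration and do not describe any real processor:

```c
#include <stdio.h>

/* Hypothetical figures for illustration only -- not any real device. */
struct proc {
    const char *name;
    double clock_mhz;       /* core clock frequency                    */
    int    cores;           /* number of cores                         */
    int    macs_per_cycle;  /* multiply-accumulates per cycle per core */
};

int main(void)
{
    struct proc list[] = {
        { "Processor A", 600.0, 1, 2 },  /* faster clock               */
        { "Processor B", 400.0, 2, 4 },  /* slower clock, wider, dual  */
    };

    for (int i = 0; i < 2; i++) {
        double gmacs = list[i].clock_mhz * 1e6 *
                       list[i].cores * list[i].macs_per_cycle / 1e9;
        printf("%s: %.0f MHz x %d core(s) x %d MAC/cycle = %.1f GMAC/s\n",
               list[i].name, list[i].clock_mhz, list[i].cores,
               list[i].macs_per_cycle, gmacs);
    }
    return 0;
}
```

Here the part with the slower clock delivers more peak multiply-accumulates per second because it does more work per cycle on more cores – exactly why operations per cycle and core count belong in the assessment alongside clock speed.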
Hardware acceleration
Of course, it’s not just about the processor core(s). For well-specified functionality, a hardware accelerator is almost always the most power-efficient way to perform the function it was designed to accelerate. What can make the difference is how easily the accelerator fits into a software algorithm. For full-algorithm accelerators, such as an H.264 encoder, this is usually not an issue because the block is substantially self-contained. For kernel-type accelerators like an FFT, however, it can be more challenging to use the accelerator within a larger algorithm. Take a close look at how the hardware function performs and how it needs to be configured.
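What that integration effort looks like in practice can be sketched in C. The driver calls below (fft_accel_configure(), fft_accel_start(), fft_accel_wait()) are hypothetical stand-ins, stubbed out so the fragment compiles; a real accelerator driver will look different, but the configure/start/wait pattern around each call is typical:

```c
#include <complex.h>
#include <stddef.h>

/* Hypothetical driver interface for a kernel-type FFT accelerator.
 * The stubs stand in for real register/DMA programming so this compiles. */
static int fft_accel_configure(size_t points, int inverse)
{ (void)points; (void)inverse; return 0; }   /* set size and direction  */
static int fft_accel_start(const float complex *in, float complex *out)
{ (void)in; (void)out; return 0; }           /* kick off the engine/DMA */
static int fft_accel_wait(void)
{ return 0; }                                /* block until completion  */

/* One stage of a larger algorithm: hand a block to the accelerator,
 * then continue processing the spectrum on the core. The setup and
 * synchronization around each call determine how "friendly" it is. */
static int spectral_stage(const float complex *block,
                          float complex *spectrum, size_t n)
{
    if (fft_accel_configure(n, 0) != 0)      /* forward FFT, n points */
        return -1;
    if (fft_accel_start(block, spectrum) != 0)
        return -1;
    /* The core could overlap other work here instead of blocking. */
    return fft_accel_wait();
}

int main(void)
{
    float complex in[256] = { 0 }, out[256];
    return spectral_stage(in, out, 256);
}
```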
Bandwidth requirements
Bandwidth estimation is a process that’s easy to oversimplify, sometimes with unfortunate results. All individual data flows in the system must be summed (with directionality and time window taken into account) to ensure that the core is capable of completing its data processing within the allotted window, and that the various processor buses are not overloaded, leading to data corruption or system failure. For example, for a video decoder, designers need to first account for reading the data that needs to be decoded. Then, it is necessary to incorporate the many data passes required to create the decoded frame sequence. This may involve multiple buffer transfers between internal and external memories. Finally, designers must account for the streaming of the display buffer to the output device.
After all data flows are considered, the overall system budget needs to be constructed. This budget is influenced by several factors, including DRAM access patterns (and resulting performance degradations), internal bus arbitration, memory latencies, and so on.
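A rough version of that budget can be put together in a few lines of C. The figures below – a 1280 x 720, 30 frames/s decode and a 400 MB/s usable bus budget – are illustrative assumptions, not measurements of any particular decoder or memory system:

```c
#include <stdio.h>

/* Illustrative bandwidth budget for a hypothetical 1280x720, 30 frame/s
 * video decoder. All numbers are round figures for the example only. */
int main(void)
{
    const double frame_bytes = 1280.0 * 720.0 * 1.5;  /* YCbCr 4:2:0    */
    const double fps         = 30.0;
    const double mb          = 1024.0 * 1024.0;

    double bitstream_rd = 10e6 / 8.0;               /* ~10 Mbit/s input  */
    double reference_rd = 1.5 * frame_bytes * fps;  /* motion comp reads */
    double recon_wr     = frame_bytes * fps;        /* decoded frames    */
    double display_rd   = frame_bytes * fps;        /* display scan-out  */

    double total  = bitstream_rd + reference_rd + recon_wr + display_rd;
    double budget = 400.0 * mb;  /* usable DRAM bandwidth after arbitration
                                    and access-pattern losses (assumed)   */

    printf("Total demand: %6.1f MB/s\n", total / mb);
    printf("Bus budget  : %6.1f MB/s\n", budget / mb);
    printf("Headroom    : %6.1f %%\n", 100.0 * (budget - total) / budget);
    return 0;
}
```

Even in this simplified form, the exercise makes clear that the decoder’s internal passes, not the compressed input, dominate the traffic – and that the budget must be set against usable bandwidth, not the DRAM datasheet maximum.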
Power management
The ability to throttle power consumption to a level commensurate with temporal operating requirements is crucial to preserving battery life, as well as overall energy costs in mains-powered systems. Processors can offer a wide range of options for optimizing an application’s power profile. One such feature is dynamic power management – the ability to adjust core frequency and operating voltage to meet a certain performance level. Another is the availability of multiple power modes that turn off various unneeded resources, including memories and peripherals, during certain time intervals. System wakeup (through general-purpose I/O, a real-time clock, or another stimulus) is an integral part of this power mode control. Yet another degree of flexibility in power management is the presence of multiple voltage domains for core, I/O, and memories, allowing different system components to operate at lower voltages when practical.
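From the application’s point of view, dynamic power management usually reduces to a handful of calls like the hypothetical ones sketched below; set_core_voltage_mv(), set_core_frequency_mhz(), and enter_sleep_mode() are invented names, and the stubs just log so the sketch runs:

```c
#include <stdio.h>

/* Hypothetical hooks standing in for a processor's real power API.
 * The stubs only print, so the sketch compiles and runs anywhere. */
static void set_core_voltage_mv(int mv)   { printf("VDDcore -> %d mV\n", mv); }
static void set_core_frequency_mhz(int f) { printf("CCLK    -> %d MHz\n", f); }
static void enter_sleep_mode(void)        { printf("sleep until wakeup IRQ\n"); }

/* Scale voltage and frequency together. Ordering matters: raise the
 * voltage before the frequency, and lower it only afterwards. */
static void set_performance_level(int mhz, int mv, int raising)
{
    if (raising) {
        set_core_voltage_mv(mv);
        set_core_frequency_mhz(mhz);
    } else {
        set_core_frequency_mhz(mhz);
        set_core_voltage_mv(mv);
    }
}

int main(void)
{
    set_performance_level(600, 1200, 1);  /* burst: full speed          */
    /* ... compute-intensive work runs here ... */
    set_performance_level(150, 950, 0);   /* background: quarter speed  */
    enter_sleep_mode();                   /* wake on GPIO, RTC, etc.    */
    return 0;
}
```

The ordering in the sketch reflects a general rule of dynamic voltage and frequency scaling: the core should never be clocked faster than its current supply voltage supports.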
Security needs
Over the past several years, processor security has become increasingly important. Whether or not a security scheme is a baseline requirement of a system, it is essential to view the security question from multiple vantage points before deciding on a final direction. Security needs usually take the form of platform protection, IP security, or data security – or some combination of the three.
Platform protection is needed to ensure that only authenticated code is run in the application. In other words, must “rogue code” be actively prevented from running? By “rogue code,” we refer to a program that tries to access protected information on the processor, or to “hijack” the processor and gain control of the larger system. Platform protection can be implemented with a variety of techniques, and there are always trade-offs to consider in the selection. As with any trade-off, there is a cost implication as the protection level increases. Another important consideration is the ease of use of the overall security scheme, both in development and in production.
The ability to authenticate code is also critical to securing IP and data. IP security requires a way either to encrypt the code image brought into the processor for execution, or to store this IP inside the processor in embedded flash or an internal ROM that is inaccessible through external mechanisms. Some form of data security is required to ensure that data enters and exits the system without being compromised. In some cases, especially in lower-end microcontrollers, security may be handled entirely with embedded flash, but on higher-end processors, where the application is loaded in through a boot loader, the scheme may be more complex.
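The code-authentication piece of platform and IP protection generally follows the pattern sketched below. This is a generic outline, not any vendor’s scheme; the hash and signature primitives are placeholders that always succeed so the example compiles and runs, where a real boot ROM would use the processor’s own (often hardware-assisted) crypto engines and protected key storage:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Placeholder primitives so the sketch builds; real implementations
 * would come from the processor's hash and public-key hardware. */
static int sha256(const uint8_t *data, size_t len, uint8_t digest[32])
{
    (void)data; (void)len;
    memset(digest, 0, 32);
    return 0;
}

static int verify_signature(const uint8_t digest[32], const uint8_t *sig,
                            size_t sig_len, const uint8_t *public_key)
{
    (void)digest; (void)sig; (void)sig_len; (void)public_key;
    return 0;
}

/* Root public key held in internal ROM/OTP so it cannot be replaced. */
static const uint8_t boot_public_key[64] = { 0 };

/* Only a correctly signed image may run -- this is what keeps "rogue
 * code" off the platform and lets a loaded image be trusted. */
static int authenticate_image(const uint8_t *image, size_t image_len,
                              const uint8_t *sig, size_t sig_len)
{
    uint8_t digest[32];

    if (sha256(image, image_len, digest) != 0)
        return -1;
    return verify_signature(digest, sig, sig_len, boot_public_key);
}

int main(void)
{
    uint8_t image[256] = { 0 }, sig[64] = { 0 };
    int ok = authenticate_image(image, sizeof image, sig, sizeof sig);
    printf("image %s\n", ok == 0 ? "authenticated" : "rejected");
    return 0;
}
```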
Safety and fault tolerance
There are many applications where safety is clearly a main concern – for example, an automotive driver assistance system or a closed-loop power control system. However, designers of other, less obvious applications are now starting to care more about increasing levels of operational robustness. This is especially true as processors are built in smaller silicon geometries, such as 40 nm or 28 nm, where soft errors in memory can impact operation because of naturally occurring events, including alpha particles and other natural radiation. During the processor selection process, it’s important to examine how a processor handles these types of errors, as well as how it responds to unexpected events in general. What steps can it take when an error occurs? How does it signal to other system components that something has gone wrong?
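What “handling these types of errors” can look like is sketched below. The ECC status bits and the notification hook are hypothetical stand-ins for whatever a given processor’s memory controller and system-level error signaling actually provide:

```c
#include <inttypes.h>
#include <stdio.h>

/* Hypothetical ECC status flags and notification hook -- stand-ins for
 * a real memory controller's registers and a real system interface
 * (error pin, bus message, watchdog, and so on). */
#define ECC_ERR_CORRECTED   (1u << 0)  /* single-bit error, fixed in place */
#define ECC_ERR_UNCORRECTED (1u << 1)  /* multi-bit error, data is suspect */

static void notify_system(const char *msg) { printf("NOTIFY: %s\n", msg); }

/* What a memory-error interrupt handler might decide to do. */
static void ecc_error_handler(uint32_t status, uint32_t fault_addr)
{
    if (status & ECC_ERR_CORRECTED) {
        /* Log and scrub; the application can keep running. */
        printf("corrected single-bit error at 0x%08" PRIx32 "\n", fault_addr);
    }
    if (status & ECC_ERR_UNCORRECTED) {
        /* Data integrity is lost: tell the rest of the system and move
         * to a safe state rather than silently continuing. */
        notify_system("uncorrectable memory error -- entering safe state");
    }
}

int main(void)
{
    ecc_error_handler(ECC_ERR_CORRECTED,   0x20000000u);
    ecc_error_handler(ECC_ERR_UNCORRECTED, 0x20000004u);
    return 0;
}
```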
Debugging capabilities
As applications become more complex, so does the development process. Shortcuts that worked in the past might not work when the number of processor and application subcomponents has grown exponentially. Consider the system-level debug of a large software-based system that uses an operating system or real-time kernel. Do the processor and its tool chain have a way to examine the processor state without impacting the application? Is it possible to profile and trace where the processor has been, or to trap on all events of interest? All these questions, and many more, should be answered before becoming comfortable with the level of debugging available.
System cost
At times, system designers focus on the processor price tag instead of the overall system design cost. It is imperative to take into account not only the device cost itself, but also the cost of the supporting circuitry required – level translators, interface chips, glue logic, and so on. Also, package options play a vital role: One processor’s package might allow a four-layer board design, while another’s may necessitate an expensive six- or eight-layer board because of routing challenges. Finally, don’t overlook the value of extra processing headroom that can allow for future expandability without causing an expensive processor change or board spin.
Signal chain
One final note: Processor selection should occur in tandem with a study of a system’s signal chain requirements. Does the processor vendor also sell peripherals that connect to the processor? It is often advantageous to buy multiple system components from the same vendor – for interoperability, customer support, and overall pricing benefits.
Ready to choose a processor?
As mentioned, there are many other facets to consider during the processor selection phase, but the considerations described here should provide a good basis for embarking on this crucial process. Vendors such as Analog Devices offer a wide range of processors and other components that meet the described selection criteria.
Analog Devices, Inc. www.analog.com/processors