Embedded Software - How Complex Can it Get?

December 30, 2020


Hardly a day goes by without someone saying, “This is like science fiction!” Typically, they are talking about cell phones, GPS, tablets, cars with keyless ignition…the list goes on. Only this morning, I was using Apple Pay to get my breakfast and the server smiled and said, “Love technology …”.

These are all embedded systems, or the close relatives thereof, and they are very complex.

This complexity is very interesting and makes people wonder, “Where will it all end?” Even the short term can be hard to predict. For example, I have an iPad Air 3, which I think is wonderful. How can Apple usefully improve it? I have no real idea what they can do next, but I bet they will surprise me in due course.

Historically, embedded systems were very simple: 8-bit CPUs with just a few K of memory. Although such simple systems are still developed, many more resource-rich devices are now in use, with one or more 32-bit processors and many megabytes of memory. The enormous power of these devices results in software of ever-increasing size and complexity. But what are the limits to this complexity?

If we look at mechanical systems, there is scope for a lot of complexity. The most complex machine yet created was the Space Shuttle orbiter, which had a million moving parts. Considering how much of a design challenge the vehicles were, they worked remarkably well. I guess there is no intrinsic reason why a more complex machine could not be created.

In my lifetime, electronic systems have steadily become more complex, and smaller. In the 1950s, a complex electronic device might have 100 (discrete) transistors - it might even have employed vacuum tubes. Fast forward to today and a few billion transistors on a chip is not uncommon. Every year, chip geometries shrink to fewer and fewer nanometers. But there is a theoretical limit: I do not believe that a transistor (or a circuit element of any kind) could be smaller than a single atom. But what do I know?

Software complexity has grown drastically over the last 50-60 years, mirroring hardware progress. I guess that a bit is the smallest “unit” of software and, measured this way, software complexity has left mechanical design way behind and, I suppose, is one or two orders of magnitude ahead of hardware. However, I cannot see any specific limits to the theoretical complexity/size of software. Making big memory chips is easy enough, so we can just make code bigger and bigger.

Of course, design is the bottleneck. Hardware design is very challenging and requires sophisticated electronic design automation (EDA). Software development is just that bit harder. But it takes more than tools.

In almost all aspects of life (embedded software included), there are essentially three ways to address a bigger challenge:

1. Work harder (i.e., more manpower)
2. Work longer
3. Work smarter

Sometimes (1) and (2) may be interchangeable. On a building construction project, for example, more labor may speed up the job. But software development exhibits rapidly diminishing returns if more personnel are simply assigned to the job. This is largely because of the interrelationships between different parts of the code and the consequent need for developers to communicate. They end up spending more time communicating than coding. To some extent, the intelligent expansion of a development team can yield benefits. This entails identifying specific expertise requirements and assigning staff accordingly. This is particularly pertinent to embedded development, where expertise domains might be: application-level code, driver development, OS configuration, networking, UI design, etc.

Working smarter sounds like a company’s tagline (and I seem to recall it was one, some years ago), but what I mean is giving developers the ability to create and debug more functionality in a given time period. Broadly, there are two (not unrelated) approaches:

  • code at a higher level of abstraction
  • reuse existing code (and make reusable code)

A higher level of abstraction means moving away from conventional languages, like C, and embracing other paradigms. UML is one possibility. Reusable code is largely the domain of object-oriented programming (OOP) techniques. Reusable objects can be created, and OOP enables the encapsulation of expertise - again, particularly useful for embedded development.
