Developing for autonomous driving is hard. Really hard.
Lots of people are talking about autonomous driving, but the number of people actually developing systems around it is relatively small. There are many reasons for this; some have to do with liability issues and some with the super-long design cycles. But frankly, the biggest reason many shy away from developing the technology is that it’s really hard to do.
Autonomous driving is defined in terms of levels. They range from Level 0, where there’s no automation, to Level 5, which is full automation. Today, at the high end, production systems sit around Level 2 or 3, with things like Tesla’s Autopilot capabilities and the automated parallel parking that’s becoming more common.
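For reference, these levels come from the SAE J3016 standard. Here’s a minimal sketch of the taxonomy in Python; the helper function is just illustrative:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Driving-automation levels as defined by SAE J3016."""
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # steering OR speed assist (e.g., adaptive cruise)
    PARTIAL_AUTOMATION = 2      # steering AND speed assist; driver must monitor
    CONDITIONAL_AUTOMATION = 3  # system drives; driver must take over on request
    HIGH_AUTOMATION = 4         # no driver fallback needed within a defined domain
    FULL_AUTOMATION = 5         # no driver fallback needed anywhere

def driver_must_supervise(level: SAELevel) -> bool:
    """At Level 2 and below, the human driver monitors at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```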
Some say it starts with the sensors. That group includes Jada Tapley, Delphi’s Vice President of Advanced Engineering. “But,” she says, “if you don’t have the right architecture in place to enable those sensors and support all the additional software and computing power that’s necessary for a Level 4 or 5 application, then it’s just fundamentally not going to work.”
Think of a car the way you think of a human. Once we’re in the car, its sensors do the job of our eyes and ears. Those sensors could be cameras, radar, light detection and ranging (LIDAR), and so on. They gather data about what’s happening around the vehicle. The “nervous system” takes that data and sends it to the “brain,” which makes decisions based on it.
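To make the analogy concrete, here’s a minimal sketch of that sense-carry-decide flow. All of the names, thresholds, and readings are hypothetical, not any vendor’s actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str       # "camera", "radar", "lidar", ...
    timestamp: float  # seconds since boot
    payload: dict     # raw detection data, e.g. {"obstacle_distance_m": 8.2}

def nervous_system(readings: list[SensorReading]) -> list[SensorReading]:
    """Carry sensor data to the brain, dropping stale frames along the way."""
    latest = max(r.timestamp for r in readings)
    return [r for r in readings if latest - r.timestamp < 0.1]  # keep last 100 ms

def brain(readings: list[SensorReading]) -> str:
    """Fuse the inputs and make a (trivially simplified) driving decision."""
    if any(r.payload.get("obstacle_distance_m", float("inf")) < 10 for r in readings):
        return "brake"
    return "maintain_speed"
```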
What makes up the brain? Like the evolution of man, it’s gotten way smarter than it was in the past. It’s now at supercomputer level, running millions of lines of code and making decisions in real time.
Every subsystem within the car is generating data, and we have to decide which data is important, what needs to be sent to the car’s central brain, what needs to go up to the Cloud, and what should be discarded. That’s where Edge computing comes into play, especially as we try to minimize what goes to the Cloud. This is critical for two reasons. First, it’s expensive to send data to the Cloud, which, in the car, has to be done over cellular. Second, there’s a time delay to send information to the Cloud, have it processed, and get the result back. Hence, anything that requires a real-time response needs to be processed locally.
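As a toy illustration of that triage, consider the sketch below. The message types and the 100-ms round-trip figure are assumptions for the example, not measured numbers. The underlying arithmetic is what matters: a car at highway speed (roughly 30 m/s) travels about 3 m during a 100-ms round trip, which rules out the Cloud for control decisions.

```python
def route(message_type: str, latency_budget_ms: float) -> str:
    """Decide where a piece of vehicle data gets processed."""
    CLOUD_ROUND_TRIP_MS = 100  # assumed typical cellular round trip

    if latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return "process_on_vehicle"  # e.g., braking: no time for a round trip
    if message_type in ("traffic_report", "map_update", "diagnostics"):
        return "send_to_cloud"       # useful later, not urgent
    return "discard"                 # not worth the cellular cost

# Obstacle handling must act within ~50 ms -> stays on the vehicle.
assert route("obstacle_detected", latency_budget_ms=50) == "process_on_vehicle"
# Traffic data can tolerate seconds of delay -> goes to the Cloud.
assert route("traffic_report", latency_budget_ms=5000) == "send_to_cloud"
```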
There are also benefits that most people are less aware of. For example, knowing real-time traffic patterns could let a city change its traffic-light timing on the fly, and that same data could make the roads more efficient for emergency vehicles.
What is the right pipe to handle data going to the Cloud? Most developers agree that it should be 5G. Current 4G is an option, but it can be cost-prohibitive. WiFi could be an option, but you’re limited to certain areas. The best answer is probably something that isn’t here yet. Whatever the pipe, though, step one is identifying the right data to transmit in the first place.
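One way to picture the trade-off is a simple uplink chooser. This is a hypothetical sketch with made-up thresholds, not real tariffs or coverage logic:

```python
def pick_uplink(wifi_available: bool, five_g_available: bool,
                bytes_pending: int) -> str:
    """Choose the cheapest adequate pipe for non-urgent uploads."""
    if wifi_available:
        return "wifi"          # effectively free, but only in limited areas
    if five_g_available:
        return "5g"            # high bandwidth, if the coverage materializes
    if bytes_pending < 1_000_000:
        return "4g"            # workable for small payloads despite the cost
    return "defer_upload"      # hold large uploads until a cheaper pipe appears
```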
There are a lot of expectations around 5G. The bar is being set quite high, and it’ll be fantastic if 5G can achieve everything that its pundits claim it will. But it’s wait-and-see right now.
According to Tapley, “We need a supercomputer to handle Level 4 and 5 applications. We’re already seeing a shift towards centralized intelligence put into domains in the vehicle, like safety, the cockpit, and propulsion. The car is transforming into something that’s more like today’s conventional computer, where you pick the applications that run on it. It’s got a set amount of processing power and memory. Then we can leverage that to run various applications based on what consumers want.”
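In that “car as a conventional computer” model, domains and consumer-chosen applications draw against one fixed pool of compute and memory. A hypothetical sketch, with budget numbers invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class CentralComputer:
    """A fixed resource pool, like a conventional computer."""
    cpu_cores: int = 12
    memory_gb: int = 32
    allocations: dict[str, tuple[int, int]] = field(default_factory=dict)

    def install(self, app: str, cores: int, gb: int) -> bool:
        """Admit an application only if the remaining budget covers it."""
        used_cores = sum(c for c, _ in self.allocations.values())
        used_gb = sum(g for _, g in self.allocations.values())
        if used_cores + cores > self.cpu_cores or used_gb + gb > self.memory_gb:
            return False  # over budget: the app doesn't get installed
        self.allocations[app] = (cores, gb)
        return True

car = CentralComputer()
car.install("safety_domain", cores=4, gb=8)      # core domains come first
car.install("cockpit_domain", cores=2, gb=4)
car.install("propulsion_domain", cores=2, gb=4)
car.install("parking_assistant", cores=2, gb=4)  # a consumer-chosen application
```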
As we sit on the eve of CES 2018 and a host of announcements, it’ll be interesting to see how many vendors attempt to tackle the difficult autonomous-driving problem.