Machine learning starts with the algorithms
December 05, 2017
Throwing more processing power at the problem might work, but that’s probably not a path you want to take.
There are lots of different ways to look at machine learning, the ability of a computing device to make decisions based on actions and conditions. Some look at it from the very starting point: the software and algorithms that run on the hardware to make the whole process work.
Some areas currently taking advantage of machine learning include big data, such as SEO and other analytics. There’s also a lot of talk (with less action so far) in the industrial-automation space around predictive maintenance. For example, systems placed at the Edge learn what normal behavior looks like, then monitor performance and raise a flag if something abnormal is observed, as sketched below.
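As a rough illustration of that edge pattern (not any particular vendor’s method), the Python sketch below learns a statistical baseline from normal readings and flags anything that drifts too far from it. The sensor values and the 3-sigma threshold are illustrative assumptions.

```python
import numpy as np

# Training: learn what "normal" looks like from a window of healthy readings.
baseline = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2])  # hypothetical sensor data
mean, std = baseline.mean(), baseline.std()

def is_abnormal(reading, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the baseline."""
    return abs(reading - mean) / std > threshold

# Monitoring: raise a flag when behavior deviates from the learned baseline.
for reading in (10.1, 10.4, 14.7):
    if is_abnormal(reading):
        print(f"abnormal reading: {reading}")
```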
It’s fair to say that the real key to accurate and useful machine learning is assembling the right combination of algorithms, compilers, and hardware architecture. If any one of those three components is wrong, machine learning won’t work as it should. For example, if you don’t start with an algorithm that can be parallelized, you won’t get very far. Similarly, if your hardware doesn’t support the parallelism that such intense computations demand, that’s a non-starter. And the compiler, which sits between the two, must provide the right bridge.
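To make the parallelization point concrete, here is a minimal sketch contrasting a computation that parallelizes naturally with one that cannot, no matter how capable the hardware is. The matrix sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1024, 1024))
B = rng.standard_normal((1024, 1024))

# Parallelizable: every output element C[i, j] is an independent dot product,
# so the work maps cleanly onto vector units, multiple cores, or a GPU.
C = A @ B

# Hard to parallelize: each iteration depends on the previous result, a
# loop-carried dependency that blocks straightforward parallelization
# regardless of how many processing elements the hardware offers.
x = np.zeros(1024)
for i in range(1, 1024):
    x[i] = 0.5 * x[i - 1] + A[i, 0]
```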
A lot of the do’s and don’ts are still being worked out, as machine learning can be an inexact science, and lots of people are trying to develop tools that address these issues. The real-time nature of most machine-learning applications only compounds the difficulty.
According to Randy Allen, Director of Advanced Research for Mentor Graphics, “Machine learning problems are going to boil down to a matrix multiplication. This consists of two phases, training and using. In training, you generate a sequence of large matrix multiplications that are continuously repeated.”
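A hedged sketch of what Allen describes: below, training a single-layer model is literally a loop of repeated matrix multiplications (forward and backward passes), and “using” the model is one more multiply. The model, data, and learning rate are made-up illustrations, not Mentor’s formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))       # hypothetical training inputs
Y = rng.standard_normal((256, 8))        # hypothetical training targets
W = 0.01 * rng.standard_normal((64, 8))  # weights to be learned

lr = 0.01
for step in range(1000):                 # training: the same matmuls, repeated
    pred = X @ W                         # forward pass is a matrix multiplication
    grad = X.T @ (pred - Y) / len(X)     # backward pass is another one
    W -= lr * grad                       # gradient-descent update

pred_new = X @ W                         # "using": a single forward multiply
```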
That’s why the combination of the three aspects outlined earlier is so significant. If there’s even a slight error somewhere in the sequence, it gets magnified with each repetition into a large error, which is unacceptable in machine-learning applications.
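A toy demonstration of that magnification effect: the sketch below injects a tiny per-element error into a matrix and applies it repeatedly, as a long chain of matrix operations would. The sizes and perturbation are arbitrary assumptions; the point is only that the final discrepancy dwarfs the initial one.

```python
import numpy as np

rng = np.random.default_rng(1)
# An orthogonal matrix preserves vector norms, so any drift we observe
# comes from the injected perturbation, not from the matrix itself.
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
Q_noisy = Q + 1e-6 * rng.standard_normal((64, 64))  # slight per-element error

x0 = rng.standard_normal(64)
x_exact, x_noisy = x0, x0.copy()
for _ in range(1000):                    # a long sequence of repeated multiplies
    x_exact = Q @ x_exact
    x_noisy = Q_noisy @ x_noisy

drift = np.linalg.norm(x_noisy - x_exact) / np.linalg.norm(x_exact)
print(f"relative error after 1000 steps: {drift:.3e}")  # far larger than 1e-6
```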
To ensure that information is returned in real time, your choices may be to reduce the required precision or to increase the amount of processing power thrown at the problem. In general, neither option is a good one.
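The reduced-precision option is easy to demonstrate. The sketch below (an illustration, not a recommendation) runs the same multiplication in half precision and measures how far the result drifts from a double-precision reference; the matrix sizes are arbitrary. The other option, more processing power, is the one the article argues against.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((512, 512)).astype(np.float32)
B = rng.standard_normal((512, 512)).astype(np.float32)

# Double-precision result serves as the reference.
ref = A.astype(np.float64) @ B.astype(np.float64)

# Half precision cuts memory traffic and runs faster on many accelerators,
# but the result drifts measurably from the reference.
half = (A.astype(np.float16) @ B.astype(np.float16)).astype(np.float64)

rel_err = np.abs(half - ref).max() / np.abs(ref).max()
print(f"max relative error at float16: {rel_err:.2e}")
```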
Going forward, we’ll see more application-specific, rather than general-purpose, models. Vision is a good example, where the hardware-software combination can be tuned to handle vision algorithms. We’ll also see changes in which computations are handled at the Edge rather than in the Cloud.
“It’s always the software that’s the big issue here,” says Allen. “Lots of people are coming up with hardware that takes lots of different approaches. That hardware is only useful if the programmer can get at it. And that’s where the compilers and algorithms come in. If you don’t have the right set of tools to go and utilize it, it doesn’t matter how good the hardware is.”
Mentor’s formula is to optimize performance at the Edge so you can work in a non-cloud environment. This can be achieved with what it calls “data-driven hardware.” And that doesn’t mean simply throwing more processing power at the problem.
Note that Mentor will be hosting a webinar on how to optimize machine-learning applications for parallel hardware. The company also provides a fair amount of information on the topic on its site.
Allen adds, “We use an entirely different set of algorithms to optimize machine learning. And that’s not something the hardware guys typically consider when they’re developing an interface to the software. That’s where we can assist.”