When to Use Machine Learning or Deep Learning?

By Seth DeLand

Product Marketing Manager

MathWorks

October 15, 2019

Understanding which AI technologies to use to advance a project can be challenging given the rapid growth and evolution of the science. This article outlines the differences between machine learning and deep learning, and how to determine when to apply each one. 

Definitions: Machine Learning vs. Deep Learning

In both machine learning and deep learning, engineers use software tools, such as MATLAB, to enable computers to identify trends and characteristics in data by learning from an example data set. In the case of machine learning, training data is used to build a model that the computer can use to classify test data, and ultimately real-world data. Traditionally, an important step in this workflow is the development of features – additional metrics derived from the raw data – which help the model be more accurate.

Deep learning is a subset of machine learning, where engineers and scientists skip the manual step of creating features. Instead, the data are fed into the deep learning algorithm and it automatically learns what features are most useful to determine the output.

  • Machine Learning: A branch of artificial intelligence where engineers and scientists manually select features within the data and train the model. Common machine learning algorithms include decision trees, support vector machines, neural networks, and ensemble methods.
  • Deep Learning: A branch of machine learning, modeled loosely on the neural pathways of the human brain, in which the algorithm automatically learns which features are useful. Common deep learning algorithms include convolutional neural networks (CNNs), recurrent neural networks, and deep Q-networks.

Project Profile

Machine learning is typically used for projects that involve predicting an output or uncovering trends. In these examples, a limited body of data is used to help the machines learn patterns that they can later use to make a correct determination on new input data. Common algorithms used in machine learning include linear regression, decision trees, support vector machines (SVMs), naïve Bayes, discriminant analysis, neural networks and ensemble methods.
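As a minimal sketch of this workflow (in Python with scikit-learn for illustration, rather than the MATLAB tools mentioned in this article; the dataset is a stand-in), a table of hand-chosen features is used to train a classifier, which is then evaluated on held-out test data:

```python
# Illustrative sketch: training a decision tree classifier on a small
# tabular dataset, then scoring it on held-out test data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)           # 150 rows x 4 hand-chosen features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)                 # learn patterns from training data
accuracy = model.score(X_test, y_test)      # evaluate on the test split
```

The same train/validate/test pattern applies whatever the chosen algorithm (SVM, naïve Bayes, ensemble, etc.).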

Deep learning is more complex and is typically used for projects that involve classifying images, identifying objects in images, and enhancing images and signals. In these instances, a deep neural network can be applied, as they are designed to automatically extract features from spatially- and temporally-organized data such as images and signals. Common algorithms used in deep learning include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and reinforcement learning (deep Q networks).

Machine learning algorithms may be more desirable if you need quicker results. They are faster to train and require less computational power. The number of features and observations will be the key factors that affect training time. Engineers applying machine learning should expect to spend a majority of their time developing and evaluating features to improve model accuracy.
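Because so much of that time goes into evaluating features, candidate feature sets are often compared by cross-validated accuracy. A hedged Python sketch (the dataset and the particular feature split are illustrative, not from the article):

```python
# Hypothetical sketch: comparing two candidate feature subsets by
# cross-validated accuracy to decide which engineered features to keep.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Subset A uses only the first two columns; subset B uses all four.
score_a = cross_val_score(
    LogisticRegression(max_iter=1000), X[:, :2], y, cv=5).mean()
score_b = cross_val_score(
    LogisticRegression(max_iter=1000), X, y, cv=5).mean()
# Keep whichever feature set yields the higher cross-validated score.
```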

Deep learning models will take time to train. Pretrained networks and public datasets can shorten training through transfer learning, but sometimes these can be complicated to implement. In general, deep learning algorithms can take anywhere from a minute to a few weeks to train depending on your hardware and computing power. Engineers applying deep learning should expect to spend a majority of their time training models and making modifications to the architecture of their deep neural network.

Considerations for Choosing Machine Learning vs. Deep Learning

Data Considerations

Understanding the available dataset can help determine whether machine learning or deep learning should be applied for a given task.

Generally, machine learning is used when the available data are more limited and structured. Most machine learning algorithms are designed to train models on tabular data (organized into independent rows and columns). If the data are non-tabular, machine learning can still be applied, but it requires some data manipulation – e.g., sensor data can be converted into a tabular representation by extracting windowed features using common statistical metrics (mean, median, standard deviation, skewness, kurtosis, etc.), and then used with traditional machine learning techniques.
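The windowed-feature conversion described above can be sketched as follows (the signal is synthetic and the window length is an arbitrary choice for illustration):

```python
# Hypothetical sketch: turning a raw 1-D sensor signal into a tabular
# feature matrix by computing statistics over fixed-length windows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)          # stand-in for raw sensor data

window = 100
windows = signal.reshape(-1, window)        # 10 non-overlapping windows

# One row per window, one column per statistical feature.
features = np.column_stack([
    windows.mean(axis=1),
    np.median(windows, axis=1),
    windows.std(axis=1),
    stats.skew(windows, axis=1),
    stats.kurtosis(windows, axis=1),
])
# features is now a 10 x 5 table, ready for a traditional ML model.
```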

Deep learning typically requires a large quantity of training data to ensure that the network, which may well have tens of millions of parameters, does not overfit the training data. Convolutional neural networks are designed to operate on image data, although they can also be used on sensor data by applying a time-frequency calculation, such as a spectrogram, to the signal. Recurrent neural networks, such as LSTM (long short-term memory) networks, are designed to operate on sequential data such as signals and text.
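As an illustrative sketch of the time-frequency step mentioned above (a spectrogram computed with SciPy; the sample rate and test tone are assumptions):

```python
# Illustrative sketch: a spectrogram turns a 1-D signal into a 2-D
# array that a CNN can treat like an image.
import numpy as np
from scipy.signal import spectrogram

fs = 1000                                   # assumed sample rate in Hz
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)              # 50 Hz test tone

f, times, Sxx = spectrogram(x, fs=fs, nperseg=256)
# Sxx has shape (frequency bins, time steps): image-like input for a CNN.
peak_freq = f[Sxx.mean(axis=1).argmax()]    # strongest frequency bin
```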

Available Hardware and Deployment

Determining which AI approach should be applied is also contingent on available hardware. 

Machine learning algorithms require less computational power; desktop CPUs, for example, are typically sufficient for training these models.

For deep learning models, specialized hardware is typically required due to the higher memory and compute requirements. Specialized hardware is also appropriate because the operations performed within a deep neural network, such as convolutions, lend themselves well to the parallel architecture of the GPU.

Deep learning models take significant computing power. They should be considered if GPUs are available, or if there is time to run training on a CPU (which will take significantly longer).

Training deep learning models on clusters or in the cloud has gained popularity due to the high cost of obtaining GPUs. This option lets the hardware be shared by several researchers.

Deployment to embedded GPUs has also gained popularity, as it can provide fast inference speed in the deployed environment. GPU Coder enables code generation from deep learning models in MATLAB that leverages optimized libraries from Intel, NVIDIA and Arm. With GPU Coder Support Package for NVIDIA GPUs, you can cross-compile and deploy the generated CUDA code as a standalone application on an embedded GPU.

Guidelines for an Evolving Science 

While there will always be trial and error, the above can help guide decision making and accelerate the overall design process for engineers and scientists new to machine learning and deep learning. By understanding the differences between machine learning and deep learning, knowing the end application of their project and factoring in data and hardware availability, design teams will gain faster insight into which approach fits best for their respective projects.
