Posted by Andrey Batutin, Senior iOS Developer at DataArt
Today everyone is doing machine learning, including our phones. Yes, soon your phone may become truly smart, or at least smarter than you and me. How? Let's figure it out, starting with the hardware already installed on mobile devices.
All major manufacturers of mobile System-on-Chip (SoC) solutions have been actively adding ML-specific hardware for the past three to four years. If you have a high-end Android phone or iPhone, then next to the GPU you will almost certainly find a chip tailored specifically for ML tasks.
Most ML chips are designed for computer vision, audio, and camera/photo/video processing. The main tasks: reducing photo noise, improving quality when zooming, recognizing faces and animals, speech recognition, and speech synthesis.
In addition, the major vendors provide SDKs that give third-party developers access to these ML chips. So you can build your own Snapchat or Siri; after all, you are probably sitting at home anyway (When will this all end?! I want to go to KFC!).
Major mobile chip vendors are actively extending their SoC architectures with Neural Processing Units (NPU), Digital Signal Processors (DSP), and specialized AI cores. These components are tailored specifically for running ML models.
Qualcomm
ML Hardware: DSP + GPU
SDK: Qualcomm Neural Processing SDK
The Qualcomm Neural Processing SDK provides hardware acceleration of ML models across the DSP + GPU + CPU combination on Snapdragon chips. The DSP is tuned for audio/video work: smart camera features, cleaning noise from the picture, better quality when zooming, and similar tools for improving sound quality.
HiSilicon / Huawei
ML Hardware: NPU
SDK: HiAI SDK
The Da Vinci NPU consists of three cores: two high-performance cores and one energy-efficient core for ML computing.
The HiAI SDK gives access to hardware-accelerated matrix operations on the NPU, which makes the NPU ideal for deep neural network models. A nice bonus is the Android Studio plug-in.
HiAI is tuned for:
- Computer Vision;
- Automatic Speech Recognition;
- Natural Language Understanding.
MediaTek
ML Hardware: APU + GPU
SDK: NeuroPilot SDK
The NeuroPilot SDK lets you run ML tasks on the AI Processing Unit (APU) + GPU. The APU is tailored for deep neural network models and provides hardware acceleration for convolutions, fully connected layers, activation functions, and so on.
NeuroPilot 2.0 supports in real time:
- Multi-person pose tracking;
- 3D pose tracking;
- Multiple object identification;
- Semantic segmentation;
- Image enhancement.
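To make it concrete, here is a toy pure-Python sketch of the kind of work such an APU accelerates: a 1D convolution followed by a ReLU activation. On real hardware this runs over large tensors in parallel; the example is purely illustrative and has nothing to do with any vendor API.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in most ML frameworks)."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * k for j, k in enumerate(kernel)) for i in range(n)]

def relu(xs):
    """Element-wise ReLU activation: negatives are clipped to zero."""
    return [max(0.0, x) for x in xs]

# A tiny "feature extraction" step of the kind an APU accelerates in bulk.
features = relu(conv1d([1.0, -2.0, 3.0, -4.0, 5.0], [0.5, -0.5]))
print(features)  # [1.5, 0.0, 3.5, 0.0]
```

Stacking many such convolution + activation layers is exactly the workload that dedicated AI hardware is built to speed up.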
Samsung
ML Hardware: NPU
SDK: Samsung Neural SDK / EDEN SDK
Samsung has added a specialized NPU to its Exynos SoCs. It consists of two multiply-accumulate (MAC) units tuned for matrix operations.
Samsung also provides the Samsung Neural SDK, which delivers hardware acceleration for ML models using a combination of CPU + GPU + NPU.
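A MAC unit's whole job is the primitive operation acc ← acc + x·w, which is the inner loop of every matrix multiplication. A toy pure-Python sketch of how matrix math reduces to chained MACs (illustrative only; real NPUs run thousands of these in parallel):

```python
def mac(acc, x, w):
    """One multiply-accumulate step: the primitive a MAC unit implements."""
    return acc + x * w

def matvec(matrix, vector):
    """Matrix-vector product expressed as nothing but chained MAC operations."""
    result = []
    for row in matrix:
        acc = 0.0
        for x, w in zip(vector, row):
            acc = mac(acc, x, w)  # this line is what the NPU does in silicon
        result.append(acc)
    return result

print(matvec([[1.0, 2.0], [3.0, 4.0]], [5.0, 6.0]))  # [17.0, 39.0]
```

Since neural network inference is dominated by exactly such products, packing many MAC units on-chip is the most direct way to accelerate it.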
Apple
ML Hardware: NPU (Neural Engine)
SDK: Core ML
Starting with the A11 Bionic, Apple's chips include an NPU, the Neural Engine. In the A13 it has grown to eight cores. One of the killer features enabled by the new NPU is Deep Fusion image processing: the camera takes nine shots and combines them into one. This is especially relevant for night shooting.
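Apple's actual Deep Fusion pipeline is proprietary, but the basic reason multi-frame capture helps is simple: averaging several noisy exposures of the same scene suppresses random sensor noise. A toy pure-Python sketch of that idea (illustrative only, not Apple's algorithm):

```python
def fuse_frames(frames):
    """Average several aligned exposures pixel-by-pixel.

    Random noise partially cancels across frames, so the fused image is
    cleaner than any single shot. Real pipelines also align frames and
    merge per-region detail; this is just the averaging core.
    """
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# Three noisy 4-pixel "frames" of the same scene (true values: 10, 20, 30, 40).
frames = [
    [11.0, 19.0, 31.0, 40.0],
    [10.0, 21.0, 29.0, 41.0],
    [9.0, 20.0, 30.0, 39.0],
]
print(fuse_frames(frames))  # [10.0, 20.0, 30.0, 40.0]
```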
Apple provides the Core ML SDK. Like the other SDKs, it delivers hardware acceleration for ML models. A nice bonus is Create ML, a UI-only environment for training ML models. It can train:
- Object detection / classification;
- Sound classification;
- Motion classification;
- Text classification / word tagging;
- Tabular classification;
- Recommendation engines.
In the next part, we will take a closer look at how an NPU works, and talk about the software needed for mobile ML and how to use it.