AI, neural networks and classic machine learning algorithms have been around for decades, but the technology leapfrogged during the past three years.
( Source: public domain / Pexels )


Is AI ready for use in IoT, automotive and industry?

Author / Editor: Sebastian Gerstl / Jochen Schwab

Over the past three years, the practical applicability of artificial intelligence has improved by leaps and bounds. This development is driven particularly by its rapid spread in the IoT, industry and automotive sectors. Edge AI technologies are especially interesting for vision-based and voice-based purposes and the sensor-based detection of anomalies.

For more than six decades, AI was a concept of little interest to anyone but mathematicians – until, suddenly, the public took note. At least to some extent, the initial indifference was due to the purely theoretical nature of possible AI applications, which were perceived to sit firmly in the realm of science fiction. Before any use cases for AI could become reality, three conditions had to be met:

  • Good modeling of the properties and function of biological information processing
  • Very large, real data sets
  • The ability to process those vast data sets reasonably quickly

To many of us, this may still sound like science fiction. But artificial intelligence as we know it today is real, and it can even have a lasting, positive effect on our daily lives. While the cloud is considered the center of all AI technology, artificial intelligence is increasingly spreading into the periphery (or ‘edge’) of networks and connecting with the physical world, a concept known as edge AI. Experts like to compare AI with a biological map of the human brain, although there is still a long way to go before the comparison is accurate. But thanks to highly advanced training and learning methods, AI is, in some ways, ahead of the human brain. It has virtually unlimited storage capacity, for instance.

The edge AI paradigm is further driven by the production of nearly boundless volumes of data and the increasing availability of computing resources, even at the microcontroller level. Edge AI has the capacity to reduce or even eliminate latencies in the transmission of data to the cloud while allaying privacy concerns. Highly developed computing resources with various types of AI acceleration system play an important role in this. They should not be taken for granted. Ultimately, however, the key that will unlock the full potential of artificial intelligence is software. Provided sufficient computing resources are available, optimized algorithms, software tools and frameworks can support the technologies in question and facilitate the practical application of AI.

Software – the key to edge artificial intelligence

If edge AI is to be applied on a large scale and across a wide range of fields, it needs to be abstracted beyond its mathematical foundation. Cloud services that train models and build and deploy inference engines through a user-friendly web interface can provide this simplification, so that developers no longer need to craft complex mathematical algorithms themselves. There are plenty of such "programming" functions already, and there will be many more. The programming behind artificial intelligence, often referred to as "software 2.0", is not based on conventional programming methods, however. It relies on neural networks and traditional ML libraries such as Google TensorFlow. Software 2.0 is about setting and optimizing parameters and their weights, for example by training models.
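As an illustration of this "software 2.0" style, the minimal sketch below uses TensorFlow's Keras API to define and train a tiny network on placeholder data. The data, shapes and hyperparameters are invented purely for illustration; the point is that the developer specifies the architecture and the training setup, and the decision logic itself emerges from the learned weights.

```python
# A minimal "software 2.0" sketch: instead of hand-coding the decision rules,
# we define a small network and let training set its parameters (weights).
# Data, shapes and hyperparameters are placeholders for illustration only.
import numpy as np
import tensorflow as tf

# Toy sensor data: 1,000 samples with 16 features and a binary anomaly label.
x_train = np.random.rand(1000, 16).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# "Programming" here means choosing the optimizer, loss and training regime,
# not writing the classification logic by hand.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)
```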

The ever-growing volume of available functions, most of which are open source, suggests that edge AI is on the rise. At the same time, the spread of software technologies for edge AI advances at an unstoppable pace. This is reflected at various levels: model frameworks, inference engines, neural-network optimization, conversion tools and data augmentation technologies (for training purposes) are just some of them.
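To pick out just one of those levels, data augmentation can be sketched in a few lines with TensorFlow's tf.image utilities. The transformations below and the assumed tf.data pipeline are illustrative only; which augmentations make sense depends entirely on the sensor and the use case.

```python
import tensorflow as tf

def augment(image, label):
    # Simple augmentations that enlarge the training set without new recordings.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return image, label

# 'dataset' is assumed to be a tf.data.Dataset of (image, label) pairs:
# augmented = dataset.map(augment).shuffle(1024).batch(32)
```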

Model formats

When it comes to popularity and functionality, TensorFlow is ahead of the game and has become an industry standard. But its little brother, TensorFlow Lite, is slowly gaining an edge over it, especially in the mobile and edge fields. There are even tools that convert TensorFlow models into the TensorFlow Lite format. This should not be done carelessly, however: the Lite version does not support the full set of operations, which can cause malfunctions in some neural-network architectures.
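A minimal conversion sketch, assuming a trained tf.keras model called model (such as the one trained above), might look like this. Restricting the converter to the built-in TFLite operator set makes unsupported operations fail at conversion time rather than surfacing as malfunctions on the device.

```python
import tensorflow as tf

# 'model' is assumed to be a trained tf.keras model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Fail early if the model uses operations outside the TFLite builtin set,
# instead of discovering the gap only on the target device.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```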

Other converters support nearly all other frameworks, such as MXNet, PyTorch, Caffe2, Keras and so on. They allow users to switch to their preferred formats, e.g. from TensorFlow to ONNX (Open Neural Network Exchange) or NNEF (Neural Network Exchange Format). Both are industry standards intended to reduce fragmentation.
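As a hedged example of such a switch, the snippet below exports a placeholder PyTorch model to the ONNX format with torch.onnx.export; the model, tensor shapes and file names are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# A placeholder network; in practice this would be the trained model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# A dummy input fixes the tensor shapes recorded in the exported graph.
dummy_input = torch.randn(1, 16)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
```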

Image 1: Example of a toolkit for using AI in edge devices.
( Source: NXP )

Depending on the intended application, many open-source options are available for inference engines. For users working with ARM-based platforms, whether mobile or embedded, ARM NN parses neural-network models from common frameworks and translates them into an efficient inference engine, drawing on the ARM Compute Library (also open source) for optimized software functions (see Image 2).

ARM NN offers three ways of deploying neural-network models. Firstly, the model can be handed to ARM NN from a high-level framework such as TensorFlow: the software first parses the model into a graph format and maps the network operations that can be executed through the ARM Compute Library (ACL). Secondly, users can connect to an existing inference engine and pull in suitable library functions from the ACL as needed. Finally, the application can access the ACL directly, although this requires a little more effort on the part of the user.
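As a rough sketch of the second route, handing work from an existing inference engine over to ARM NN, the TensorFlow Lite Python API can load ARM NN's external delegate. The delegate library name, its options and the model file below are assumptions that depend on how ARM NN is built and deployed on the target.

```python
import numpy as np
import tensorflow as tf

# Hypothetical path and options for the ARM NN TensorFlow Lite delegate;
# both depend on the ARM NN build available on the target platform.
armnn_delegate = tf.lite.experimental.load_delegate(
    "libarmnnDelegate.so",
    options={"backends": "CpuAcc", "logging-severity": "info"},
)

interpreter = tf.lite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[armnn_delegate],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run a single inference on a dummy input of the expected shape.
interpreter.set_tensor(input_details[0]["index"],
                       np.zeros(input_details[0]["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
```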

Aspects of edge AI hardware

When designing edge AI hardware, there are three factors to take into consideration: cost, accuracy of decisions, and inference time. System developers working with an embedded design inevitably need to keep cost in mind. Accuracy and inference time are mutually dependent when designing edge AI. Greater accuracy of the "decision" made by the system entails a longer training period, as larger data volumes are required for the training. Systems that require a high level of accuracy often also need more complex AI models. This increases the overall cost: it takes more powerful devices, more storage space and more energy. Inference time, which directly shapes the user experience, is the time it takes the system to make a decision. The faster the decision is needed, the more computing power the system needs.
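The inference-time side of this trade-off is easy to measure. The sketch below times repeated invocations of a TensorFlow Lite interpreter on a dummy input, assuming a converted model file named model.tflite; averaging over several runs smooths out warm-up and scheduling noise.

```python
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
dummy = np.zeros(input_details[0]["shape"], dtype=np.float32)

runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()
elapsed_ms = (time.perf_counter() - start) / runs * 1000

# Compare against the latency budget of the application (e.g. 200-500 ms).
print(f"average inference time: {elapsed_ms:.1f} ms")
```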

Image 2: ARM NN offers three methods of realizing a trained neural network
( Source: ARM )

AI developers therefore need to strike a balance between cost, accuracy and inference time. Example: for an application that detects pets entering the house, an inference time of 200–500 ms is acceptable and easily achieved with a powerful microcontroller. Doorbell security systems, which have become fairly widespread, need similar inference times to those of a microwave. They feature a camera that captures the faces of approaching individuals; the edge AI system recognizes the person in under a second and classifies them as friend, foe or unknown. Accuracy is very important here: nobody wants their AI to lock them out of their own house or let a "foe" inside.

Greater accuracy and a larger number of classes on which a decision can be made (e.g. certain foods, faces etc.) not only increase the need for storage space, they also require a more powerful CPU to perform enough computing operations within an acceptable inference time. Security doorbells are a good example of products available in a range of qualities, from low-end to high-end. The difference lies in the number of faces the device is able to recognize within an acceptable time frame.

Certain life-or-death contexts, such as autonomous driving, place much higher demands on edge AI systems. They require a large number of precise decisions, made simultaneously, every second. Monitoring a driver's eyes is another example of an edge AI application; it is a real-time function that demands extremely powerful computing from the CPU. Depending on the application, then, edge AI systems have very different performance requirements that can be met with different CPUs, from MCUs to high-end application processors. Ultimately, however, software remains the key to machine learning at the network edge.

More and more options – and this is just the beginning

The greatest problem for the use of AI in edge devices is not complexity. It is the increasing number of functions added on a daily basis. We will always be dependent on mathematicians to oversee the complex functions of neural networks, such as the efficient optimization of highly advanced neural networks or the creation of better, faster training methods for AI models. When it comes to developing edge AI products, embedded-system developers need software 2.0 tools to create a comprehensive ML development environment (e.g. NXP eIQ). To ensure successful implementation, such environments must be customized not just in terms of their computing units (e.g. processor cores, AI accelerators) but at the level of the SoC architecture.

Exciting times are ahead – and more and more developers are recognizing the enormous value that machine learning can add to their products.

* Markus Levy is Director of Enabling Technologies at NXP.