
AI at the Edge - Can I run a neural network in a resource-constrained device?

Stephen Martin, March 11, 2019

Hello Related Communities,

This is my first time blogging since joining Stephane in November. He and I attended Embedded World together, and he asked me to write about some of the important trends as they relate to all of you. I expect to write more posts in the near future, but I want to start with the biggest trend in the embedded space: all of the activity around artificial intelligence (AI) at the edge.

This trend caught me a bit by surprise. I have been doing a lot of reading about AI over the last 18 months, and most of what I read led me to believe that AI required large amounts of processing power and massive datasets for training. The machines doing that training use Graphics Processing Units (GPUs) or other specialized hardware to support deep learning and neural networks.


What I learned at Embedded World is that AI is possible at the edge, even using Arm Cortex-M4 and M7 microcontrollers.

Consider this post to be an introduction to the topic of AI at the edge, and more specifically the resource-constrained edge.

First, some basic definitions:

AI. Artificial intelligence is the big bucket of all things related to computers that can learn, adapt and make decisions based on data analysis. Fundamentally, when we talk about AI today, it is not about Skynet (Terminator reference) but about machines that can be trained using known datasets to make predictions about new data. Some common examples are object recognition (is the object a cat?), text recognition, handwriting recognition, and speech recognition.

Machine learning. This subset of the AI world refers to the process of teaching machines to accurately interpret data so that they can make decisions.

Neural networks. These are the layered models a computing device uses to analyze, sort, and classify incoming data.

Inference. Currently, resource-constrained edge devices are limited to inference: making predictions about incoming data using a model that was already trained. The training itself is still done on the desktop or in the cloud.
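
To make that split concrete, here is a minimal sketch (the function name and layer shape are mine, purely for illustration) of what inference boils down to for a single fully connected layer. The weights are constants produced by training offline; at run time the microcontroller just executes multiply-accumulate loops over them.

    #include <stddef.h>

    /* Inference for one fully connected layer with a ReLU activation.
     * The weights and biases were learned offline; on the device they
     * are read-only data, and inference is plain arithmetic. */
    void dense_relu(const float *weights, /* num_out x num_in, row-major */
                    const float *bias,    /* num_out entries */
                    const float *input,   /* num_in entries */
                    float *output,        /* num_out entries */
                    size_t num_in, size_t num_out)
    {
        for (size_t o = 0; o < num_out; ++o) {
            float acc = bias[o];
            for (size_t i = 0; i < num_in; ++i) {
                acc += weights[o * num_in + i] * input[i];
            }
            output[o] = (acc > 0.0f) ? acc : 0.0f; /* ReLU */
        }
    }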

Why AI at the edge?

Much of the AI we see today, as consumers, makes use of the cloud. Think about Amazon Alexa or the Google Assistant. These devices listen locally for the wake word; all of the voice recognition after that point relies on a network connection to cloud servers.

There are a number of reasons to move processing from the cloud out to the edge, including power consumption, latency, and data security. As edge devices become more capable, more and more processing can be handled locally, and that now includes AI functionality.

What is possible at the resource-constrained edge?

From what I saw at Embedded World, it is possible to do basic object detection, text recognition, and handwriting recognition on a Cortex-M microcontroller. Here are some of the vendors that demonstrated AI solutions for the edge at Embedded World.

  • STMicroelectronics showcased STM32Cube.AI, an extension pack for the widely used STM32CubeMX configuration and code-generation tool that enables AI on STM32 Arm Cortex-M-based microcontrollers. They claim you can use this tool to run multiple neural networks on a single STM32 MCU.
  • Arm has developed Arm NN, which lets you take models developed in TensorFlow, Caffe, and other frameworks and run an efficient neural network on Arm devices; on Cortex-M parts the optimized kernels come from Arm's CMSIS-NN library (see the sketch after this list).
  • Amazon offers AWS Greengrass, which extends AWS to edge devices so they can act locally on the data they generate while still using the cloud for management, analytics, and durable storage.
  • BeagleBoard.org announced BeagleBone® AI, a new board developed specifically for AI, to be offered at about $100. Note: this is not a microcontroller but a more powerful MPU board, powered by the Texas Instruments AM5729 SoC, whose C66x DSP cores and embedded-vision-engine (EVE) cores are supported through the TI Deep Learning (TIDL) machine-learning OpenCL API.
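
For a flavor of what these toolchains lean on under the hood, here is a hedged sketch of one quantized layer using the CMSIS-NN kernels Arm provides for Cortex-M. The kernel names (arm_fully_connected_q7, arm_relu_q7) come from the older q7 fixed-point CMSIS-NN API; the layer dimensions and the bias/output shift values are illustrative assumptions that would really be determined by how the model was quantized.

    #include "arm_nnfunctions.h" /* CMSIS-NN optimized kernels for Cortex-M */

    /* Illustrative sizes; a real network's dimensions come from the trained model. */
    #define IN_DIM  64
    #define OUT_DIM 10

    static const q7_t weights[OUT_DIM * IN_DIM]; /* quantized, exported by the offline tools */
    static const q7_t biases[OUT_DIM];           /* quantized, exported by the offline tools */

    void classify(const q7_t input[IN_DIM], q7_t scores[OUT_DIM])
    {
        static q15_t scratch[IN_DIM]; /* working buffer required by the kernel */
        /* Fully connected layer in q7 fixed point; the shift amounts are
         * assumptions that depend on the model's quantization. */
        arm_fully_connected_q7(input, weights, IN_DIM, OUT_DIM,
                               2 /* bias_shift */, 7 /* out_shift */,
                               biases, scores, scratch);
        arm_relu_q7(scores, OUT_DIM); /* in-place ReLU activation */
    }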

Where do I start?

For a good introduction to what is possible with a resource-constrained edge device, check out Jacob Beningo's talk that we live-streamed from Embedded World. Jacob does a great job of introducing the topic of AI at the edge and then shares specific AI examples he ran in his lab, along with data on the processing requirements for these examples.


Are you interested in more posts on AI at the edge?

We want to hear from you. Was this post helpful? Do you want more blog posts on this topic from experts in the field? Are you someone who has experience developing AI for edge devices and would like to share it with our community? Please use the comments section below to respond or reach out to Stephane.



Comment by smitjs, August 17, 2019

Thank you. My field of interest is image recognition, specifically animal species.

More blog posts on this topic from experts in the field, please.

Best Regards

Johan Smit

smitjs3808@gmail.com

Comment by seanffbnn, September 3, 2019

You might like to try this type of neural network:

https://github.com/S6Regen/Fixed-Filter-Bank-Neural-Networks

The cost per layer is only O(n log n), much lower than the O(n²) cost of a conventional artificial neural network layer.
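
For readers curious where the O(n log n) comes from: networks of this kind typically replace each dense weight matrix with a fast, fixed transform plus a small number of learned per-element parameters. A minimal sketch of one such transform (the Walsh-Hadamard transform; the function name is mine) makes the cost visible: log2(n) passes over n values, instead of n*n multiply-accumulates.

    #include <stddef.h>

    /* In-place fast Walsh-Hadamard transform; n must be a power of two.
     * Each of the log2(n) passes does n/2 add/subtract butterflies,
     * giving O(n log n) work per layer. */
    void fwht(float *x, size_t n)
    {
        for (size_t span = 1; span < n; span <<= 1) {
            for (size_t i = 0; i < n; i += span << 1) {
                for (size_t j = i; j < i + span; ++j) {
                    float a = x[j];
                    float b = x[j + span];
                    x[j]        = a + b;
                    x[j + span] = a - b;
                }
            }
        }
    }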

 
