June 15, 2018

Machine Learning Meets Edge Computing with AWS Greengrass

Author: Tyler Keenan / Publisher: Business2Community

In previous posts, we've looked at how edge computing can move processing, storage, and networking resources closer to the places they'll be used and away from the massive centralized data centers that power the cloud. This works fine for many (or even the vast majority of) cloud applications. But what if you need more than just processing power?

One of the limitations of edge computing frameworks up to now has been how to handle computationally intensive operations (the kind required for machine learning and AI, for example) on locally connected devices that might be far from the data center. Recently, though, Amazon Web Services (AWS) updated its Greengrass software to support local ML inference.

Quick Machine Learning Refresher

We've taken a look at machine learning before, but we'll go over a couple of points again. The first is how machine learning models are trained. Training is a resource-intensive operation that requires many hours, massive computing power, and large data sets. That's why it's usually performed in the cloud or across large distributed computing systems running Spark or Hadoop.
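As a rough, non-Greengrass-specific sketch of that first point, here's what a minimal training job looks like in Python. The dataset, model choice, and file name are placeholders standing in for a much larger real-world pipeline:

```python
# Minimal, hypothetical training sketch -- in practice this step runs in the
# cloud on far larger data sets and heavier models than shown here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import joblib

# Stand-in for the large, labeled training set a real pipeline would use.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)  # the expensive step: cost scales with data and model size

# Persist the trained model so it can be shipped to edge devices later.
joblib.dump(model, "model.joblib")
```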

The second is how those models are used once they've been trained. While running a trained ML model still requires processing power, it's typically far cheaper than training, which makes it practical for operations that need real-time inferences based on data the model hasn't seen before.
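To make the contrast concrete, here's a hedged companion to the training sketch above: loading the saved artifact and scoring a single unseen observation. The file name and input shape carry over from that hypothetical example:

```python
# Hypothetical inference counterpart: loading the trained model and scoring
# new data is comparatively cheap and fast.
import joblib
import numpy as np

model = joblib.load("model.joblib")  # trained artifact produced in the cloud

# A single new observation the model has never seen (placeholder values).
sample = np.random.rand(1, 20)
prediction = model.predict(sample)
print(f"inference result: {prediction[0]}")
```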

What sets AWS Greengrass apart is that it separates training the ML model from ML inference. The former remains in the cloud, where it can take advantage of scalable resources, while the latter is handled locally on connected devices. These devices only need intermittent cloud connectivity to fetch new versions of the model. The rest of the time, they derive inferences locally from whatever data they encounter, passing that data along the next time connectivity is available.
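In code, that division of labor looks roughly like the sketch below. To be clear, this is not the actual Greengrass SDK: the bucket name, object key, and connectivity probe are all placeholder assumptions meant to show the intermittent-sync idea.

```python
# Rough sketch of the Greengrass division of labor (placeholder names; not
# the actual Greengrass SDK): infer locally, sync with the cloud when online.
import socket
import boto3
import joblib

BUCKET, MODEL_KEY = "example-models", "model.joblib"  # hypothetical S3 location

def cloud_reachable(host="s3.amazonaws.com", port=443, timeout=2):
    """Cheap connectivity probe; a real device would use something sturdier."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def maybe_refresh_model(local_path="model.joblib"):
    """When connectivity is present, pull the latest trained model from S3."""
    if cloud_reachable():
        boto3.client("s3").download_file(BUCKET, MODEL_KEY, local_path)
    return joblib.load(local_path)

model = maybe_refresh_model()
# ...the device keeps running local inference with `model` while offline,
# buffering results to upload the next time the cloud is reachable.
```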

ML, IoT, and Edge Computing

This division of labor is a natural fit for IoT settings, where systems may need real-time inferences without the ready availability of massive computing resources. Some potential applications of this technology include:

  • Industrial settings, where sensors can monitor activity levels (temperature, noise, etc.), detect anomalous behavior (see the sketch after this list), and proactively schedule inspections or repairs to keep the factory running at peak operational efficiency.
  • Agriculture, where IoT sensors can monitor traditional or greenhouse crops, measuring temperature, moisture, acidity, and other factors to predict crop yields and help farmers adjust their farming practices in response to environmental changes.
  • Retail and entertainment settings, where cameras may be installed with object and facial recognition algorithms to monitor crowds and improve customer service.
  • Security and law enforcement, where facial recognition and scene analysis can be used to proactively detect and identify potential threats.
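As a toy illustration of the anomaly-detection idea in the first bullet, here's a minimal rolling-statistics detector. The window size and threshold are arbitrary assumptions, and a production system would typically use a trained model rather than a simple z-score:

```python
# Toy version of the industrial monitoring idea: flag sensor readings that
# deviate sharply from the recent rolling mean. Thresholds are illustrative.
from collections import deque
import statistics

WINDOW, THRESHOLD = 50, 3.0  # last 50 readings; flag beyond 3 std devs
recent = deque(maxlen=WINDOW)

def check_reading(value):
    """Return True if `value` looks anomalous versus recent history."""
    is_anomaly = False
    if len(recent) >= 10:  # need some history before judging
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent)
        is_anomaly = stdev > 0 and abs(value - mean) > THRESHOLD * stdev
    recent.append(value)
    return is_anomaly  # True -> e.g., proactively schedule an inspection
```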

Additionally, AWS Greengrass promises to be flexible and powerful when it comes to both hardware and frameworks. You can upload your own pre-trained model to Amazon S3, but ML Inference also includes packages for TensorFlow Lite and Apache MXNet and can support other popular machine learning frameworks like Caffe2 and Microsoft Cognitive Toolkit. In terms of hardware, ML Inference can run on devices powered by Raspberry Pi, Intel Atom, or Nvidia Jetson TX2 and also gives you access to a device's GPU, which is really useful for certain applications (like cryptography and mining digital currencies).
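For the "upload your own pre-trained model" path, the upload itself is a one-liner with boto3; the bucket and key names below are placeholders:

```python
# Hypothetical example of pushing a locally saved model artifact to S3,
# where it can then be distributed to edge devices. Names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file("model.joblib", "example-models", "model.joblib")
```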

This article originally appeared on Upwork.

This article was written by Tyler Keenan from Business2Community and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to legal@newscred.com.