Using AWS IoT for Predictive Maintenance


Published on 28 JUN 2018

Categories: Amazon SageMaker, AWS IoT Analytics, AWS IoT Greengrass, Best Practices, Customer Solutions, Internet of Things, Manufacturing, Thought Leadership

The demand for machine learning applications in industrial and manufacturing environments is on the rise. Manufacturers increasingly want to identify potential machine failures in advance so that maintenance can be scheduled more effectively. For instance, consider a machine that is sensitive to variations in temperature, velocity, or pressure. Detecting these fluctuations early can indicate an impending failure.

Prediction, often referred to as inference, relies on machine learning (ML) models built from extensive datasets covering each component in the system. These models are trained with specific algorithms that capture the relationships embedded in the training data. Using these ML models, manufacturers can evaluate incoming data from their systems in near real time. A failure is predicted when new data statistically matches the historical patterns that preceded failures in the equipment.

Typically, a distinct ML model is built for each machine type or subprocess, based on its unique data characteristics. The result is a collection of ML models covering the critical machinery in the manufacturing process and the various predictions being sought. ML models can analyze newly arriving data in the AWS Cloud, but inference can also be performed on site, which yields significantly lower latency and enables real-time evaluation. Local inference also avoids the cost of transferring potentially large datasets to the cloud.

AWS offers a suite of services that streamline the development and training of ML models for automated deployment at the edge, making the process highly scalable and straightforward. You begin by collecting data from the equipment or systems for which predictions are desired and utilize AWS services to construct ML models in the cloud. Subsequently, these models can be transferred back to on-premises locations, where they can be employed with a simple AWS Lambda function to evaluate new data on a local server running AWS Greengrass.

AWS Greengrass enables local computation, messaging, and ML inference, among other functionalities. It includes a lightweight IoT broker that operates on your own hardware, positioned close to the connected equipment. This broker securely communicates with numerous IoT devices and acts as a gateway to AWS IoT Core, where select data can undergo further processing. Moreover, AWS Greengrass can execute AWS Lambda functions for local data processing or evaluation, reducing reliance on continuous cloud connectivity.

Building ML Models

Before initiating maintenance predictions, you need to construct and train ML models. A high-level ML process applicable to most use cases can be effortlessly implemented with AWS IoT.

Begin by gathering relevant data for the ML problem you are addressing and transmitting it to AWS IoT Core. This data should originate from the machine or system associated with each ML model. A dedicated AWS Direct Connect link between the machines' on-premises location and AWS IoT Core supports high-volume data transfers. Depending on the data volume, you may need to stagger data collection across your machines, working in batches.
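
As a minimal sketch of this ingestion step, the following snippet publishes one reading to AWS IoT Core using the AWS IoT Device SDK for Python. The endpoint, certificate file names, topic, machine ID, and field names are all illustrative assumptions, not values from this post:

```python
# Hypothetical device-side publisher: sends one telemetry reading to
# AWS IoT Core over MQTT with TLS mutual authentication.
import json
import time
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

client = AWSIoTMQTTClient("press-line-01")  # unique client ID (placeholder)
client.configureEndpoint("example-ats.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-ca.pem", "private.key", "certificate.pem")
client.connect()

reading = {
    "machine_id": "press-line-01",
    "timestamp": int(time.time() * 1000),
    "temperature_c": 71.4,
    "velocity_rpm": 1180,
    "pressure_kpa": 412.7,
}
# QoS 1: the broker acknowledges receipt before the client moves on.
client.publish("factory/press-line-01/telemetry", json.dumps(reading), 1)
client.disconnect()
```

In practice this loop would run continuously on or near the machine, batching readings as needed.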

Alternatively, an AWS Snowball appliance can securely transfer substantial amounts of data to your private AWS account using a hardened storage device shipped via a package delivery service. The data is then transferred from AWS Snowball to designated Amazon S3 buckets within your account.

AWS IoT Analytics provides efficient data storage and pipeline processing to enrich and filter data for future use in ML model development. This service also supports feature engineering through custom AWS Lambda functions that you can create to derive new attributes for data classification. Visualizing pipeline processing results in AWS IoT Analytics with Amazon QuickSight helps validate any transformations or filters you apply.
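
For illustration, an AWS IoT Analytics pipeline Lambda activity receives a batch of messages as a Python list and must return the (possibly transformed) list. The attribute names and limits below are assumptions:

```python
# Hypothetical Lambda handler used as an AWS IoT Analytics pipeline activity.
# IoT Analytics invokes it with a list of messages and expects the
# transformed list back.
def lambda_handler(event, context):
    for message in event:
        temp = message.get("temperature_c")
        pressure = message.get("pressure_kpa")
        if temp is not None and pressure is not None:
            # Derived attributes (feature engineering) for later training.
            message["temp_pressure_ratio"] = round(temp / pressure, 4)
            message["out_of_band"] = temp > 85.0 or pressure > 500.0
    return event
```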

Amazon SageMaker directly integrates with AWS IoT Analytics as a data source. Jupyter Notebook templates are available to expedite the process of building and training ML models. For predictive maintenance scenarios, linear regression and classification are the two most prevalent algorithms employed. Various other algorithms can also be explored for time-series data prediction, allowing you to test and measure the effectiveness of each in your application. Additionally, AWS Greengrass ML Inference supports pre-built packages for Apache MXNet, TensorFlow, and Chainer, simplifying the deployment process. You may also leverage other ML frameworks with additional configurations, such as the popular Python library scikit-learn for data analysis.
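
A notebook cell in Amazon SageMaker might pull the processed dataset from AWS IoT Analytics and fit a simple classifier, roughly as sketched below. The dataset name, column names, and the choice of a scikit-learn logistic regression are illustrative assumptions:

```python
# Hypothetical notebook cell: load an AWS IoT Analytics dataset and train a
# basic failure classifier.
import boto3
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

iota = boto3.client("iotanalytics")
content = iota.get_dataset_content(datasetName="press_line_dataset",
                                   versionId="$LATEST")
df = pd.read_csv(content["entries"][0]["dataURI"])  # pre-signed S3 URL

features = df[["temperature_c", "velocity_rpm", "pressure_kpa"]]
labels = df["failed_within_24h"]  # assumed binary label column

X_train, X_test, y_train, y_test = train_test_split(features, labels,
                                                    test_size=0.2)
model = LogisticRegression()
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```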

Cost-Effectiveness

Many users appreciate the flexibility of the AWS Cloud and its pay-as-you-go pricing model. When ML models are built and trained, or later retrained, vast quantities of raw data are transmitted to AWS IoT Core, and substantial compute resources are needed to accelerate processing with Amazon SageMaker. Once the ML models are finalized, the raw data can be archived in a lower-cost storage service such as Amazon Glacier or deleted altogether, and the compute resources allocated for training can be released, reducing costs.
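
One way to implement the archival step is an S3 lifecycle rule that transitions raw training data to Amazon Glacier after a retention window. In this sketch, the bucket name, prefix, and 30-day window are assumptions:

```python
# Hypothetical lifecycle rule: move raw telemetry under raw/ to Glacier
# after 30 days.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="factory-raw-telemetry",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-raw-training-data",
            "Filter": {"Prefix": "raw/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```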

Deploying ML Models to the Edge

Executing predictions locally requires real-time machine data, the ML model, and local compute resources to perform inference. AWS Greengrass deploys ML models built with Amazon SageMaker to the edge, where an AWS Lambda function performs the inference. Identical machines can receive the same deployment package containing both the ML model and the inference Lambda function, creating a low-latency solution. This setup removes the dependency on AWS IoT Core for evaluating real-time data and issuing alerts or commands to machinery when necessary.
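
Once a Greengrass group version bundles the inference Lambda function and the ML model resource, a deployment pushes that package to the core device. A minimal boto3 sketch, with placeholder IDs:

```python
# Hypothetical deployment trigger: push a new Greengrass group version
# (inference Lambda + ML model resource) to the core device.
import boto3

gg = boto3.client("greengrass")
response = gg.create_deployment(
    GroupId="your-greengrass-group-id",            # placeholder
    GroupVersionId="your-group-version-id",        # placeholder
    DeploymentType="NewDeployment",
)
print("Deployment ID:", response["DeploymentId"])
```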

Conducting Local Predictions

The AWS Lambda function associated with the ML model, as part of the AWS Greengrass deployment configuration, conducts real-time predictions. The AWS Greengrass message broker directs selected data published on a designated MQTT topic to the AWS Lambda function for inference. When inference indicates a high probability of a match, multiple actions can be executed within the AWS Lambda function. For instance, a shutdown command could be dispatched to a machine, or alerts could be sent to the operations team via local or cloud messaging services.

For each ML model, you must establish the inference confidence threshold that corresponds to a predicted failure condition. For example, if an inference for a monitored machine shows a high confidence level (say 90%), appropriate actions should be taken; if the confidence level is only 30%, you might choose not to act on that result. AWS IoT Core can also be used to publish inference outcomes on a dedicated logging and reporting topic.
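
Putting these pieces together, the following sketch shows a long-lived Lambda function running on AWS Greengrass that applies the 90% threshold described above. The model path, topic names, feature order, and the use of a pickled scikit-learn model are illustrative assumptions:

```python
# Hypothetical Greengrass Lambda performing local inference. Greengrass
# routes messages from the subscribed MQTT topic to lambda_handler.
import json
import pickle

import greengrasssdk

iot = greengrasssdk.client("iot-data")

# Loaded once when the Lambda container starts, then reused per message.
# The path below is an assumed ML resource destination path.
with open("/greengrass-machine-learning/model.pkl", "rb") as f:
    model = pickle.load(f)

FAILURE_THRESHOLD = 0.9  # act only on high-confidence predictions


def lambda_handler(event, context):
    features = [[event["temperature_c"], event["velocity_rpm"],
                 event["pressure_kpa"]]]
    probability = model.predict_proba(features)[0][1]

    if probability >= FAILURE_THRESHOLD:
        # High confidence: command the machine locally, no cloud round trip.
        iot.publish(topic="factory/press-line-01/commands",
                    payload=json.dumps({"action": "shutdown"}))
    # Always record the result on a logging and reporting topic.
    iot.publish(topic="factory/inference/log",
                payload=json.dumps({"machine": event.get("machine_id"),
                                    "failure_probability": probability}))
```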

Another critical consideration for local inference is ensuring sufficient server capacity, or multiple servers, to handle the required computational load; a rough sizing sketch follows the list. Factors influencing hardware sizing include:

  • The number of machines being monitored (for example, are you monitoring 1 or 100 machines?)
  • The amount of data transmitted from each machine (for instance, is it 50,000 bytes or 1,000 bytes?)
  • The frequency of data transmission from each machine (for example, is it once a minute or every 10 milliseconds?)
  • The CPU intensity of the ML model during inference and its memory requirements (some models may require additional system resources and could benefit from GPUs, for example)
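
As a back-of-the-envelope illustration of how these factors combine, the numbers below are assumptions, not benchmarks:

```python
# Rough sizing sketch using the factors above (illustrative values only).
machines = 100          # number of monitored machines
bytes_per_msg = 1_000   # payload size per reading
msgs_per_sec = 100      # one reading every 10 milliseconds
inference_ms = 2.0      # assumed CPU time per inference

ingest_mb_per_sec = machines * bytes_per_msg * msgs_per_sec / 1e6
cpu_cores_needed = machines * msgs_per_sec * inference_ms / 1000.0

print(f"Ingest rate: {ingest_mb_per_sec:.1f} MB/s")        # 10.0 MB/s
print(f"CPU cores for inference: {cpu_cores_needed:.0f}")  # ~20 cores
```

Under these assumptions, a single modest server would be undersized, which is why the message rate and per-inference cost should be measured before choosing edge hardware.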

By effectively leveraging AWS IoT for predictive maintenance, manufacturers can enhance operational efficiency and minimize unplanned downtime.
