Enhancing Safety and Logistics at Well Pads with Amazon Machine Learning Services

Introduction

In the context of remote upstream oil and gas facilities, such as well pads, energy companies often engage various service contractors to deliver items, execute services, and remove items from the site. These locations frequently lack permanent personnel, making it difficult for operators to monitor who is accessing the facilities. For safety reasons, it is crucial to ensure that everyone present is wearing the necessary personal protective equipment (PPE), such as hard hats. This solution employs custom machine learning models developed with Amazon Rekognition, deployed on an AWS DeepLens camera, to:

  • Detect the presence or absence of properly worn hard hats.
  • Identify trucks entering the facility by license plate and any unique markings.

The camera executes the model on-site, transmitting alerts only when people or vehicles are detected. This approach is compatible with edge devices operating on AWS IoT Greengrass, although we use AWS DeepLens for demonstration purposes.
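
As a minimal sketch of this edge pattern, the following code shows how a Greengrass-hosted function might filter frames and publish an alert only when an object of interest is detected. The detect_objects() helper, the MQTT topic name, the confidence threshold, and the metadata fields are illustrative assumptions rather than code from the deployed solution.

```python
# Minimal sketch of an edge inference loop on AWS IoT Greengrass (v1 Core SDK).
# Assumptions: detect_objects() is a hypothetical wrapper around the on-device
# model, and the topic name and metadata fields are illustrative placeholders.
import json
import time

import greengrasssdk

iot_client = greengrasssdk.client("iot-data")  # publishes to AWS IoT Core
ALERT_TOPIC = "wellpad/camera01/detections"    # placeholder topic
OBJECTS_OF_INTEREST = {"person", "truck"}      # labels worth alerting on


def detect_objects(frame):
    """Hypothetical helper: run the local model and return (label, confidence) pairs."""
    raise NotImplementedError


def process_frame(frame, camera_id="camera01", gps=(31.99, -102.08)):
    detections = [
        {"label": label, "confidence": conf}
        for label, conf in detect_objects(frame)
        if label in OBJECTS_OF_INTEREST and conf > 0.7
    ]
    if not detections:
        return  # nothing of interest: send nothing over the network

    payload = {
        "camera_id": camera_id,
        "gps": {"lat": gps[0], "lon": gps[1]},
        "timestamp": int(time.time()),
        "detections": detections,
    }
    iot_client.publish(topic=ALERT_TOPIC, payload=json.dumps(payload))
```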

This solution leverages computer vision to enhance logistics and safety at oil and gas locations by utilizing AWS DeepLens, Amazon Rekognition, and Amazon Textract. AWS offers additional guidance for computer vision projects, which includes deploying a hard hat detection model with AWS DeepLens, constructing a custom image classifier with Amazon SageMaker, labeling datasets with Amazon SageMaker Ground Truth, and managing computer vision at the edge through AWS Panorama.

Motivating Use Cases

Transporting water from well pads constitutes a significant portion of the operational costs for oil production in the Permian Basin. Vendors responsible for removing produced water often utilize basic ticketing systems that contribute to data management challenges. Operators managing multiple wells frequently struggle to consolidate data about produced water from various sources, as there is no unified source of truth.

Verifying volumes can be difficult, as operators may have to depend on vendor invoices that sometimes lack accuracy due to billing methods, such as the use of paper tickets or compatibility issues with electronic systems. The surge in demand for water hauling and oil field services in the Permian Basin has resulted in claims of vendor misconduct, with some vendors allegedly billing for more trips than were actually made or even for trips that did not occur. With data siloed across various systems, operators face challenges integrating water hauling data with other production recording systems, like hydrocarbon accounting tools.

The primary goal of this solution is to establish an internal source of truth for truck visits to well pads and delivery points. Computer vision techniques are employed to capture images and measure visit durations by vendors. The solution is engineered for deployment at remote sites with limited connectivity and power requirements. Details of vendor truck visits are integrated with reference data, such as site locations or water hauling vendors, to validate and detect anomalies.

Moreover, the same camera can monitor worker safety by applying a tailored computer vision model for detecting hard hats, showcasing how one device can manage multiple models simultaneously.

Overview of AWS Services Used

Amazon Rekognition is a managed computer vision service that offers pretrained models for direct deployment, which can be supplemented with additional objects of interest. It can also be trained to identify a custom set of objects, such as hard hats, delivering industry-specific value, and it can host trained computer vision models for real-time inference.

AWS DeepLens pairs a camera with an onboard computer that can host these models at the edge, eliminating the need to transmit large volumes of video over the network and enabling real-time application of machine learning with the computer vision model.

Amazon Textract is a machine learning service that automatically extracts text from images, going beyond basic Optical Character Recognition (OCR) by detecting and analyzing document structure. It can extract specific data from forms as key-value pairs.
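
To make these roles concrete, here is a hedged sketch of cloud-side calls to Amazon Rekognition Custom Labels (for a trained hard hat model) and Amazon Textract form analysis. The bucket, object key, project version ARN, and confidence threshold are placeholders, not values from the deployed solution.

```python
# Sketch of cloud-side calls to Amazon Rekognition Custom Labels and Amazon Textract.
# The bucket, key, project version ARN, and thresholds below are placeholders.
import boto3

rekognition = boto3.client("rekognition")
textract = boto3.client("textract")


def detect_hard_hats(bucket, key, model_arn, min_confidence=70):
    """Run a trained Rekognition Custom Labels model against an image in S3."""
    response = rekognition.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return [(lbl["Name"], lbl["Confidence"]) for lbl in response["CustomLabels"]]


def extract_ticket_fields(bucket, key):
    """Use Textract form analysis to pull key-value pairs from a scanned ticket."""
    response = textract.analyze_document(
        Document={"S3Object": {"Bucket": bucket, "Name": key}},
        FeatureTypes=["FORMS"],
    )
    # Blocks of type KEY_VALUE_SET carry the detected form fields; further
    # parsing is needed to pair keys with values (omitted for brevity).
    return [b for b in response["Blocks"] if b["BlockType"] == "KEY_VALUE_SET"]
```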

Well pads typically have limited space, power availability for machinery, intermittent internet connectivity, and are often located in remote areas. Well operators aim to minimize costs, and developing custom computer vision solutions requires substantial investments in data science and engineering resources. Amazon Rekognition, AWS DeepLens, and Amazon Textract are fully managed services that alleviate the burdensome groundwork of applying computer vision at the edge, allowing operators to concentrate on their core competencies.

Architectural Diagram of Solution

To illustrate the solution prototype, we captured images of residential delivery trucks to simulate industrial trucks on a well pad. Road-facing cameras running AWS IoT Greengrass with AWS Lambda were installed to analyze a real-time image stream. Images were processed on-site by an initial machine learning model on AWS DeepLens, and only images containing objects of interest, such as people or vehicles, were sent to the cloud through AWS IoT Core, significantly reducing the connectivity required by the solution.

Additional image analysis of company markings and logos was performed using Amazon Rekognition to identify the specific vendor providing the service. A virtual ledger of truck visits, including camera ID, truck operating company, visit time and duration, and links to source photos, was stored in an Amazon Aurora database. Amazon QuickSight dashboards and reports were created for user-friendly access, alongside Amazon CloudWatch metric dashboards and alerts via Amazon Simple Notification Service (Amazon SNS).
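
The ledger and alerting pieces might look roughly like the sketch below, which inserts one visit row into an Aurora MySQL-compatible table, publishes an SNS notification, and emits a CloudWatch metric. The table schema, topic ARN, metric namespace, and connection details are assumptions for illustration, and the pymysql client would need to be packaged with the Lambda function.

```python
# Sketch of ledger persistence and alerting. The table schema, SNS topic ARN,
# CloudWatch namespace, and database credentials are illustrative assumptions.
import boto3
import pymysql  # must be packaged with the Lambda deployment

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:truck-visit-alerts"  # placeholder


def record_visit(conn_params, visit):
    """Insert one truck-visit row into the Aurora ledger table (assumed schema)."""
    connection = pymysql.connect(**conn_params)
    try:
        with connection.cursor() as cursor:
            cursor.execute(
                "INSERT INTO truck_visits "
                "(camera_id, company, arrival_ts, duration_s, image_url) "
                "VALUES (%s, %s, %s, %s, %s)",
                (
                    visit["camera_id"],
                    visit["company"],
                    visit["arrival_ts"],
                    visit["duration_s"],
                    visit["image_url"],
                ),
            )
        connection.commit()
    finally:
        connection.close()

    # Notify operators and emit a metric for dashboards and alarms.
    sns.publish(
        TopicArn=ALERT_TOPIC_ARN,
        Subject="Truck visit recorded",
        Message=f"{visit['company']} visited site {visit['camera_id']}",
    )
    cloudwatch.put_metric_data(
        Namespace="WellPad/TruckVisits",
        MetricData=[{"MetricName": "VisitCount", "Value": 1, "Unit": "Count"}],
    )
```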

Here’s a summary of how the services support the solution:

  • Ingestion: AWS IoT Greengrass, AWS IoT Core, Amazon S3, AWS DeepLens
  • Processing: AWS Lambda, Amazon Rekognition, Amazon Textract
  • Monitoring: Amazon CloudWatch, Amazon Simple Notification Service (Amazon SNS)
  • Visualization: Amazon Aurora MySQL-Compatible Edition, Amazon QuickSight

High-level architectural diagram of the solution:

Steps:

  1. AWS DeepLens is designed to run machine learning models at the edge. Other cameras can also be used if connected to AWS IoT Greengrass. Trigger events, such as a truck’s arrival, signal on-site devices to capture:
    • Initial image
    • GPS location
    • Event duration
    • Object locations in the image
    • Final image
  2. Images are stored in an S3 bucket to maintain a complete history of trigger-event images. An S3 event notification triggers a Lambda function (see the sketch after this list).
  3. Image characteristics are identified:
    • Vehicle type
    • License plate
    • Company logos
  4. Reference data is accessed to determine the well pad or facility from latitude and longitude. Image indicators identify the water hauling vendor and vehicle if the license plate is recognized.
  5. Optionally, a relational database such as Amazon Aurora Serverless enables robust data analytics. Datasets in the database provide a comprehensive record of events:
    • Reference Data
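
Steps 2 through 4 can be sketched as a single S3-triggered Lambda handler, shown below. The Rekognition calls are the standard detect_labels and detect_text APIs; the reference plate table and vendor matching are hypothetical, and persisting the result to Aurora would follow the earlier ledger sketch.

```python
# Sketch of the S3-triggered Lambda handler covering steps 2-4. The vendor
# lookup table and plate-matching logic are hypothetical; persisting to Aurora
# would follow the earlier ledger sketch.
import boto3

rekognition = boto3.client("rekognition")

# Hypothetical reference data: known plates mapped to water-hauling vendors.
KNOWN_PLATES = {"ABC1234": "Acme Water Hauling"}


def lambda_handler(event, context):
    records = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        image = {"S3Object": {"Bucket": bucket, "Name": key}}

        # Step 3a: vehicle type and company logos via general label detection.
        labels = rekognition.detect_labels(Image=image, MaxLabels=20, MinConfidence=70)
        label_names = {l["Name"] for l in labels["Labels"]}

        # Step 3b: license plate text via text-in-image detection.
        text = rekognition.detect_text(Image=image)
        plate = next(
            (d["DetectedText"] for d in text["TextDetections"]
             if d["Type"] == "LINE" and d["DetectedText"].replace(" ", "") in KNOWN_PLATES),
            None,
        )

        # Step 4: resolve the vendor from reference data when the plate is known.
        vendor = KNOWN_PLATES.get(plate.replace(" ", "")) if plate else None

        records.append({
            "image": f"s3://{bucket}/{key}",
            "is_truck": "Truck" in label_names,
            "license_plate": plate,
            "vendor": vendor,
        })
    return records
```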
