AWS Deep Learning AMIs Now Include ONNX for Model Portability Across Frameworks


The AWS Deep Learning AMIs (DLAMI) for Ubuntu and Amazon Linux now include Open Neural Network Exchange (ONNX), enabling seamless model portability across deep learning frameworks. In this blog post, we introduce ONNX and demonstrate how to use it on the DLAMI to move models between frameworks.

What is ONNX?

ONNX is an open-source library and serialization format for encoding and decoding deep learning models. It defines the format for neural network computational graphs and an extensive list of operators used in neural network architectures. Popular deep learning frameworks and tools, including Apache MXNet, PyTorch, Chainer, Cognitive Toolkit, and TensorRT, already support ONNX. This growing adoption lets machine learning developers easily move models between tools, selecting whichever is best suited to the task at hand.
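
As a quick illustration of that operator coverage, the onnx Python package exposes its operator registry, which you can enumerate directly. This is a minimal sketch; the exact count and names depend on your installed ONNX version:

import onnx
from onnx import defs

# Each schema describes one ONNX operator: its name, inputs, outputs, and attributes
schemas = defs.get_all_schemas()
print('ONNX version:', onnx.__version__)
print('Registered operators:', len(schemas))
print('Examples:', sorted(s.name for s in schemas)[:5])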

Exporting a Chainer Model to ONNX

To export a Chainer model to an ONNX file, we’ll begin by launching an instance of the DLAMI on either Ubuntu or Amazon Linux. If you are unfamiliar with this process, refer to this helpful tutorial on getting started with the DLAMI.

After connecting to the DLAMI via SSH, activate the pre-installed and configured Chainer Python 3.6 Conda environment. This environment also includes ONNX and onnx-chainer, an add-on package that provides ONNX support for Chainer.

$ source activate chainer_p36
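
You can optionally confirm that the packages are available before continuing (exact versions vary by DLAMI release):

$ python -c "import chainer, onnx, onnx_chainer; print(chainer.__version__, onnx.__version__)"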

Next, start a Python shell and execute the following commands to load a VGG-16 convolutional neural network for object recognition and export it to an ONNX file.

import numpy as np
import onnx_chainer
from chainercv.links import VGG16

# Download a pre-trained model and load it
model = VGG16(pretrained_model='imagenet')

# Create synthetic input and export the model to ONNX
x = np.zeros((1, 3, 224, 224), dtype=np.float32)
out = onnx_chainer.export(model, x, filename='vgg16.onnx')

In just a few lines of code, you have successfully exported the Chainer model into ONNX format and saved it in the current directory.
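
Before moving on, you can optionally confirm that the exported file is well formed using the checker bundled with the onnx package. This is a sanity check only and is not required for the import step below:

import onnx

# Load the serialized model and run ONNX's structural validation
onnx_model = onnx.load('vgg16.onnx')
onnx.checker.check_model(onnx_model)
print('Graph inputs:', [i.name for i in onnx_model.graph.input])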

Importing an ONNX Model into MXNet

Now that the Chainer model has been exported to ONNX, let’s see how we can import this model into MXNet and run inference. Start by activating the DLAMI’s MXNet Python 3.6 Conda environment, which is equipped with ONNX and MXNet 1.2.1. This version of MXNet introduced the ONNX import API that we will use.

$ source deactivate
$ source activate mxnet_p36
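
As before, you can optionally confirm the MXNet version, since the ONNX import API used below was introduced in this release:

$ python -c "import mxnet as mx; print(mx.__version__)"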

Then start a Python shell and run the following commands to load the ONNX model you just exported from Chainer:

from mxnet.contrib import onnx as onnx_mxnet
sym, arg_params, aux_params = onnx_mxnet.import_model("vgg16.onnx")
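
To sanity-check the import, you can optionally list the graph's inputs and count the loaded parameters. The exact input names depend on how onnx-chainer labeled the graph, so treat the printed values as illustrative:

# The first entry of list_inputs() is the data input; the rest are parameter names
print('First inputs:', sym.list_inputs()[:3])
print('Loaded %d arg params and %d aux params' % (len(arg_params), len(aux_params)))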

You have now loaded the ONNX model into MXNet, and the symbolic graph and parameters are readily available. Let’s proceed to execute inference with the newly loaded model. First, download an image and the ImageNet class labels that the model was trained on, to test your object recognition model.

import mxnet as mx
mx.test_utils.download('https://s3.amazonaws.com/onnx-mxnet/dlami-blogpost/hare.jpg')
mx.test_utils.download('http://data.mxnet.io/models/imagenet/synset.txt')

with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

Our test input, hare.jpg, is a photo of a hare.

Next, we will load the image and pre-process it into a tensor that matches the model’s required input shape:

import matplotlib.pyplot as plt
import numpy as np
from mxnet import nd

image = plt.imread("hare.jpg")
image = np.expand_dims(np.transpose(image, (2,0,1)), axis=0).astype(np.float32)
input_data = nd.array(image)
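
The code above assumes the image already matches the model's 224×224 input size. If you want to try your own picture, a minimal sketch using MXNet's image utilities also handles resizing (your_image.jpg is a placeholder file name):

import mxnet as mx
from mxnet import nd

# Read as an HWC uint8 NDArray, resize to 224x224, then convert to NCHW float32
img = mx.image.imread('your_image.jpg')
img = mx.image.imresize(img, 224, 224)
img = nd.transpose(img.astype('float32'), (2, 0, 1)).expand_dims(axis=0)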

Now it’s time to initialize and bind our MXNet module:

# The first graph input is the data; the remaining inputs are model parameters
input_name = sym.list_inputs()[0]
data_shapes = [(input_name, input_data.shape)]

# Initialize the Module and bind it for CPU inference
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), data_names=[input_name], label_names=None)
mod.bind(for_training=False, data_shapes=data_shapes, label_shapes=None)
mod.set_params(arg_params=arg_params, aux_params=aux_params)

Finally, run inference and display the top probability and class:

mod.forward(mx.io.DataBatch([input_data]))
probabilities = mod.get_outputs()[0].asnumpy()[0]
max_probability = np.max(probabilities)
max_class = labels[np.argmax(probabilities)]

print('Highest probability=%f, class=%s' % (max_probability, max_class))

The output reveals the model’s prediction. It identifies the image as a hare with 97.9% confidence!
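
If you want to see more than the single best guess, a small extension ranks the same probabilities array and prints the top five predictions:

# Indices of the five highest-probability classes, best first
top5 = np.argsort(probabilities)[::-1][:5]
for i in top5:
    print('probability=%f, class=%s' % (probabilities[i], labels[i]))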

Conclusion and Getting Started with the Deep Learning AMIs

In this blog post, you learned how to use ONNX on the DLAMI to transfer models across frameworks. The portability provided by ONNX allows you to select the most suitable tool for various tasks, whether it’s training a new model, fine-tuning a pre-trained one, conducting inference, or serving models.

To get started with the AWS Deep Learning AMIs, check out our getting started tutorial. For more information, explore the DLAMI ONNX tutorials and our developer guide for additional resources and release notes. The latest AMIs are available on the AWS Marketplace. You can also subscribe to our discussion forum for new launch announcements and to ask any questions you may have.


About the Author

Chanci Turner is a Software Development Engineer for AWS Deep Learning, focusing on building deep learning systems and tools to democratize AI. In her free time, she enjoys reading and biking.
