Introduction
The emergence of Free Ad-supported Streaming Television (FAST) channels has facilitated the repurposing and distribution of archival content, encompassing classic films and TV shows, on contemporary platforms and devices. Much of this content is only available in lower-resolution, standard definition (SD) formats, necessitating enhancement to meet audience expectations. Traditionally, simple methods like Lanczos and bicubic upscaling have been employed, but these often result in image artifacts such as blurring and pixelation.
Deep learning techniques, such as Super-Resolution Convolutional Neural Network (SRCNN) and Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR), have demonstrated impressive results in objective quality assessments, including VMAF, SSIM, and PSNR. However, these methods can be computationally intensive, which can be a challenge for channels with limited budgets, especially for FAST offerings. To address this, Amazon Web Services (AWS) and Intel present a cost-effective solution for video super-resolution. This approach utilizes AWS Batch to process video assets via the Intel Library for Video Super Resolution (VSR), striking a balance between quality and performance tailored for real-world applications. In this blog, we provide a step-by-step guide using an AWS CloudFormation template.
Solution
Implementing the Intel Library for Video Super Resolution, which is based on the enhanced RAISR algorithm, requires specific Amazon EC2 instance types, including c5.2xlarge, c6i.2xlarge, and c7i.2xlarge. We use AWS Batch to run the processing jobs and automate the entire pipeline, eliminating the need to manage the underlying infrastructure, including starting and stopping instances.
The primary components of the solution include:
- Creating a Compute Environment in AWS Batch that defines CPU requirements and the types of EC2 instances that will be permitted.
- Establishing a Job Queue linked to the appropriate compute environment. Each job submitted to this queue will run on the specified EC2 instances.
- Defining the Job. At this stage, it’s essential to have a container registered in the Amazon Elastic Container Registry (Amazon ECR). Detailed instructions for building the Docker image are available in this GitHub link. The container is configured to include the Intel Library for VSR, the open-source FFmpeg tool, and the AWS Command Line Interface (AWS CLI) for API calls to S3 buckets. Once the job definition is complete (with the image registered in Amazon ECR), jobs can start being submitted to the queue.
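The CloudFormation template described later provisions these resources automatically, but the equivalent AWS CLI calls sketch what each component defines. All names, ARNs, and capacity values below are illustrative, not the ones the template creates:

```shell
# Managed compute environment restricted to the supported instance types
aws batch create-compute-environment \
  --compute-environment-name VideoSuperResolution \
  --type MANAGED \
  --compute-resources '{
    "type": "EC2",
    "minvCpus": 0,
    "maxvCpus": 16,
    "instanceTypes": ["c5.2xlarge", "c6i.2xlarge", "c7i.2xlarge"],
    "subnets": ["subnet-0123456789abcdef0"],
    "instanceRole": "ecsInstanceRole"
  }'

# Job queue bound to that compute environment
aws batch create-job-queue \
  --job-queue-name queue-vsr \
  --priority 1 \
  --compute-environment-order order=1,computeEnvironment=VideoSuperResolution

# Job definition pointing at the VSR container image in Amazon ECR
aws batch register-job-definition \
  --job-definition-name vsr-jobDefinition \
  --type container \
  --container-properties '{
    "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/vsr:latest",
    "vcpus": 4,
    "memory": 4000
  }'
```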
The diagram below illustrates the overall architecture as described:
Implementation
A CloudFormation template is available in this GitHub repository. The following steps outline how to deploy the proposed solution:
- Download the YAML file from the GitHub repository.
- Navigate to CloudFormation from the AWS Console to create a new stack using template.yml.
This template allows for defining several parameters:
- Memory: This specifies the memory associated with the job definition. For example, a 1080p AVC 30 fps video with a duration of 15:00 may require 4,000 MiB of memory and 4 vCPUs.
- Subnet: AWS Batch will deploy the appropriate EC2 instance types (c5.2xlarge, c6i.2xlarge, and c7i.2xlarge) within a chosen customer subnet that has Internet access.
- VPCName: This refers to the existing virtual private cloud (VPC) associated with the selected subnet.
- VSRImage: This field typically utilizes an existing public image, but users can create their own image and insert the URL here. Instructions for creating a custom image can be found here.
- VCPU: This represents the number of virtual CPUs (vCPUs) allocated to the job definition, which can also be adjusted.
The next step is to create a CloudFormation stack using the defined parameters.
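Equivalently, the stack can be created from the command line. The parameter keys below are assumed to match those defined in template.yml, and the subnet, VPC, and image values are placeholders:

```shell
aws cloudformation create-stack \
  --stack-name vsr-stack \
  --template-body file://template.yml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=Memory,ParameterValue=4000 \
    ParameterKey=VCPU,ParameterValue=4 \
    ParameterKey=Subnet,ParameterValue=subnet-0123456789abcdef0 \
    ParameterKey=VPCName,ParameterValue=vpc-0123456789abcdef0 \
    ParameterKey=VSRImage,ParameterValue=<image-uri>
```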
Once the stack is successfully created, two new Amazon S3 buckets with names prefixed vsr-input and vsr-output should appear. Upload an SD file to the vsr-input-xxxx-{region-name} bucket.
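The upload can also be done with the AWS CLI; the bucket suffix and region shown here are placeholders that will vary per deployment:

```shell
aws s3 cp input-low-resolution.ts s3://vsr-input-xxxx-us-east-1/
```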
Next, access the Batch section from the AWS console, where you can confirm that a new queue (queue-vsr) and compute environment (VideoSuperResolution) have been established.
In the Batch dashboard, select Jobs from the menu, click on Submit a new job, and then choose the appropriate job definition (vsr-jobDefinition-xxxx) and queue (queue-vsr). On the subsequent screen, click Load from job definition and modify the names of the input and output files. For instance, if a user uploads a file named input-low-resolution.ts, they may want to designate the output file as output-high-resolution.ts. To implement this, add the appropriate array of Linux commands on the next screen.
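As a sketch, the same job can be submitted from the CLI. The command array below assumes the container exposes the Intel Library for VSR as an FFmpeg filter named raisr (per the library's FFmpeg plugin) and uses placeholder bucket and resource names:

```shell
aws batch submit-job \
  --job-name upscale-example \
  --job-queue queue-vsr \
  --job-definition vsr-jobDefinition-xxxx \
  --container-overrides '{
    "command": ["/bin/sh", "-c",
      "aws s3 cp s3://vsr-input-xxxx-us-east-1/input-low-resolution.ts . && ffmpeg -i input-low-resolution.ts -vf raisr -c:a copy output-high-resolution.ts && aws s3 cp output-high-resolution.ts s3://vsr-output-xxxx-us-east-1/"]
  }'
```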
After reviewing and submitting the job, wait for the status to transition from Submitted through Runnable and Running to Succeeded. The AWS console will display additional details, such as the number of job attempts and other relevant information.
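Job progress can also be polled from the CLI, using the job ID returned by submit-job:

```shell
aws batch describe-jobs --jobs <job-id> --query 'jobs[0].status'
```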
To validate that the super-resolution file has been created, navigate to the output Amazon S3 bucket where it should have been uploaded automatically.
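For example, with the placeholder bucket name used above:

```shell
aws s3 ls s3://vsr-output-xxxx-us-east-1/
```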
Comparing Quality
To conduct a subjective quality assessment between the original and super-resolution videos, the open-source tool compare-video can be utilized. Additionally, an objective evaluation can be performed using VMAF. For this evaluation, the lower-resolution video is first upscaled with a traditional method such as Lanczos or bicubic so that both inputs have the same resolution before the frame-by-frame comparison.
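As a sketch, the VMAF score can be computed with an FFmpeg build that includes libvmaf. Here the SD original is upscaled to 1080p with Lanczos (an assumed target resolution) and used as the reference input, with the super-resolution output as the distorted input:

```shell
ffmpeg -i output-high-resolution.ts -i input-low-resolution.ts \
  -lavfi "[1:v]scale=1920:1080:flags=lanczos[ref];[0:v][ref]libvmaf" \
  -f null -
```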
Conclusion
To clean up and remove the example stack created during this solution, visit the AWS console and delete the stack. This removes the AWS Batch compute environment, job queue, and job definition provisioned by the template. Note that the Amazon S3 buckets may need to be emptied before the stack can be deleted.