This article is authored by Alex Smith and Jamie Lee, Solutions Architects specializing in serverless technologies. In the first segment of this blog series, we discussed methods to optimize costs associated with AWS Lambda through appropriate memory allocation and code enhancements. We also highlighted how leveraging Graviton2, Provisioned Concurrency, and Compute Savings Plans could lead to lower per-millisecond billing.

In this second part, we delve deeper into cost optimization strategies for Lambda, emphasizing architectural enhancements and budget-friendly logging practices.

Event Filtering

A prevalent pattern in serverless architecture involves Lambda functions processing events from queues or streams, such as Amazon SQS or Amazon Kinesis Data Streams. This utilizes event source mapping, which dictates how the Lambda service manages incoming messages or records.

However, there are instances where processing every message in the queue or stream isn’t necessary due to irrelevant data. For instance, if IoT vehicle data is streamed to Kinesis and you only wish to consider events where tire_pressure is < 32, the Lambda code might appear as follows:

def lambda_handler(event, context):
    # Ignore events with normal tire pressure; only low readings need processing
    if event["tire_pressure"] >= 32:
        return

    # business logic goes here

This approach is inefficient, as you incur costs for Lambda invocations and execution time when filtering serves no business purpose. Fortunately, Lambda now supports pre-invocation message filtering, which not only streamlines your code but also minimizes costs. You’ll only incur charges for Lambda when the event meets the filter criteria and triggers an invocation.

Filtering is available for Kinesis Data Streams, Amazon DynamoDB Streams, and Amazon SQS by specifying filter criteria when you set up the event source mapping. For instance, you can use the following AWS CLI command:

aws lambda create-event-source-mapping \
    --function-name tire-pressure-evaluator \
    --batch-size 100 \
    --starting-position LATEST \
    --event-source-arn arn:aws:kinesis:us-east-1:123456789012:stream/vehicle-telemetry \
    --filter-criteria '{"Filters": [{"Pattern": "{\"tire_pressure\": [{\"numeric\": [\"<\", 32]}]}"}]}'

By applying the filter, Lambda is only activated when tire_pressure is below 32 in the messages received from the Kinesis Stream, signaling a potential issue that requires attention.
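To build intuition for how a numeric filter pattern behaves, here is a simplified, illustrative re-implementation of the matching logic in Python. This is only a sketch for understanding the semantics; the real evaluation happens inside the Lambda service before your function is ever invoked, and the actual rule engine supports more condition types than shown here.

```python
import operator

# Comparison operators supported by this simplified sketch
OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
       ">=": operator.ge, "=": operator.eq}

def matches(event, pattern):
    """Return True if every field in the filter pattern matches the event."""
    for field, conditions in pattern.items():
        value = event.get(field)
        ok = False
        for cond in conditions:
            if isinstance(cond, dict) and "numeric" in cond:
                op, bound = cond["numeric"]
                if value is not None and OPS[op](value, bound):
                    ok = True
            elif cond == value:  # exact-match rule
                ok = True
        if not ok:
            return False
    return True

pattern = {"tire_pressure": [{"numeric": ["<", 32]}]}
matches({"tire_pressure": 28}, pattern)  # True: the function would be invoked
matches({"tire_pressure": 35}, pattern)  # False: no invocation, no charge
```

Only events that satisfy every field in the pattern reach your function, which is what makes the filter a cost control rather than just a convenience.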

For additional details on creating filters, refer to the examples of event pattern rules in Amazon EventBridge, as Lambda filters messages using the same syntax. Event filtering is discussed in greater depth in the Lambda event filtering launch blog.

Avoiding Idle Wait Time

The duration of Lambda functions is a factor in billing calculations. When the function code makes a blocking call, you’re charged for the time spent waiting for a response. This idle wait time can accumulate when multiple Lambda functions are chained or when one function orchestrates others. For workflows like batch operations or order deliveries, this adds extra management overhead. Furthermore, completing all workflow logic and error handling may not be feasible within Lambda’s maximum timeout of 15 minutes.

To address this, consider re-architecting your solution using AWS Step Functions to orchestrate the workflow. With a Standard workflow, you are billed for each state transition rather than the total duration of the workflow. Additionally, you can delegate retries, wait conditions, error handling, and callbacks to the state machine, allowing your Lambda functions to concentrate on core business logic.

The following example illustrates a Step Functions state machine where a single Lambda function is divided into multiple states. During the wait period, no charges are incurred; you’re only billed for state transitions.
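A minimal sketch of such a state machine follows, expressed in Amazon States Language. The function names, ARNs, and the one-hour wait are hypothetical placeholders; the point is that the Wait state costs nothing while it runs.

```json
{
  "Comment": "Hypothetical order workflow; no compute charge accrues during the Wait state",
  "StartAt": "SubmitOrder",
  "States": {
    "SubmitOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:submit-order",
      "Next": "WaitForDelivery"
    },
    "WaitForDelivery": {
      "Type": "Wait",
      "Seconds": 3600,
      "Next": "ConfirmDelivery"
    },
    "ConfirmDelivery": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:confirm-delivery",
      "End": true
    }
  }
}
```

Compare this with a single Lambda function that sleeps or polls for an hour: the function would be billed for the entire idle duration, while the state machine pays only for its transitions.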

Direct Integrations

If a Lambda function is merely performing basic integrations with other AWS services, it might be unnecessary and could be substituted with a lower-cost direct integration. For example, if you’re utilizing API Gateway alongside a Lambda function to read from a DynamoDB table:

This setup could be replaced with a direct integration, eliminating the need for the Lambda function.

API Gateway allows for transformations to deliver the output response in a format that clients expect, thereby avoiding the need for a Lambda function to perform this transformation. Detailed guidance on setting up an API Gateway with an AWS service integration can be found in the documentation.
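As an illustration, an integration response mapping template for a hypothetical API Gateway-to-DynamoDB GetItem integration might look like the following. The attribute names (`id`, `tire_pressure`) and output fields are assumptions for this example; the template's job is to flatten DynamoDB's typed JSON into the shape a client expects, without a Lambda function in the middle.

```
#set($item = $input.path('$.Item'))
{
  "id": "$item.id.S",
  "tirePressure": $item.tire_pressure.N
}
```

The `$input.path()` helper is part of API Gateway's mapping template language (VTL); the `.S` and `.N` accessors unwrap DynamoDB's string and number attribute types.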

You can also benefit from direct integration when employing Step Functions. Step Functions currently supports over 200 AWS services and 9,000 API actions, providing greater flexibility for direct service integration. In many cases this eliminates the need for a proxy Lambda function, simplifying your workflows and potentially lowering compute expenses.
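For example, a Task state can call DynamoDB directly through the AWS SDK service integration, with no Lambda function in between. The table and key names below are hypothetical:

```json
{
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:dynamodb:getItem",
  "Parameters": {
    "TableName": "vehicle-telemetry",
    "Key": {"id": {"S.$": "$.vehicleId"}}
  },
  "End": true
}
```

The `.$` suffix on `S.$` tells Step Functions to resolve the value from the workflow's input path at runtime rather than treating it as a literal.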

Reducing Logging Output

Lambda automatically retains logs generated by the function code through Amazon CloudWatch Logs. This feature is beneficial for monitoring application activities in near real-time. However, CloudWatch Logs incurs charges based on the total data ingested each month. Thus, limiting output to only essential information can help minimize costs.

When deploying workloads to production, it’s crucial to reassess the logging level of your application. For example, while debug logs might be useful in pre-production environments for additional insights, consider disabling them in production and using a logging library like the Lambda Powertools Python Logger. This library allows you to set a minimum logging level through an environment variable, facilitating configuration outside the function code.

Structuring your log format establishes a consistent set of information through a defined schema, instead of allowing variable formats or excessive text volumes. Establishing structures like error codes and accompanying metrics reduces repetitive text in logs, enhances the ability to filter for specific error types, and minimizes the risk of typos in log messages.
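The two ideas above, an externally configured log level and a fixed log schema, can be sketched with the Python standard library alone. The Powertools Logger provides both out of the box; this stdlib approximation is only to show the mechanics, and the service name and `error_code` field are assumptions for the example.

```python
import json
import logging
import os

class JsonFormatter(logging.Formatter):
    """Emit each record as a single JSON object with a fixed schema."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": "order-service",  # hypothetical service name
            "error_code": getattr(record, "error_code", None),
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("order-service")
logger.addHandler(handler)
# Minimum level comes from configuration, not code: debug stays off in production
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))

logger.debug("verbose detail")  # suppressed at INFO level, never ingested
logger.error("payment failed", extra={"error_code": "E1102"})
```

Because the level is read from an environment variable, you can raise it in production and lower it in pre-production without redeploying code, and the structured `error_code` field makes CloudWatch Logs Insights queries far cheaper to write than free-text grepping.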

Utilizing Cost-Effective Storage for Logs

Once CloudWatch Logs ingests data, it is retained indefinitely by default, with a monthly per-GB storage charge. As log data ages, its value often diminishes and access becomes ad hoc, yet the CloudWatch Logs storage costs continue to accrue.

To mitigate this, implement retention policies on your CloudWatch Logs log groups to automatically delete older log data. This policy will apply to both existing and future log data.
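Setting a retention policy is a one-line operation. The log group name and 30-day window below are examples; choose a retention period that matches your operational and compliance needs:

```shell
# Keep only 30 days of this function's logs; applies to existing and future data
aws logs put-retention-policy \
    --log-group-name /aws/lambda/tire-pressure-evaluator \
    --retention-in-days 30
```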

Certain application logs may need to be retained for months or even years for compliance or regulatory reasons. Instead of storing these logs in CloudWatch Logs, consider exporting them to Amazon S3. This allows you to leverage lower-cost storage object classes while accommodating anticipated access patterns for data.

Conclusion

Cost optimization is crucial for creating well-architected solutions, and this principle holds true for serverless applications. This blog series introduces best practice methodologies to help lower your Lambda expenses.

If you’re already operating AWS Lambda applications in production, some strategies will be more straightforward to implement than others. For instance, purchasing Savings Plans can be done without code or architectural changes, while eliminating idle wait times will necessitate new services and code amendments. Assess which technique aligns with your workload in a development environment before rolling out changes to production.

If you are still in the design and development phases, use this blog series as a reference to incorporate these strategies into your approach.
