aws-lambda-power-tuning
autocannon
| | aws-lambda-power-tuning | autocannon |
|---|---|---|
| Mentions | 36 | 14 |
| Stars | 5,145 | 7,574 |
| Latest version | - | - |
| Activity | 8.7 | 6.5 |
| Last commit | 4 days ago | 9 days ago |
| Language | JavaScript | JavaScript |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aws-lambda-power-tuning
-
Optimizing Costs in the Cloud: Embracing a FinOps Mindset
Sometimes, changing services (such as opting for HTTP API over REST API Gateway), leveraging tools like AWS Lambda Power Tuning to optimize functions, or reducing CloudWatch log retention and adjusting log levels can lead to significant savings.
-
AWS SnapStart - Part 13 Measuring warm starts with Java 21 using different Lambda memory settings
Without SnapStart enabled for the Lambda function, we observed that increasing memory reduces the warm execution time for our use case, especially for percentiles above p90. Since adding more memory to the Lambda function also increases cost, the sweet spot between cold start time, warm start time, and cost lies somewhere between a 768 and 1204 MB memory setting for our use case. You can use AWS Lambda Power Tuning for very nice visualisations.
-
How to enhance your Lambda function performance with memory configuration?
The AWS Lambda Power Tuning tool helps optimise Lambda performance and cost in a data-driven manner. Let's try it out:
-
Controlling Cloud Costs: Strategies for keeping on top of your AWS cloud spend
For Lambda, a very useful tool to help optimise is the AWS Lambda Power Tuning tool, released by Alex Casalboni, Developer Advocate at AWS: https://github.com/alexcasalboni/aws-lambda-power-tuning
-
Best way to decrease latency (API <-> Lambda <-> Dynamodb)
Lambda memory affects not only CPU performance and host execution priority, but also network performance. Be wary, though, as the price scales linearly. You can use a tool like Lambda Power Tuning to find the sweet spot for your application: https://github.com/alexcasalboni/aws-lambda-power-tuning
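The linear price scaling mentioned above can be sketched with a quick calculation. This is illustrative only: the per-GB-second rate below is an example figure, so check current AWS pricing for real numbers. The point is that if doubling memory roughly halves duration, cost stays about flat while latency improves:

```javascript
// Illustrative only: Lambda bills per GB-second, so the per-millisecond
// price doubles when memory doubles. Rate below is an example, not current pricing.
const PRICE_PER_GB_SECOND = 0.0000166667;

function invocationCost(memoryMb, durationMs) {
  const gbSeconds = (memoryMb / 1024) * (durationMs / 1000);
  return gbSeconds * PRICE_PER_GB_SECOND;
}

// 512 MB for 200 ms vs. 1024 MB for 100 ms: same GB-seconds, same cost,
// but the second configuration finishes twice as fast.
const slow = invocationCost(512, 200);
const fast = invocationCost(1024, 100);
console.log(slow, fast); // equal
```

This is exactly the trade-off Lambda Power Tuning explores empirically across many memory settings.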
-
How to optimize your lambda functions with AWS Lambda power tuning
This tool, which is open source and available here, takes the form of a Step Function that is deployed on your AWS account. The purpose of this Step Function is to run your lambda with different memory configurations several times and output a comparison in the form of a graph (or JSON) to try to find the optimal balance between cost and execution time. There are three possible optimization modes: cost, execution time, or a "balanced" mode where it tries to find a balance between the two.
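As a sketch of what driving that Step Function looks like, an execution input along these lines (field names follow the project's README; the ARN, memory values, and iteration count are placeholders) selects the memory configurations to test and the optimization strategy:

```json
{
  "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
  "powerValues": [128, 256, 512, 1024, 1536, 3008],
  "num": 50,
  "payload": {},
  "strategy": "balanced"
}
```

Here `strategy` can be `"cost"`, `"speed"`, or `"balanced"`, matching the three optimization modes described above.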
-
Developers Journey to AWS Lambda
The AWS Documentation's Memory and Computing Power page is a good starting point. To avoid configuring it manually, it's worth checking out AWS Lambda Power Tuning, which will help you find the sweet spot.
-
Guide to Serverless & Lambda Testing - Part 2 - Testing Pyramid
Utilizing tools such as AWS X-Ray, AWS Lambda Power Tuning, and AWS Lambda Powertools tracer utility is recommended. Read more about it here.
-
Tune your Lambda functions
Install the AWS SAM CLI in your local environment, then configure your AWS credentials (requires the AWS CLI to be installed): `$ aws configure`. Clone this git repository: `$ git clone https://github.com/alexcasalboni/aws-lambda-power-tuning.git`. Build the Lambda layer and any other dependencies (Docker is required): `$ cd ./aws-lambda-power-tuning && sam build -u`. Running `sam build -u` performs the build inside a Docker container image that provides an environment similar to the one your function runs in; SAM build in turn reads your AWS SAM template file for information about the Lambda functions and layers in this project. Once the build has completed you should see output stating "Build Succeeded"; if not, the error messages will provide guidance on what went wrong. Finally, deploy the application using SAM deploy's "guided" mode: `$ sam deploy -g`.
-
AWS Serverless Production Readiness Checklist
Use AWS Lambda Power Tuning to balance cost and performance.
autocannon
-
Optimize Your Node.js API with Clustering, Load Testing, and Advanced Caching
Autocannon GitHub Repository
-
Taming the dragon: using llnode to debug your Node.js application
To make things interesting, letβs send some requests to this server with autocannon:
-
Benchmarking Deno vs Node with GraphQL
Using autocannon, I did the following script to simulate 500 concurrent connections over 30 seconds:
-
A first look at Bun: is it really 3x faster than Node.js and Deno?
We then used autocannon to measure the throughput (requests per second) of each runtime server-rendering our React app.
-
Can we use Pydantic models (Basemodel) directly inside model.predict using FastAPI, if not why?
You could also use tools like autocannon to see how many requests/second you can achieve with various methods: https://github.com/mcollina/autocannon
-
How to Use Source Maps in TypeScript Lambda Functions (with Benchmarks)
I used autocannon to test the function at 100 concurrent executions for 30 seconds. I also used Lambda Power Tuning to find the ideal memory configuration, which proved to be 512MB. All the results are available.
-
Find bottlenecks in Node.js apps with Clinic Flame
Moreover, if your blocking issue appears only under heavy load, you can easily reproduce it using the very handy --autocannon CLI param (see it with clinic flame --help), which lets you specify autocannon options to generate HTTP load on your web service.
-
Created a URL shortener in Node (Fastify) and in Go (net/http). Why isn't Go faster?
I packaged them both with Docker and deployed them to an EC2 instance, each behind an Nginx reverse proxy I set up in docker-compose. I'm currently testing performance using autocannon from my laptop like this: `autocannon -a 5000 -w 10 URL` (5000 requests with 10 workers), and both apps complete in around 40 seconds. The EC2 instance is in Oregon and I'm testing from Toronto.
-
DB query performance options.
You can test it yourself using console.time(). You can also use autocannon to stress-test your HTTP server to see which option really performs best.
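The console.time() approach mentioned above can be sketched like this (the query function and data are made up for illustration):

```javascript
// Hypothetical query to time; stands in for a real DB call.
function queryA(rows) {
  return rows.filter((r) => r.active).map((r) => r.id);
}

// Synthetic dataset: 100,000 rows, half of them active.
const rows = Array.from({ length: 100000 }, (_, i) => ({ id: i, active: i % 2 === 0 }));

console.time('queryA');
const ids = queryA(rows);
console.timeEnd('queryA'); // prints 'queryA: <elapsed>ms'

console.log(ids.length); // 50000
```

console.time() is fine for rough comparisons in development; for end-to-end throughput under concurrency, a load generator like autocannon gives a more realistic picture.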
-
Experiments in concurrency 3: Event loops
When I test this with autocannon making three simultaneous requests (autocannon --connections 3 --amount 3 --timeout 10000 --no-progress http://localhost:5678/):
What are some alternatives?
json-schema-to-ts - Infer TS types from JSON schemas
node-clinic - Clinic.js diagnoses your Node.js performance issues
dynamodb-toolbox - A simple set of tools for working with Amazon DynamoDB and the DocumentClient
octane - Supercharge your Laravel application's performance.
middy - The stylish Node.js middleware engine for AWS Lambda
serverless-graphql - Serverless GraphQL Examples for AWS AppSync and Apollo
aws-sam-cli - CLI tool to build, test, debug, and deploy Serverless applications using AWS SAM
aws-graviton-getting-started - Helping developers to use AWS Graviton2 and Graviton3 processors which power the 6th and 7th generation of Amazon EC2 instances (C6g[d], M6g[d], R6g[d], T4g, X2gd, C6gn, I4g, Im4gn, Is4gen, G5g, C7g[d][n], M7g[d], R7g[d]).
lambda-sourcemaps
failure-lambda - Module for fault injection into AWS Lambda
Swoole - Coroutine-based concurrency library for PHP