AWS Lambda Cold Start Times

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • piku

    The tiniest PaaS you've ever seen. Piku allows you to do git push deployments to your own servers.

  • I recently discovered that uWSGI has a "cheap mode" that will hold the socket open but only actually spawn workers when a connection comes in (and kill them automatically after a timeout without any requests).

    Pertinent options: https://github.com/piku/piku/blob/master/piku.py#L908

    If you already have 24/7 compute instances going and can spare the CPU/RAM headroom, you can co-host your "lambdas" there, and make them even cheaper :)
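
    A minimal uWSGI config sketch of the "cheap mode" setup described above (the app module name is a placeholder; see the linked piku source for the exact options it uses):

    ```ini
    [uwsgi]
    http-socket = :8080
    module = myapp:app   ; hypothetical WSGI entry point
    processes = 2

    ; "cheap mode": bind the socket but don't spawn workers
    ; until the first connection arrives
    cheap = true

    ; tear workers down after 60 seconds without requests,
    ; dropping back into cheap mode
    idle = 60
    ```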

  • aws-lambda-java-libs

    Official mirror for interface definitions and helper classes for Java code running on the AWS Lambda platform.

  • > I feel this is a gross misrepresentation of AWS Lambdas.

    AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code as a ZIP file or container image, and Lambda automatically and precisely allocates compute execution power and runs your code based on the incoming request or event, for any scale of traffic. You can set up your code to automatically trigger from over 200 AWS services and SaaS applications or call it directly from any web or mobile app. You can write Lambda functions in your favorite language (Node.js, Python, Go, Java, and more) and use both serverless and container tools, such as AWS SAM or Docker CLI, to build, test, and deploy your functions.

    https://aws.amazon.com/lambda/

  • flyctl

    Command line tools for fly.io services

  • Have your runWarm handler sleep for 500 ms and invoke 50 of them concurrently. As long as none of the in-flight invocations has finished when you start a new one, you get a new instance; at least that's my understanding.

    You can get 50 hot instances that way, no?

    I'd rather scale per connection: have a Lambda instance handle 50 concurrent requests. Something like https://fly.io but cheaper.
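
    The warming trick above can be sketched like this. `invoke` is a placeholder for whatever actually calls the function (e.g. the AWS SDK's Lambda invoke API); the key point is that all N calls are started before any of them is awaited, so they overlap and force N separate execution environments:

    ```javascript
    // Fire n invocations concurrently so none finishes before the
    // last one starts; each invocation should sleep long enough
    // (e.g. 500 ms) to guarantee the overlap.
    async function warmConcurrently(invoke, n) {
      const inFlight = [];
      for (let i = 0; i < n; i++) {
        // start immediately, don't await yet
        inFlight.push(invoke(i));
      }
      // now wait for all of them together
      return Promise.all(inFlight);
    }
    ```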

  • runtimelab

    This repo is for experimentation and exploring new ideas that may or may not make it into the main dotnet/runtime repo.

  • > “.Net has almost the same performance as Golang and Rust, but only after 1k iterations(after JIT).”

    Additions like async/await and nullable reference types make it easier to write bug-free code, which for a lot of folks is a better trade-off than “speaking to the hardware directly”.

    .NET also runs natively on a bunch of platforms now, including ARM.

    I’d call all of that continuous improvement. Perhaps even reinvention?

    [1] https://docs.microsoft.com/en-us/dotnet/core/deploying/ready...

    [2] https://github.com/dotnet/runtimelab/tree/feature/NativeAOT

    [3] https://www.techempower.com/benchmarks/#section=test&runid=5...

  • FrameworkBenchmarks

    Source for the TechEmpower Framework Benchmarks project

  • containers-roadmap

    This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).

  • The big issue with ECS+Fargate is the lack of CPU bursting capability. This means that if you want to run a small service that doesn't consume much, you have two options:

    1. Use a 0.25 vCPU + 0.5 GB RAM configuration and accept that your responses are now 4 times slower, because the 25% CPU share is strictly enforced.

    2. Use a 1 vCPU + 2 GB RAM configuration (costing 4 times more) even though it is very under-utilized.

    AWS is definitely in no rush to fix this; they keep saying they are aware of the issue and "thinking about it". No commitment or solution in sight, though:

    https://github.com/aws/containers-roadmap/issues/163
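
    Rough arithmetic for the two sizings above. The unit prices here are example figures (approximately us-east-1 on-demand Fargate rates at one point in time; check the current pricing page), but since both CPU and memory scale by 4x, the cost ratio is 4x regardless of the exact rates:

    ```javascript
    // Example Fargate on-demand rates (assumed, not authoritative)
    const PER_VCPU_HOUR = 0.04048; // USD per vCPU-hour
    const PER_GB_HOUR = 0.004445;  // USD per GB-hour
    const HOURS_PER_MONTH = 730;

    function monthlyCost(vcpu, gb) {
      return (vcpu * PER_VCPU_HOUR + gb * PER_GB_HOUR) * HOURS_PER_MONTH;
    }

    const small = monthlyCost(0.25, 0.5); // roughly $9/month at these rates
    const large = monthlyCost(1, 2);      // roughly $36/month, i.e. 4x
    ```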

  • aws-lambda-runtimes-performance

    AWS Lambda Performance comparison

  • > NodeJs is the slowest runtime, after some time it becomes better(JIT?) but still is not good enough. In addition, we see the NodeJS has the worst maximum duration.

    The conclusion drawn about NodeJS performance is flawed due to a quirk of the default settings in the AWS SDK for JS compared to other languages. By default, it opens and closes a TCP connection for each request. That overhead can be greater than the time actually needed to interact with DDB.

    I submitted a pull request to fix that configuration[0]. I expect the performance of NodeJS warm starts to look quite a bit better after that.

    [0]: https://github.com/Aleksandr-Filichkin/aws-lambda-runtimes-p...
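
    For reference, connection reuse in the AWS SDK for JavaScript v2 can be enabled by passing a keep-alive `https.Agent` to the client; this is a sketch of the general technique, not necessarily the exact change from the pull request:

    ```javascript
    const https = require("https");

    // Agent that keeps TCP connections open across requests,
    // instead of opening and closing one per request.
    const keepAliveAgent = new https.Agent({ keepAlive: true });

    // Options to pass when constructing a v2 SDK client, e.g.
    // new AWS.DynamoDB.DocumentClient(ddbOptions)
    // (aws-sdk v2 assumed to be installed).
    const ddbOptions = { httpOptions: { agent: keepAliveAgent } };

    // Alternatively, set AWS_NODEJS_CONNECTION_REUSE_ENABLED=1
    // in the environment before the SDK loads.
    ```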

