Ask HN: Do you load test your applications? If so, how?

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • artillery

    Load testing at cloud-scale. Serverless & distributed out-of-the-box. Load test with Playwright. Load test HTTP APIs, GraphQL, WebSocket, and more. Use any Node.js module. Never fail to scale with Artillery!

  • I've used https://loader.io and https://www.artillery.io for performance testing. Both are pretty good.

  • locust

    Write scalable load tests in plain Python 🚗💨

  • I’ve used Locust (https://locust.io/) which makes it easy to describe usage patterns and then spin up an arbitrary number of “users”. It provides a real-time web dashboard of the current state including counts of successful & failed requests.
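
    As a point of reference, a minimal locustfile for that workflow might look like the sketch below; the host and endpoints are placeholders, not something taken from the thread:

      # locustfile.py -- run with: locust -f locustfile.py --host https://example.com
      # Each simulated "user" waits 1-5 seconds between tasks and hits two
      # placeholder endpoints; the front page is weighted 3x.
      from locust import HttpUser, task, between

      class WebsiteUser(HttpUser):
          wait_time = between(1, 5)  # per-user think time, in seconds

          @task(3)
          def index(self):
              self.client.get("/")

          @task
          def profile(self):
              self.client.get("/profile")

    Starting Locust with this file serves the real-time web dashboard (on port 8089 by default), where you choose the number of users and the spawn rate and watch the success/failure counts mentioned above.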

  • skywalking

    APM, Application Performance Monitoring System

  • I previously used https://k6.io/ in lieu of better options. It was great for getting up and running reasonably quickly, but it also has a somewhat unusual JS runtime, so the error messages weren't always intuitive and debugging was a pain.

    Then again, you could also use something like Apache JMeter (https://jmeter.apache.org/), Gatling (https://gatling.io/open-source/) or any other solution out there, whichever is better suited to the on-prem/cloud use case.

    That said, when time was short and I couldn't figure out how to test WebSocket connections or which resources the test should load, I cooked up a container image with Selenium (https://www.selenium.dev/) running Firefox/Chrome as a fully automated browser, giving 1:1 behavior with how real users interact with the site (a rough sketch of the idea follows this comment).

    That was a horrible decision from a memory usage point of view, but an excellent one from time-saving and data quality perspectives, because the behavior was just like having 100-1000 users clicking through the site.

    Apart from that, you probably want something to aggregate the app's performance data, be it Apache Skywalking (https://skywalking.apache.org/) or even Sentry (https://sentry.io/welcome/). Then you can ramp up the tests slowly over time, in terms of how many parallel instances are generating load, and see how the app reacts: memory usage, CPU load, how many DB queries are executed, and so on.
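
    For illustration, the browser-per-user approach above could be sketched roughly like this with Selenium; the URL, the "Login" link and the session length are placeholders, and the headless flag assumes a reasonably recent Chrome:

      # One simulated user: a headless Chrome instance clicking through the site.
      # Scale out by running many copies of this, e.g. one per container.
      import time
      from selenium import webdriver
      from selenium.webdriver.common.by import By

      options = webdriver.ChromeOptions()
      options.add_argument("--headless=new")  # headless mode on recent Chrome
      driver = webdriver.Chrome(options=options)
      try:
          for _ in range(100):                                    # one browsing "session"
              driver.get("https://example.com/")                  # placeholder URL
              driver.find_element(By.LINK_TEXT, "Login").click()  # placeholder element
              time.sleep(2)                                       # crude think time
      finally:
          driver.quit()

    The memory cost mentioned above comes from running hundreds of these browser instances in parallel, but each one loads exactly the resources a real user would.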

  • wrk2

    A constant throughput, correct latency recording variant of wrk

  • I use https://github.com/giltene/wrk2 pretty regularly.

    It has decent Lua hooks to customize behavior, but I use it in the dumbest way possible: hammering a server at a fixed rate with the same payload over and over (a toy sketch of that pattern follows below).

    I run it by hand after a big change to the server to make sure nothing has obviously regressed. I used to run it nightly in a Jenkins job, but 99% of the time no one looked at the results. It was still nice for seeing whether assumptions about how much load a single node could handle no longer held.
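
    To make that concrete, here is a toy open-model load generator in Python (not wrk2 itself, and far cruder): it schedules requests at a constant rate and measures latency from the scheduled send time rather than the actual one, which is roughly what wrk2's "correct latency recording" refers to. The target URL, rate and duration are placeholders:

      # Toy constant-throughput load generator with latency recording.
      import threading, time, urllib.request

      TARGET = "http://localhost:8080/"  # placeholder target
      RATE, DURATION = 50, 10            # requests per second, seconds

      latencies, lock = [], threading.Lock()

      def fire(scheduled):
          try:
              urllib.request.urlopen(TARGET, timeout=5).read()
              ok = True
          except Exception:
              ok = False
          # Measure from when the request *should* have been sent, not when it was.
          with lock:
              latencies.append((time.monotonic() - scheduled, ok))

      start = time.monotonic()
      threads = []
      for i in range(RATE * DURATION):
          scheduled = start + i / RATE
          time.sleep(max(0.0, scheduled - time.monotonic()))
          t = threading.Thread(target=fire, args=(scheduled,))
          t.start()
          threads.append(t)
      for t in threads:
          t.join()

      ok_lat = sorted(lat for lat, ok in latencies if ok)
      if ok_lat:
          p99 = ok_lat[min(int(len(ok_lat) * 0.99), len(ok_lat) - 1)]
          print(f"{len(ok_lat)}/{RATE * DURATION} ok, p99 {p99 * 1000:.1f} ms")

    wrk2 does all of this properly (with HdrHistogram-based recording and much higher throughput); the sketch is only meant to show the shape of the pattern.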
