Cloud-scale performance & reliability testing for developers & SREs
Scalable user load testing tool written in Python
I’ve used Locust (https://locust.io/), which makes it easy to describe usage patterns in code and then spin up an arbitrary number of simulated “users”. It provides a real-time web dashboard of the current state, including counts of successful and failed requests.
I previously used https://k6.io/ in lieu of better options. It was great for getting up and running reasonably quickly, but its somewhat odd embedded JS runtime meant the error messages weren't always intuitive, so debugging was a pain.
Then again, you could also use something like Apache JMeter (https://jmeter.apache.org/), Gatling (https://gatling.io/open-source/), or any other solution out there, whichever is better suited to the on-prem/cloud use case.
That said, when time was limited and I didn't have the time to figure out how to test WebSocket connections and which resources the test should load, I cooked up a container image with Selenium (https://www.selenium.dev/) driving Firefox/Chrome as a fully automated browser, for 1:1 behavior with how real users interact with the site.
That was a horrible decision from a memory-usage point of view, but an excellent one from the time-saving and data-quality perspectives, because the behavior was exactly that of 100-1000 users clicking through the site.
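One synthetic user in that setup might look roughly like this sketch (it assumes the `selenium` package and a matching chromedriver; `simulate_user` and the click-through logic are illustrative, not from the original setup). You'd then run many copies, e.g. one per container:

```python
def simulate_user(base_url):
    """Drive one real headless browser through the site, like one live user.

    The browser fetches everything a human's would: HTML, CSS, JS, images,
    and any WebSocket connections the page opens.
    """
    # Imported lazily so this sketch can be loaded without selenium installed.
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(base_url)
        # Collect link targets first: elements go stale after navigation.
        hrefs = [a.get_attribute("href")
                 for a in driver.find_elements("css selector", "a")]
        for href in hrefs[:5]:
            if href:
                driver.get(href)  # click through a few pages, like a human
    finally:
        driver.quit()
```

The memory cost mentioned above comes from each user being a full browser process, which is exactly what also makes the traffic realistic.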
Apart from that, you probably want something to aggregate the app's performance data, be it something like Apache Skywalking (https://skywalking.apache.org/) or even Sentry (https://sentry.io/welcome/). Then you can ramp up the tests slowly over time, increasing how many parallel instances are generating load, and see how the app reacts - the memory usage, CPU load, how many DB queries are issued, etc.
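The ramp-up idea can be sketched with nothing but the standard library: stages that double the number of parallel load generators while you watch throughput (and, on the real system, memory/CPU/DB metrics). The throwaway local server and the stage sizes are placeholders.

```python
import concurrent.futures
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


# Tiny local stand-in for the app under test, so the sketch is self-contained.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"


def worker(n):
    """One load-generating instance: issue n requests, count successes."""
    ok = 0
    for _ in range(n):
        with urllib.request.urlopen(url) as resp:
            ok += resp.status == 200
    return ok


results = {}
for workers in (1, 2, 4):  # each stage doubles the parallel instances
    t0 = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        done = sum(pool.map(worker, [25] * workers))
    results[workers] = done / (time.perf_counter() - t0)
    print(f"{workers} parallel instances -> {results[workers]:.0f} req/s")

server.shutdown()
```

Against a real deployment you'd replace the local server with your app's URL and read the resource metrics from Skywalking/Sentry at each stage.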
A constant throughput, correct latency recording variant of wrk
I use https://github.com/giltene/wrk2 pretty regularly.
It has decent Lua hooks to customize behavior, but I use it in the dumbest way possible: hammering a server at a fixed rate with the same payload over and over.
I run it by hand after a big change to the server to make sure nothing obviously regressed. I used to run it nightly in a Jenkins job, but 99% of the time no one looked at the results. It was nice for spotting when assumptions about the load a single node could handle no longer held.
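The core of wrk2's fixed-rate approach can be sketched in plain Python: schedule each request at a constant interval and measure latency from the *scheduled* send time, not the actual one, so a slow response that delays later requests counts against the server (wrk2's fix for coordinated omission). The local server, rate, and duration here are placeholders.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


# Throwaway local server standing in for the service under test.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

rate = 50      # requests per second (placeholder)
duration = 2   # seconds (placeholder)
interval = 1.0 / rate

start = time.perf_counter()
latencies = []
for i in range(rate * duration):
    scheduled = start + i * interval  # when this request *should* go out
    now = time.perf_counter()
    if now < scheduled:
        time.sleep(scheduled - now)
    urllib.request.urlopen(url).read()
    # Latency from the scheduled time: queueing delay counts too.
    latencies.append(time.perf_counter() - scheduled)

latencies.sort()
p99 = latencies[int(len(latencies) * 0.99)]
print(f"sent {len(latencies)} requests, p99 latency {p99 * 1000:.1f} ms")
server.shutdown()
```

In practice you'd just use wrk2 itself, which does this with a proper HDR histogram and far higher throughput; the sketch only shows why fixed-rate scheduling gives honest latency numbers.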
CloudRun min max
1 project | reddit.com/r/googlecloud | 9 Sep 2022
API Performance Testing Tools: JMeter, Taurus, and BlazeMeter
1 project | reddit.com/r/programming | 16 Aug 2022
How does one benchmark performance for async functions?
1 project | reddit.com/r/django | 10 Jun 2022
Is there a way to test the scalability of a web server (or any type of server)?
2 projects | reddit.com/r/learnprogramming | 9 May 2022
Why am I getting a 403 error when running Locust?
1 project | reddit.com/r/codehunter | 3 May 2022