

-
Just use Testcontainers (https://testcontainers.com/). We use it for quickly spinning up a temporary postgres instance to run our db tests against.
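For anyone who hasn't tried it, here's roughly what that looks like with testcontainers-node, as a minimal sketch (it assumes the @testcontainers/postgresql module, the pg client, and Jest-style hooks; adapt to your own setup):

    import { PostgreSqlContainer, StartedPostgreSqlContainer } from "@testcontainers/postgresql";
    import { Client } from "pg";

    let container: StartedPostgreSqlContainer;
    let client: Client;

    beforeAll(async () => {
      // Start a throwaway Postgres container just for this test run.
      container = await new PostgreSqlContainer("postgres:16").start();
      client = new Client({ connectionString: container.getConnectionUri() });
      await client.connect();
    });

    afterAll(async () => {
      await client.end();
      await container.stop();
    });

    test("runs against a real postgres", async () => {
      const result = await client.query("SELECT 1 AS one");
      expect(result.rows[0].one).toBe(1);
    });

The container disappears when the suite finishes, so there is no state to clean up between runs.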
-
I thought this was common knowledge and that it became even easier after Docker became a thing?
Mocks are wishful thinking incarnate most of the time, though here and there they are absolutely needed (like 3rd-party APIs without sandbox environments, or quite expensive APIs, or, most of the time, both).
Just pick a task runner -- I use just[0] -- and make a task that brings up both Docker and your containers, then run your test task, done (rough sketch below). Sure it's a bit fiddly the first time around, but I've seen juniors get past that in a day at most, and then your tests actually work with the real world 99% of the time.
Mocks in general are rarely worth it; the DB ones, 10x so.
[0] https://github.com/casey/just
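Rough sketch of what that can look like as a justfile (recipe names, port, and credentials are placeholders, and `npm test` stands in for whatever your test command is):

    # Start a throwaway postgres for the test run.
    db-up:
        docker run -d --rm --name test-pg -p 5433:5432 -e POSTGRES_PASSWORD=test postgres:16

    # Stop the container (--rm above also removes it once stopped).
    db-down:
        docker stop test-pg

    # Bring the DB up, run the suite against it, then tear it down.
    # In practice you'd also wait for postgres to accept connections first,
    # and if the suite fails you may need to run `just db-down` yourself.
    test: db-up
        DATABASE_URL=postgres://postgres:test@localhost:5433/postgres npm test
        just db-down

The same recipes work locally and in CI, so the suite always hits a real Postgres.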
-
Does something like PGlite work for your use case? https://pglite.dev/
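For context, PGlite is Postgres compiled to WASM that runs in-process, so there is no container or daemon to manage. A minimal sketch, assuming the @electric-sql/pglite package:

    import { PGlite } from "@electric-sql/pglite";

    // Ephemeral, in-memory Postgres; nothing external to start or tear down.
    const db = new PGlite();

    await db.exec("CREATE TABLE users (id serial PRIMARY KEY, name text)");
    await db.query("INSERT INTO users (name) VALUES ($1)", ["alice"]);

    const result = await db.query("SELECT name FROM users");
    console.log(result.rows); // [ { name: 'alice' } ]

The trade-off versus a containerised Postgres is that it is an embedded, single-connection instance, so it may not exercise everything (concurrency, networking, some extensions) that your production database does.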
-
Django is a Python web framework with a built-in ORM that works really well and has a large community.
See https://www.djangoproject.com/.
If I have to do anything CRUD-like, I'll use Django. For reporting apps, I prefer native SQL.
-
Here's what I've found — https://github.com/peterldowns/pgtestdb?tab=readme-ov-file#h...
If you come up with any better options please let me know so I can update this readme!
-
I've taken a stab at making a solution for it via https://github.com/data-catering/data-caterer. It focuses on making integration tests easier by generating data across batch and real-time data sources, whilst maintaining any relationships across the datasets. You can set it to automatically pick up the schema definition from the metadata in your database and generate data for it. Once your app/job/data consumer(s) have used the data, you can run data validations to ensure everything ran as expected. Then you can clean up the data at the end (including data pushed to downstream data sources) if run in a shared test environment or locally. All of this runs within 60 seconds.
It also gives you the option of running other types of tests, such as load/performance/stress testing, by generating larger amounts of data.
-
5. Once ExUnit is finished, delete all shards.
Both projects follow a similar approach (I wrote it first in FeebDB and copied into HackerExperience, which has some sections commented out -- I need to clean up this part of the codebase).
For both projects, you will find steps 1 and 5 in `test/support/db.ex`, step 2 in `test/support/db/prop.ex`, and steps 3 and 4 in `test/support/case/db.ex`.
FeebDB: https://github.com/renatomassaro/FeebDB/
HackerExperience: https://github.com/HackerExperience/HackerExperience/
Email is in profile in case you have follow-up questions/comments :)