At work, we recently replaced a large part of a custom (mostly-)web backend with PostgREST, and that's quite a relief: considerably less code to maintain in that project now, and it was rather awkward code. Something akin to PostgREST's "Embedding with Top-level Filtering" [1] had to be provided for all the tables, with an OpenAPI schema and a typed API (Haskell + Servant); I avoided writing it all down manually, but at the cost of poking at framework internals, and maintainability suffered. It was particularly annoying that the code didn't really do anything useful: it just stood between the database and an HTTP client and mimicked the database anyway. Whenever a change had to be introduced, it went into the database, the backend, and the frontend simultaneously, so the layer wasn't even useful for any kind of compatibility.
Now PostgREST handles all that, and only a few less trivial endpoints are handled by a custom backend (including streaming, which I'm considering replacing with postgres-websockets [2] at some point).
During the switch to PostgREST, the minor issues encountered were with inherited tables (a bunch of computed/virtual columns [3] had to be set up in order to "embed" those) and with a bug in filtering over such relations (it turned out to be an already-fixed regression [4], so an update helped). A couple of helper stored procedures (called via /rpc/) were also added for updates across multiple tables at once (many-to-many relationships, so entities can be edited along with their relationships in fewer requests), something the old custom backend didn't even have. The security policies were set from the beginning and the frontend was rewritten (which made it possible to finally switch without adding more work), so all that was left was to clean up the backend. (A rough sketch of the computed-column and /rpc/ helpers is below, after the links.)
We're not using views since, as mentioned above, database changes usually correspond to frontend changes, and the API doesn't have to be that stable yet.
Happy with it so far.
[1] https://postgrest.org/en/stable/api.html#embedding-with-top-...
[2] https://github.com/diogob/postgres-websockets
[3] https://postgrest.org/en/stable/api.html#computed-virtual-co...
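For illustration, a minimal sketch of the computed-relationship and /rpc/ helpers described above, with made-up table names (parent/detail, item/tag); the real schema differs:
  -- Computed ("virtual") relationship so PostgREST can embed rows of a related
  -- (here, hypothetically inherited) table: a function taking the parent row type.
  CREATE FUNCTION details(p parent) RETURNS SETOF detail AS $$
    SELECT * FROM detail WHERE detail.parent_id = p.id
  $$ LANGUAGE sql STABLE;

  -- Helper exposed at POST /rpc/set_item_tags: replaces an item's tags
  -- (a many-to-many relationship) in a single request.
  CREATE FUNCTION set_item_tags(item_id int, tag_ids int[]) RETURNS void AS $$
    DELETE FROM item_tag WHERE item_tag.item_id = set_item_tags.item_id;
    INSERT INTO item_tag (item_id, tag_id)
      SELECT set_item_tags.item_id, t FROM unnest(tag_ids) AS t;
  $$ LANGUAGE sql;
With something like this, the related rows can be requested through PostgREST's embedding syntax, and the whole tag update goes through one /rpc/set_item_tags call instead of several row-level requests.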
-
Absolutely love pocketbase.
And it's interesting to note that using stored procedures within pocketbase isn't well supported, because they do a dry-run insert to check against constraints and then delete it if the constraint fails.
https://github.com/pocketbase/pocketbase/discussions/650#dis...
I mention this because the OP inquired about stored procedures.
Still, pocketbase is so amazing.
-
react-admin
A frontend Framework for building B2B applications running in the browser on top of REST/GraphQL APIs, using ES6, React and Material Design
1 year ago: https://news.ycombinator.com/item?id=29389576
It's an insanely cool project, but I've yet to find a truly fitting use case for it. In theory, PostgREST combined with something like https://marmelab.com/react-admin/ should give you a free back-end and admin panel for most projects, but in practice, I've always found that all kinds of 'small details' won't be quite right out of the box, and that customization is really hard...
-
graphile-engine
Monorepo home of graphile-build, graphile-build-pg, graphile-utils, postgraphile-core and graphql-parse-resolve-info. Build a high-performance easily-extensible GraphQL schema by combining plugins!
-
postgraphile
Execute one command (or mount one Node.js middleware) and get an instant high-performance GraphQL API for your PostgreSQL database!
I was about to say “but this one is!” and realized I had confused PostgREST with PostGraphile. If you’re interested in GraphQL, you can check out PostGraphile here: https://github.com/graphile/postgraphile
-
Directus
The Modern Data Stack 🐰 — Directus is an instant REST+GraphQL API and intuitive no-code data collaboration app for any SQL database.
Directus is actually one. You get the front end, which is basically an admin UI, but you also get a full-blown, fully featured REST API.
-
We integrated GoTrue with PostgREST at Supabase and they work beautifully together: https://github.com/supabase/gotrue (forked from Netlify)
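For anyone wondering how the two fit together: the glue is just database roles, since PostgREST switches to the role named in the verified JWT's role claim, and GoTrue is what issues those JWTs. A minimal sketch with illustrative role names and a hypothetical todos table (authenticator being the connection role, following the PostgREST docs' convention):
  -- Connection role and the roles PostgREST may switch to per request.
  CREATE ROLE authenticator NOINHERIT LOGIN;
  CREATE ROLE web_anon NOLOGIN;
  CREATE ROLE authenticated NOLOGIN;
  GRANT web_anon, authenticated TO authenticator;
  -- Anonymous requests may only read; JWTs with role = 'authenticated' may write.
  GRANT SELECT ON public.todos TO web_anon;
  GRANT SELECT, INSERT, UPDATE, DELETE ON public.todos TO authenticated;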
-
You can use a library like https://github.com/agentultra/postgresql-replicant or similar to hook in a control plane and use PostgREST as the data plane.
Your business logic works on the event stream that comes from the WAL.
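If you want to poke at that event stream without committing to a particular library, the underlying mechanism is plain Postgres logical decoding; a rough sketch, assuming wal_level = logical and the wal2json output plugin are available on the server:
  -- Create a logical replication slot and peek at the change events
  -- decoded from the WAL (peek does not consume them).
  SELECT * FROM pg_create_logical_replication_slot('business_logic', 'wal2json');
  SELECT * FROM pg_logical_slot_peek_changes('business_logic', NULL, NULL);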
-
I think the recommended way to use PostgREST is to put a layer of views and, optionally, stored functions on top of your schema to decouple your API from it. Take a look at this PostgREST starter kit [1], which uses a separate API schema for this purpose.
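That pattern usually looks something like the sketch below, with a hypothetical private.todos table and the web_anon role from earlier; PostgREST is then pointed at the api schema via its db-schemas setting:
  -- A dedicated schema exposed to PostgREST, decoupled from the real tables.
  CREATE SCHEMA api;
  CREATE VIEW api.todos AS
    SELECT id, title, done FROM private.todos;
  -- Writes can be funneled through functions instead of exposing tables directly.
  CREATE FUNCTION api.complete_todo(todo_id int) RETURNS void AS $$
    UPDATE private.todos SET done = true WHERE id = todo_id
  $$ LANGUAGE sql;
  GRANT USAGE ON SCHEMA api TO web_anon;
  GRANT SELECT ON api.todos TO web_anon;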
-
> why not just accept SQL and cut out all the unnecessary mapping?
You might be interested in what we're building: Seafowl, a database designed for running analytical SQL queries straight from the user's browser, with HTTP CDN-friendly caching [0]. It's a second iteration of the Splitgraph DDN [1] which we built on top of PostgreSQL (Seafowl is much faster for this use case, since it's based on Apache DataFusion + Parquet).
The tradeoff between letting the client run arbitrary SQL and exposing a limited API is that PostgREST-style queries have fairly predictable, low overhead, but they aren't as powerful as fully fledged SQL with aggregations, joins, window functions and CTEs, which have their uses in interactive dashboards to reduce the amount of data that has to be processed on the client.
There's also ROAPI [2], a read-only SQL API that you can deploy in front of a database or other data source (though when a database is the data source, it only works for tables that fit in memory).
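To make that tradeoff concrete: a query like the one below (hypothetical orders table) is a one-liner in full SQL but has no direct equivalent in PostgREST-style filter parameters, so it would otherwise end up on the client or in a dedicated view:
  -- CTE + aggregation + window function: trivial in SQL, awkward as URL filters.
  WITH daily AS (
    SELECT date_trunc('day', created_at) AS day, count(*) AS orders
    FROM orders
    GROUP BY 1
  )
  SELECT day,
         orders,
         avg(orders) OVER (ORDER BY day ROWS BETWEEN 6 PRECEDING AND CURRENT ROW)
           AS trailing_week_avg
  FROM daily
  ORDER BY day;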
-
https://github.com/PostgREST/postgrest/tree/main/nix
I couldn't for the life of me figure out how to play with this. I have a hard time believing it beats "docker build .", but I might be missing something.
After I gave up on playing with it on macOS, I found: https://github.com/NixOS/nix/issues/458#issuecomment-1019743...
Over 13 steps to remove Nix from macOS, involving reboots, /etc/fstab, OS-level users, daemons, etc.
-
One of the projects that publishes all the French company address data (INSEE's open SIRENE dataset) uses PostgREST.
It's a fully fledged project and an interesting use of PostgREST.
-
pREST
PostgreSQL ➕ REST, low-code, simplify and accelerate development, ⚡ instant, realtime, high-performance on any Postgres application, existing or new
Pretty sure I started with this: https://github.com/prest/prest/blob/main/cmd/root.go
And from there you can execute your own command and add handlers or other things as you wish.
-
Soon (it's in the works now; it will be particularly interesting in the context of Cloudflare Workers talking to PlanetScale).