postgres-websockets vs roapi

| | postgres-websockets | roapi |
|---|---|---|
| Mentions | 1 | 24 |
| Stars | 338 | 3,080 |
| Growth | - | 0.8% |
| Activity | 7.2 | 6.9 |
| Latest commit | 6 months ago | about 1 month ago |
| Language | Haskell | Rust |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
postgres-websockets
- PostgREST – Serve a RESTful API from Any Postgres Database
At work, we recently replaced a large part of a custom (mostly-)web backend with PostgREST, and that's quite a relief: considerably less code to maintain in that project now, and rather awkward code at that. Something akin to PostgREST's "Embedding with Top-level Filtering" [1] had to be provided for all the tables (there's a request sketch after the links below), with an OpenAPI schema and a typed API (Haskell + Servant); I avoided writing it all down manually, but at the cost of poking at framework internals, and maintainability suffered. It was particularly annoying that the code didn't really do anything useful: it merely stood between a database and an HTTP client, and simply mimicked the database anyway. Whenever a change had to be introduced, it went into the database, the backend, and the frontend simultaneously, so the layer wasn't even useful for any kind of compatibility.
Now PostgREST handles all that, and only a few less trivial endpoints are handled by a custom backend (including streaming, which I'm considering replacing with postgres-websockets [2] at some point).
The minor issues encountered during the switch to PostgREST involved inherited tables (I had to set up a bunch of computed/virtual columns [3] in order to "embed" those) and a bug in filtering on such relations (it turned out to be an already-fixed regression [4], so an update helped). A couple of helper stored procedures (exposed via /rpc/) were also added for updates across multiple tables at once (many-to-many relationships, to edit entities along with their relationships using fewer requests), though the old custom backend didn't have that. The security policies were set from the beginning, and the frontend was rewritten (which finally allowed switching without adding more work), so all that was left was to clean up the backend.
We're not using views since, as mentioned above, database changes usually correspond to frontend changes, and the API doesn't have to be that stable yet.
Happy with it so far.
[1] https://postgrest.org/en/stable/api.html#embedding-with-top-...
[2] https://github.com/diogob/postgres-websockets
[3] https://postgrest.org/en/stable/api.html#computed-virtual-co...
[4] https://github.com/PostgREST/postgrest/issues/2530
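As a rough illustration of that embedding style, a PostgREST request against a hypothetical films/directors schema (table and column names are invented for the example) might look like the following; the `!inner` modifier is what makes the filter on the embedded resource also restrict the top-level rows:

```
GET /films?select=title,directors!inner(last_name)&directors.last_name=eq.Lynch
```

The equivalent hand-written endpoint, OpenAPI schema, and Servant types are exactly the boilerplate the comment describes getting rid of.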
roapi
- Full-fledged APIs for slowly moving datasets without writing code
- Tuql: Automatically create a GraphQL server from a SQLite database
If your use case is read-only, I suggest taking a look at roapi [1] (a config sketch follows the link). It supports multiple read frontends (GraphQL, SQL, REST) and many backends, like SQLite, JSON, Google Sheets, MySQL, etc.
[1] https://github.com/roapi/roapi
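For a sense of what that looks like, ROAPI is driven by a YAML config. The sketch below follows the layout in the project's README, but treat the exact keys, paths, and endpoints as assumptions to double-check:

```yaml
# roapi config sketch: one local CSV table and one remote JSON dataset
addr:
  http: "0.0.0.0:8080"
tables:
  - name: "uk_cities"
    uri: "./test_data/uk_cities_with_headers.csv"
  - name: "spacex_launches"
    uri: "https://api.spacexdata.com/v4/launches"
    option:
      format: "json"
```

Once running, the same tables should be reachable through each frontend, e.g. a `POST /api/sql` with a SQL string in the body, or the GraphQL and REST equivalents.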
- Who is using AXUM in production?
- Ask HN: Best way to provide access to large data sets
For smaller datasets, anywhere up to a few MB is reasonable with an API, but in theory historic data could run to several GB. I've not seen datasette go that high (IIRC it has a 1,000-row return limit by default).
That's what got me intrigued by Atlassian's offering, as data lakes tend to be something internal to a company, not something I've ever seen offered as an interaction point for users.
I've also tested out roapi [1], which is nice if the data is already in some structured format (Parquet/JSON).
[1] https://github.com/roapi/roapi
- "thread 'main' panicked at 'no CA certificates found'", when running application in docker container
https://github.com/roapi/roapi/issues/103
- Roapi 0.9 release adds support for all cloud storage providers
- SQLite-based databases on the Postgres protocol? Yes we can
Very cool and well-executed project. Love the sprinkle of Rust in all the other companion projects as well :)
The ROAPI (https://github.com/roapi/roapi) project I built also happens to support a similar feature set, i.e. exposing SQLite through a variety of remote query interfaces, including the Postgres wire protocol, REST APIs, and GraphQL.
- Using Rust to write a Data Pipeline. Thoughts. Musings.
- PostgREST – Serve a RESTful API from Any Postgres Database
> why not just accept SQL and cut out all the unnecessary mapping?
You might be interested in what we're building: Seafowl, a database designed for running analytical SQL queries straight from the user's browser, with CDN-friendly HTTP caching [0]. It's a second iteration of the Splitgraph DDN [1], which we built on top of PostgreSQL (Seafowl is much faster for this use case, since it's based on Apache DataFusion + Parquet).
The tradeoff of letting the client run arbitrary SQL versus a limited API is that PostgREST-style queries have fairly predictable, low overhead, but aren't as powerful as fully-fledged SQL with aggregations, joins, window functions, and CTEs (see the query sketch after the links), which have their uses in interactive dashboards to reduce the amount of data that has to be processed on the client.
There's also ROAPI [2], a read-only SQL API that you can deploy in front of a database or other data source (though when a database is the source, it only handles tables that fit in memory).
[0] https://seafowl.io/
[1] https://www.splitgraph.com/connect
[2] https://github.com/roapi/roapi
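To make that tradeoff concrete, here's the kind of query full SQL permits but a PostgREST-style filter/embed API can't express directly (a hypothetical `trips` table, invented for the example):

```sql
-- A CTE plus a window function: monthly revenue, ranked within each year.
WITH monthly AS (
    SELECT date_trunc('month', pickup_at) AS month,
           SUM(total_amount)              AS revenue
    FROM trips
    GROUP BY 1
)
SELECT month,
       revenue,
       RANK() OVER (PARTITION BY date_trunc('year', month)
                    ORDER BY revenue DESC) AS rank_in_year
FROM monthly
ORDER BY month;
```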
- Command-line data analytics made easy
It could be the NDJSON parser (DataFusion source: [0]) or a variety of other factors. Looking at the ROAPI release archive [1], it doesn't ship with the `columnq` binary from your comment, so it could also have something to do with compile-time flags.
FWIW, we use the Parquet format with DataFusion and get very good speeds, similar to DuckDB [2] (a harness sketch follows the links), e.g. 1.5 s to run a more complex aggregation query `SELECT date_trunc('month', tpep_pickup_datetime) AS month, COUNT(*) AS total_trips, SUM(total_amount) FROM tripdata GROUP BY 1 ORDER BY 1 ASC` on a 55M-row subset of NY Taxi trip data.
[0]: https://github.com/apache/arrow-datafusion/blob/master/dataf...
[1]: https://github.com/roapi/roapi/releases/tag/roapi-v0.8.0
[2]: https://observablehq.com/@seafowl/benchmarks
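For anyone wanting to reproduce that kind of timing, a minimal DataFusion-over-Parquet harness looks roughly like this. It's a sketch against the `datafusion` crate's public API (the file name and column names are assumptions), and it needs a tokio runtime:

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // Register a local Parquet file as a queryable table.
    let ctx = SessionContext::new();
    ctx.register_parquet("tripdata", "tripdata.parquet", ParquetReadOptions::default())
        .await?;

    // The same shape of aggregation as the query quoted above.
    let df = ctx
        .sql(
            "SELECT date_trunc('month', tpep_pickup_datetime) AS month, \
                    COUNT(*) AS total_trips, SUM(total_amount) \
             FROM tripdata GROUP BY 1 ORDER BY 1 ASC",
        )
        .await?;
    df.show().await?;
    Ok(())
}
```

Timing the `ctx.sql(...).await` plus collection step is what a comparison against DuckDB would measure; results will vary with row-group layout and compression settings of the Parquet file.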
What are some alternatives?
postgrest - REST API for any Postgres database
php-parquet - PHP implementation for reading and writing Apache Parquet files/streams. NOTICE: Please migrate to https://github.com/codename-hub/php-parquet.
graphql-api - Write type-safe GraphQL services in Haskell
qframe - Immutable data frame for Go
graphql - Haskell GraphQL implementation
materialize - The data warehouse for operational workloads.
gc-monitoring-wai - a wai application to show `GHC.Stats.GCStats`
delta-rs - A native Rust library for Delta Lake, with bindings into Python
raml - RESTful API Modeling Language (RAML) library for Haskell
fluvio - Lean and mean distributed stream processing system written in rust and web assembly.
simpleconfig
datasette - An open source multi-tool for exploring and publishing data