trdsql vs roapi

| | trdsql | roapi |
|---|---|---|
| Mentions | 9 | 24 |
| Stars | 1,881 | 3,080 |
| Growth | - | 2.0% |
| Activity | 8.3 | 6.9 |
| Latest commit | 8 days ago | about 1 month ago |
| Language | Go | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trdsql
- sqly - execute SQL against CSV / JSON with shell
Apparently, many people had the same thought; existing tools to execute SQL against CSV included trdsql, q, csvq, and TextQL. They were highly functional; however, they had many options and no input completion, so I found them a little difficult to use.
- Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
Nice! Kind of reminds me of trdsql
- What is the most user friendly way to upload CSV files into a SQL database?
trdsql (https://github.com/noborus/trdsql) is a tool that executes SQL against files such as CSV, but it can also connect to MySQL. Therefore, SQL such as `CREATE TABLE table AS SELECT * FROM csvfile.csv` can also be executed.
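For instance, a minimal sketch of that workflow (the table name, file name, and DSN are placeholders; the `-driver` and `-dsn` flags for selecting a database come from trdsql's documentation):

```sh
# Query a CSV file directly; by default trdsql runs on an in-memory SQLite3 database.
trdsql "SELECT * FROM csvfile.csv"

# Connect to MySQL instead and materialize the CSV as a table.
# The DSN (user, password, database) below is a placeholder.
trdsql -driver mysql -dsn "user:password@/testdb" \
  "CREATE TABLE mytable AS SELECT * FROM csvfile.csv"
```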
- textql VS trdsql - a user suggested alternative
2 projects | 25 Jun 2022
trdsql can execute SQL against CSV, LTSV, JSON and TBLN, and can use SQLite, PostgreSQL or MySQL as the database.
- q VS trdsql - a user suggested alternative
2 projects | 25 Jun 2022
trdsql can execute SQL against CSV, LTSV, JSON and TBLN, and can use SQLite, PostgreSQL or MySQL as the database.
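As a sketch of how those input and output formats are selected (file names are hypothetical; the `-i*`/`-o*` flags are from trdsql's documentation):

```sh
# Input format is normally guessed from the file extension,
# but can be forced with flags such as -icsv, -iltsv, or -ijson.
trdsql -iltsv "SELECT * FROM access.log"

# Output defaults to CSV; -ojson, -oltsv, -oat (ASCII table), etc. change it.
trdsql -ojson "SELECT c1, COUNT(*) FROM data.csv GROUP BY c1"
```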
- One-liner for running queries against CSV files with SQLite
Check out https://github.com/noborus/trdsql
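A minimal one-liner of that kind, assuming a hypothetical `data.csv` with a header row (`-ih` tells trdsql to treat the first row as column names):

```sh
# One-liner aggregation over a CSV; the query runs on an in-memory SQLite3 database.
trdsql -ih "SELECT name, COUNT(*) AS n FROM data.csv GROUP BY name ORDER BY n DESC"
```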
- trdsql: CLI tool that can execute SQL queries on CSV, LTSV, JSON and TBLN
- Show HN: Trdsql – CLI tool that can execute SQL queries on CSV, LTSV, JSON
- If you want to run SQL queries on CSV files from the command line without installing/opening any DBMS software, use CSVKIT
trdsql, which I made, is feature-rich and fast as a similar SQL tool.
roapi
- Full-fledged APIs for slowly moving datasets without writing code
- Tuql: Automatically create a GraphQL server from a SQLite database
If your use case is read-only, I suggest taking a look at roapi [1]. It supports multiple read frontends (GraphQL, SQL, REST) and many backends like SQLite, JSON, Google Sheets, MySQL, etc.
[1] https://github.com/roapi/roapi
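To illustrate, a sketch of serving a local file and hitting two of those frontends (the table name and file are hypothetical; the `--table` flag and the `/api/sql` and `/api/tables` endpoints are from ROAPI's documentation):

```sh
# Serve a local Parquet file as read-only SQL / REST / GraphQL APIs
# (HTTP listens on port 8080 by default).
roapi --table "taxi=./tripdata.parquet"

# Query through the SQL frontend.
curl -X POST -d "SELECT COUNT(*) FROM taxi" http://localhost:8080/api/sql

# Or through the REST frontend.
curl "http://localhost:8080/api/tables/taxi?limit=10"
```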
- Who is using AXUM in production?
- Ask HN: Best way to provide access to large data sets
For smaller datasets, anywhere up to a few MB is reasonable with an API, but in theory for historic data it could be up to several GB. I've not seen datasette go that high (IIRC it has a 1,000-row return limit by default).
That's what got me intrigued with Atlassian's offering, as data lakes tend to be something internal to a company, not something I've ever seen offered as an interaction point to users.
I've also tested out roapi [1], which is nice if the data is in some structured format already (Parquet/JSON).
[1] https://github.com/roapi/roapi
- "thread 'main' panicked at 'no CA certificates found'", when running application in docker container
https://github.com/roapi/roapi/issues/103
- Roapi 0.9 release adds support for all cloud storage providers
- SQLite-based databases on the Postgres protocol? Yes we can
Very cool and well executed project. Love the sprinkle of Rust in all the other companion projects as well :)
The ROAPI (https://github.com/roapi/roapi) project I built also happens to support a similar feature set, i.e. exposing SQLite through a variety of remote query interfaces, including the Postgres wire protocol, REST APIs and GraphQL.
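For example, a sketch of that Postgres-wire setup (the port, file name, and SQLite URI form are assumptions on my part; ROAPI documents an `--addr-postgres` flag for enabling this frontend):

```sh
# Expose a table over the Postgres wire protocol alongside the HTTP frontends.
roapi --addr-postgres 0.0.0.0:5433 --table "mytable=sqlite://data.db"

# Any stock Postgres client can now issue read-only queries against it.
psql -h 127.0.0.1 -p 5433 -c "SELECT * FROM mytable LIMIT 5"
```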
- Using Rust to write a Data Pipeline. Thoughts. Musings.
- PostgREST – Serve a RESTful API from Any Postgres Database
> why not just accept SQL and cut out all the unnecessary mapping?
You might be interested in what we're building: Seafowl, a database designed for running analytical SQL queries straight from the user's browser, with HTTP CDN-friendly caching [0]. It's a second iteration of the Splitgraph DDN [1] which we built on top of PostgreSQL (Seafowl is much faster for this use case, since it's based on Apache DataFusion + Parquet).
The tradeoff for allowing the client to run any SQL vs a limited API is that PostgREST-style queries have a fairly predictable and low overhead, but aren't as powerful as fully-fledged SQL with aggregations, joins, window functions and CTEs, which have their uses in interactive dashboards to reduce the amount of data that has to be processed on the client.
There's also ROAPI [2], a read-only SQL API that you can deploy in front of a database or another data source (though when using a database as the source, it only works with tables that fit in memory).
[0] https://seafowl.io/
[1] https://www.splitgraph.com/connect
[2] https://github.com/roapi/roapi
- Command-line data analytics made easy
It could be the NDJSON parser (DF source: [0]) or could be a variety of other factors. Looking at the ROAPI release archive [1], it doesn't ship with the definitive `columnq` binary from your comment, so it could also have something to do with compilation-time flags.
FWIW, we use the Parquet format with DataFusion and get very good speeds similar to DuckDB [2], e.g. 1.5s to run a more complex aggregation query `SELECT date_trunc('month', tpep_pickup_datetime) AS month, COUNT(*) AS total_trips, SUM(total_amount) FROM tripdata GROUP BY 1 ORDER BY 1 ASC` on a 55M row subset of NY Taxi trip data.
[0]: https://github.com/apache/arrow-datafusion/blob/master/dataf...
[1]: https://github.com/roapi/roapi/releases/tag/roapi-v0.8.0
[2]: https://observablehq.com/@seafowl/benchmarks
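For context, a sketch of the standalone `columnq` usage the comment refers to (the file name is hypothetical, and exact flags may differ between releases):

```sh
# Load a Parquet file into columnq's interactive SQL console.
columnq sql --table "tripdata=./tripdata.parquet"
# Then, at the console prompt:
#   SELECT date_trunc('month', tpep_pickup_datetime) AS month,
#          COUNT(*) AS total_trips, SUM(total_amount)
#   FROM tripdata GROUP BY 1 ORDER BY 1 ASC;
```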
What are some alternatives?
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
php-parquet - PHP implementation for reading and writing Apache Parquet files/streams. NOTICE: Please migrate to https://github.com/codename-hub/php-parquet.
querycsv - QueryCSV enables you to load CSV files and manipulate them using SQL queries, then export the new values to a CSV file
qframe - Immutable data frame for Go
gsheet - gsheet is a CLI tool (and Golang package) for piping csv data to and from Google Sheets
materialize - The data warehouse for operational workloads.
grafana-sqlite-datasource - Grafana Plugin to enable SQLite as a Datasource
delta-rs - A native Rust library for Delta Lake, with bindings into Python
usql - Universal command-line interface for SQL databases
fluvio - Lean and mean distributed stream processing system written in Rust and WebAssembly.
json-watch - A small cli tool for monitoring JSON data for new items
datasette - An open source multi-tool for exploring and publishing data