squirrel
trdsql
| | squirrel | trdsql |
|---|---|---|
| Mentions | 2 | 9 |
| Stars | 18 | 1,750 |
| Growth | - | - |
| Activity | 8.1 | 8.3 |
| Latest commit | 22 days ago | 16 days ago |
| Language | Go | Go |
| License | Mozilla Public License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
squirrel
-
Billion File Filesystem
I built https://github.com/anacrolix/squirrel for just this purpose. However, you might find that batch-inserting empty files is only about 10x faster than creating empty files directly on the filesystem, if we are to believe the 10k/s the OP achieved.
-
Embedded write-heavy on-disk cache, write-amplification
I'm looking for an embedded database/KV-store that supports a write-heavy workload of large blocks of bytes and some kind of eviction policy. I'm currently using sqlite3 with a bunch of triggers and the blob API, but it's not really suitable for write-heavy workloads. I've exposed the interface somewhat in https://github.com/anacrolix/squirrel; the primary use case is https://github.com/anacrolix/torrent. My recent research suggests an LSM-based KV-store like RocksDB or LevelDB, but those don't have great interfaces in Go, and as far as I can tell they don't support an eviction policy (which is surprising, given they would be very well suited to it). There are some alternatives like buntdb, but those all look designed for smaller, string-sized values.
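To make the eviction-policy requirement concrete, here is a minimal in-memory sketch of the kind of byte-capped LRU bookkeeping the post is asking for, using only the Go standard library. This is not squirrel's actual implementation (squirrel persists to SQLite); the type and method names are illustrative.

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache models a byte-count-capped store that evicts the least
// recently used entry. A real embedded store would persist blocks to
// disk; this only demonstrates the eviction bookkeeping.
type lruCache struct {
	capBytes  int
	usedBytes int
	order     *list.List               // front = most recently used
	items     map[string]*list.Element // key -> element holding *entry
}

type entry struct {
	key   string
	value []byte
}

func newLRUCache(capBytes int) *lruCache {
	return &lruCache{
		capBytes: capBytes,
		order:    list.New(),
		items:    map[string]*list.Element{},
	}
}

func (c *lruCache) Put(key string, value []byte) {
	if el, ok := c.items[key]; ok {
		e := el.Value.(*entry)
		c.usedBytes += len(value) - len(e.value)
		e.value = value
		c.order.MoveToFront(el)
	} else {
		c.items[key] = c.order.PushFront(&entry{key, value})
		c.usedBytes += len(value)
	}
	// Evict least recently used entries until back under the byte cap.
	for c.usedBytes > c.capBytes {
		back := c.order.Back()
		e := back.Value.(*entry)
		c.order.Remove(back)
		delete(c.items, e.key)
		c.usedBytes -= len(e.value)
	}
}

func (c *lruCache) Get(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).value, true
}

func main() {
	c := newLRUCache(8)
	c.Put("a", []byte("1234"))
	c.Put("b", []byte("5678"))
	c.Get("a")               // touch "a" so "b" becomes least recently used
	c.Put("c", []byte("90")) // exceeds the cap, forcing eviction of "b"
	_, okA := c.Get("a")
	_, okB := c.Get("b")
	fmt.Println(okA, okB) // prints "true false"
}
```

An LSM-based store would layer exactly this kind of policy on top of its compaction machinery, which is why the post finds its absence surprising.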
trdsql
-
sqly - execute SQL against CSV / JSON with shell
Apparently, many people had the same idea; existing tools to execute SQL against CSV included trdsql, q, csvq, and TextQL. They were highly functional; however, they had many options and no input completion. I found them just a little difficult to use.
-
Run SQL on CSV, Parquet, JSON, Arrow, Unix Pipes and Google Sheet
Nice! Kind of reminds me of trdsql.
-
textql VS trdsql - a user suggested alternative
2 projects | 25 Jun 2022
trdsql can execute SQL against CSV, LTSV, JSON, and TBLN, and can also use SQLite, PostgreSQL, or MySQL as the backend database.
-
q VS trdsql - a user suggested alternative
2 projects | 25 Jun 2022
trdsql can execute SQL against CSV, LTSV, JSON, and TBLN, and can also use SQLite, PostgreSQL, or MySQL as the backend database.
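As a quick illustration of the usage described in these entries, assuming trdsql is installed (file name, column names, and the PostgreSQL DSN are made up for the example):

```shell
# Create a sample CSV with a header row.
printf 'name,qty\napple,3\npear,5\napple,2\n' > fruit.csv

# -ih treats the first row as a header, so columns can be named in SQL.
trdsql -ih "SELECT name, SUM(qty) AS total FROM fruit.csv GROUP BY name"

# Output JSON instead of CSV.
trdsql -ih -ojson "SELECT * FROM fruit.csv"

# Use PostgreSQL instead of the default SQLite backend.
trdsql -ih -driver postgres -dsn "dbname=test" "SELECT count(*) FROM fruit.csv"
```

Without `-ih`, columns are addressed by the default names `c1`, `c2`, and so on.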
-
If you want to run SQL queries on CSV files from the command line without installing/opening any DBMS software, use CSVKIT
trdsql, which I made, is a similar SQL tool that is feature-rich and fast.
What are some alternatives?
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
querycsv - QueryCSV enables you to load CSV files and manipulate them using SQL queries; when you finish, you can export the new values to a CSV file
gsheet - gsheet is a CLI tool (and Golang package) for piping csv data to and from Google Sheets
grafana-sqlite-datasource - Grafana Plugin to enable SQLite as a Datasource
usql - Universal command-line interface for SQL databases
json-watch - A small cli tool for monitoring JSON data for new items
xquery-cli - A command-line tool for XQuery
null - reasonable handling of nullable values
csvkit - A suite of utilities for converting to and working with CSV, the king of tabular file formats.
xsv - A fast CSV command line toolkit written in Rust.
textql - Execute SQL against structured text like CSV or TSV
xyr - Query any data source using SQL, works with the local filesystem, s3, and more. It should be a very tiny and lightweight alternative to AWS Athena, Presto ... etc.