| | textql | duckdb |
|---|---|---|
| Mentions | 15 | 52 |
| Stars | 9,034 | 17,221 |
| Growth | - | 7.1% |
| Activity | 3.7 | 10.0 |
| Last commit | 7 months ago | 4 days ago |
| Language | Go | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
textql
- Jaq – A jq clone focused on correctness, speed, and simplicity
I like textql [0] better for this use case, as it's simpler in my mind.
[0] https://github.com/dinedal/textql
- Can SQL be used without an RDBMS?
Primarily, you are right. SQL is for working with structured data, and that largely covers relational databases. However, there are tools (like textql) that let you query CSV files, and the same approach works with most other text files that have some kind of structure.
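textql's trick - loading delimited text into SQLite and then running ordinary SQL over it - can be sketched in a few lines of stdlib Python. The CSV contents and the table name `parts` below are invented for illustration; this is the general idea, not textql's actual implementation:

```python
import csv
import io
import sqlite3

# A small CSV, standing in for a file you might point textql at.
CSV_DATA = """name,qty
bolt,40
nut,25
washer,60
"""

rows = list(csv.reader(io.StringIO(CSV_DATA)))
header, body = rows[0], rows[1:]

# Load the rows into an in-memory SQLite table named after the "file".
conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE parts ({', '.join(header)})")
conn.executemany(
    f"INSERT INTO parts VALUES ({', '.join('?' for _ in header)})", body
)

# Plain SQL against what started life as CSV text.
total = conn.execute("SELECT SUM(CAST(qty AS INTEGER)) FROM parts").fetchone()[0]
print(total)  # 125
```

The CAST is needed because the values arrive as text; a real tool would sniff column types on load.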
- Textql: Execute SQL against structured text like CSV or TSV
- Show HN: ClickHouse-local – a small tool for serverless data analytics
As the author of textql ( https://github.com/dinedal/textql ) - thanks for the shoutout!
Looks great, I love more options in the space for CLI based data analysis tools! Fantastic work!
- Using Commandline To Process CSV files
- sqly - execute SQL against CSV / JSON with shell
Apparently, many people thought the same thing; existing tools to execute SQL against CSV included trdsql, q, csvq, and TextQL. They were highly functional; however, they had many options and no input completion. I found them just a little difficult to use.
- Q – Run SQL Directly on CSV or TSV Files
Reminds me of the textQL extension that's available in Asciidoc.
Point it to an external CSV file, enable TextQL, and bam, there's your query returned as a table. Handy for parts lists, inventory, that kind of crap.
https://github.com/dinedal/textql
https://gist.github.com/mojavelinux/8856117
- Beginner interested in learning SQL. Have a few questions that I wasn’t able to find on Google.
Through more magic, you COULD of course use stuff like Spark, or easier with programs like TextQL, sq, OctoSQL.
- textql VS trdsql - a user-suggested alternative
2 projects | 25 Jun 2022
- Xlite: Query Excel, Open Document spreadsheets (.ods) as SQLite virtual tables
Somewhat-kinda related, the textql extension for Asciidoctor is so dang useful it should be in core.
https://gist.github.com/mojavelinux/8856117
I use this as a "centralized parts repository" for big ol' maintenance manuals. Refresh from PDM/PLM/LSA/Whatever. Rebuild for new parts data.
Built on TextQL, natch
https://github.com/dinedal/textql
duckdb
- 🪄 DuckDB sql hack : get things SORTED w/ constraint CHECK
- DuckDB: Move to push-based execution model (2021)
- DuckDB performance improvements with the latest release
I'm not sure if the fix is reassuring or not: https://github.com/duckdb/duckdb/pull/9411/files
- Building a Distributed Data Warehouse Without Data Lakes
It's an interesting question!
The problem is that the data is spread everywhere - no choice about that. So with that in mind, how do you query that data? Today, the idea is that you HAVE to put it into a central location. With tools like Bacalhau[1] and DuckDB [2], you no longer have to - a single query can be sharded amongst all your data - EFFECTIVELY giving you a lot of what you want from a data lake.
It's not a replacement, but if you can do a few of these items WITHOUT moving the data, you will be able to see really significant cost and time savings.
[1] https://github.com/bacalhau-project/bacalhau
[2] https://github.com/duckdb/duckdb
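The "ship the query to the data" idea from the comment above can be sketched without either tool: each shard computes a small partial aggregate where the data lives, and only those partials cross the network to be merged. The node names and row values below are invented for illustration; Bacalhau and DuckDB would do the real scheduling and SQL execution:

```python
# Hypothetical shards: in a Bacalhau-style setup each list would sit on a
# different node; here they are just in-process lists of numeric values.
shards = {
    "node-a": [3, 5, 2],
    "node-b": [7, 1],
    "node-c": [4, 4, 4],
}

def partial_aggregate(rows):
    # Runs next to the data: return (count, sum) instead of the raw rows.
    return len(rows), sum(rows)

# Ship the computation, not the data: only tiny partials are merged centrally.
partials = [partial_aggregate(rows) for rows in shards.values()]
total_count = sum(c for c, _ in partials)
total_sum = sum(s for _, s in partials)
print(total_count, total_sum)  # 8 30
```

The same decomposition works for any aggregate that can be merged from partials (COUNT, SUM, MIN, MAX; AVG via sum and count), which is what makes sharded queries cheap compared to centralizing the data first.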
- DuckDB 0.9.0
- Push or Pull, is this a question?
[4] Switch to Push-Based Execution Model by Mytherin · Pull Request #2393 · duckdb/duckdb (github.com)
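As a rough illustration of the two models the linked PR title refers to (this is a toy sketch, not DuckDB's actual operator code): in the pull model each operator asks its child for the next row, while in the push model the producer drives each row down a chain of callbacks:

```python
# Pull model (Volcano-style): the consumer drives, pulling rows up the tree.
def scan_pull(rows):
    yield from rows

def filter_pull(source, pred):
    for row in source:
        if pred(row):
            yield row

pulled = list(filter_pull(scan_pull([1, 2, 3, 4]), lambda r: r % 2 == 0))

# Push model: the producer drives, handing each row down to a sink callback.
def scan_push(rows, sink):
    for row in rows:
        sink(row)

def filter_push(pred, sink):
    return lambda row: sink(row) if pred(row) else None

pushed = []
scan_push([1, 2, 3, 4], filter_push(lambda r: r % 2 == 0, pushed.append))

print(pulled, pushed)  # [2, 4] [2, 4]
```

Both pipelines produce the same rows; the difference is control flow, which matters for engines because push-based execution makes parallelism and operator scheduling easier to manage centrally.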
- Show HN: Hydra 1.0 – open-source column-oriented Postgres
It depends on your query, obviously.
In general, I did very deep benchmarking of pg, clickhouse and duckdb, and I sure didn't make stupid mistakes like this: https://news.ycombinator.com/item?id=36990831
My dataset has 50B rows and 2 TB of data, and I think columnar DBs are very overhyped. I chose pg because:
- pg performance is acceptable, maybe 2-3x times slower than clickhouse and duckdb on some queries if pg is configured correctly and run on compressed storage
- clickhouse and duckdb start falling apart very fast because they specialize in a very narrow type of query: https://github.com/ClickHouse/ClickHouse/issues/47520 https://github.com/ClickHouse/ClickHouse/issues/47521 https://github.com/duckdb/duckdb/discussions/6696
- 🦆 Effortless Data Quality w/duckdb on GitHub ♾️
This action installs duckdb with the version provided as input.
- Using SQL inside Python pipelines with Duckdb, Glaredb (and others?)
Duckdb: https://github.com/duckdb/duckdb - seems pretty popular, been keeping an eye on this for close to a year now.
- CSV or Parquet File Format
The Parquet-Go library is very complex, and I haven't yet succeeded in using it. So I asked whether DuckDB could provide an API: https://github.com/duckdb/duckdb/issues/7776
What are some alternatives?
q - q - Run SQL directly on delimited files and multi-file sqlite databases
ClickHouse - ClickHouse® is a free analytics DBMS for big data
go-duckdb - go-duckdb provides a database/sql driver for the DuckDB database engine.
sqlite-worker - A simple, and persistent, SQLite database for Web and Workers.
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
datasette - An open source multi-tool for exploring and publishing data
cq - Query CSVs using SQL
dsq - Commandline tool for running SQL queries against JSON, CSV, Excel, Parquet, and more.
metabase-clickhouse-driver - ClickHouse database driver for the Metabase business intelligence front-end
brackit - Query processor with proven optimizations, ready to use for your JSON store to query semi-structured data with JSONiq. Can also be used as an ad-hoc in-memory query processor.
datafusion - Apache DataFusion SQL Query Engine