quokka vs spyql

| | quokka | spyql |
|---|---|---|
| Mentions | 23 | 23 |
| Stars | 1,084 | 902 |
| Growth | - | - |
| Activity | 8.3 | 0.0 |
| Last Commit | 8 months ago | over 1 year ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
quokka
- How Query Engines Work
An awesome read!
Something related that I found out about from HN a few months back is another engine called quokka. It's particularly interesting and applicable how quokka schedules distributed queries to outperform Spark https://github.com/marsupialtail/quokka/blob/master/blog/why...
- Quokka – Distributed Polars on Ray
- Algorithmic Trading with Go
Hi Justin, you might be interested in my blog: https://github.com/marsupialtail/quokka/blob/master/blog/bac... advocating a cloud-based approach.
You don't have to use the system I am building, but it's worth thinking about that design.
- Daft: A High-Performance Distributed Dataframe Library for Multimodal Data
SQL support is very challenging.
I work on Quokka (https://github.com/marsupialtail/quokka). It supports Iceberg reads. Recently we have been adding SQL support by parsing the DuckDB logical plan, though that is very challenging as well.
The Python world lacks a standard plug-and-play SQL query optimizer. Apache Calcite is good for the JVM world, but not great if you are trying to cut out the JVM.
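A hedged sketch of that starting point (the toy query below is mine, not from the project): piping a statement with EXPLAIN through the DuckDB CLI prints the plan that such a SQL layer would then have to translate.

# print DuckDB's plan for a made-up aggregation query
$ echo "EXPLAIN SELECT x % 10 AS bucket, count(*) FROM range(1000) t(x) GROUP BY bucket" | duckdb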
- Why your dataframe library needs to understand vector embeddings
- The Inner Workings of Distributed Databases
In case people are interested, I wrote a post about fault tolerance strategies of data systems like Spark and Flink: https://github.com/marsupialtail/quokka/blob/master/blog/fau...
The key difference here is that these systems don't store data, so fault tolerance means recovering within a query instead of not losing data.
- Launch HN: DAGWorks – ML platform for data science teams
would love to collaborate on an integration with pyquokka (https://github.com/marsupialtail/quokka) once I put out a stable release end of this month :-)
- is spark always your go to solution?
Then you should keep an eye on quokka. This may become the "Spark" for Polars/DuckDB. It seems to be under active development though I'm not sure how stable it is.
- Distributed fault tolerance made simple
- Fault tolerance for distributed data systems is quite simple
spyql
- Fq: Jq for Binary Formats
I prefer a SQL-like format. It's not as complete, but it covers most of the day-to-day use cases. Take a look at https://github.com/dcmoura/spyql (I am the author). Congrats on fq!
- Command-line data analytics made easy with SPyQL
SPyQL documentation: spyql.readthedocs.io
- This Week In Python
spyql – Query data on the command line with SQL-like SELECTs powered by Python expressions
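A hedged illustration of that description (the sample data and column names are invented; a CSV header is assumed so columns can be referenced by name):

# the SELECT mixes SQL clauses with a plain Python string method
$ printf "name,age\nalice,31\nbob,25\n" | spyql "SELECT name.upper() AS name, age FROM csv WHERE age > 30"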
- Command-line data analytics made easy
- Jc – JSONifies the output of many CLI tools
This is great!
I am the author of SPyQL [1]. Combining jc with SPyQL, you can easily query the JSON output and run Python commands on top of it from the command line :-) You can do aggregations and so forth in a much simpler and more intuitive way than with jq.
I just wrote a blog post [2] that illustrates it. It is more focused on CSV, but the commands would be the same if you were working with JSON.
[1] https://github.com/dcmoura/spyql
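A hedged sketch of that combination (assuming jc's ps parser exposes user, pid and mem_percent fields; the jq step only flattens jc's JSON array into the JSON-lines input I'm assuming spyql's FROM json expects here):

# list processes using more than 1% of memory: jc parses ps, spyql filters the records
$ ps aux | jc --ps | jq -c '.[]' | spyql "SELECT json->user, json->pid, json->mem_percent FROM json WHERE json->mem_percent > 1.0"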
- The fastest command-line tools for querying large JSON datasets
- Working with more than 10gb csv
You can import the data into a PostgreSQL/MySQL/SQLite/... database and then query the database. However, even with the right choice of indexes, it might take a while to run queries on a table with hundreds of millions of records. You can easily import your data into these databases with SpyQL: $ spyql "SELECT * FROM csv TO sql(table=my_table_name)" | sqlite3 my.db (you would need to create the table my_table_name before running the command).
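To make both steps of that workflow explicit, a hypothetical end-to-end sketch (file, table and column names are made up; the sql writer option syntax follows the command above):

# 1) create the destination table up front
$ sqlite3 my.db "CREATE TABLE my_table_name (id INTEGER, name TEXT, amount REAL)"
# 2) stream the CSV through spyql, which emits INSERT statements for sqlite3 to execute
$ spyql "SELECT * FROM csv TO sql(table=my_table_name)" < big_file.csv | sqlite3 my.db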
- ClickHouse Cloud is now in Public Beta
https://github.com/dcmoura/spyql/blob/master/notebooks/json_...
And ClickHouse looks like a normal relational database - there is no need for multiple components for different tiers (like in Druid), no need for manual partitioning into "daily", "hourly" tables (like you do in Spark and Bigquery), no need for lambda architecture... It's refreshing how something can be both simple and fast.
- A SQLite extension for reading large files line-by-line
- I want to convert a large JSON file into Tabular Format.
I thought this library was pretty nifty for JSON. It's also relatively fast compared to most JSON parsers: https://github.com/dcmoura/spyql
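For the tabular-conversion use case in this thread, a hedged one-liner (assuming newline-delimited JSON input; the field names are invented):

# project two fields out of each JSON record and write them out as CSV
$ spyql "SELECT json->id, json->name FROM json TO csv" < records.json > records.csv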
What are some alternatives?
opteryx - 🦖 A SQL-on-everything Query Engine you can execute over multiple databases and file formats. Query your data, where it lives.
prql - PRQL is a modern language for transforming data — a simple, powerful, pipelined SQL replacement
cempaka - "Write a trading bot which buys low and sells high." Sounds simple enough, right?
malloy - Malloy is an experimental language for describing data relationships and transformations.
awesome-pipeline - A curated list of awesome pipeline toolkits inspired by Awesome Sysadmin
tresql - Shorthand SQL/JDBC wrapper language, providing nested results as JSON and more
pg8000 - A Pure-Python PostgreSQL Driver
Preql - An interpreted relational query language that compiles to SQL.
blog - Some notes on things I find interesting and important.
prosto - Prosto is a data processing toolkit radically changing how data is processed by heavily relying on functions and operations with functions - an alternative to map-reduce and join-groupby
sqlglot - Python SQL Parser and Transpiler
pxi - 🧚 pxi (pixie) is a small, fast, and magical command-line data processor similar to jq, mlr, and awk.