db-benchmark vs csvs-to-sqlite
| | db-benchmark | csvs-to-sqlite |
|---|---|---|
| Mentions | 54 | 4 |
| Stars | 219 | 692 |
| Growth | 4.6% | - |
| Activity | 2.7 | 3.0 |
| Latest commit | about 2 months ago | 3 months ago |
| Language | R | Python |
| License | Mozilla Public License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
db-benchmark
-
Will Rust-based data frame library Polars dethrone Pandas? We evaluate on 1M+ Stack Overflow questions
We didn't run our own benchmarks for this post, but in this comparison from roughly a year ago, Polars emerged as the fastest: https://h2oai.github.io/db-benchmark/
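For context, here is a minimal sketch of the kind of groupby aggregation that db-benchmark measures, written with both libraries. This is not taken from the benchmark itself; it assumes the pandas and polars Python packages, the toy data is invented, and older Polars releases spell `group_by` as `groupby`.

```python
import pandas as pd
import polars as pl

data = {"group": ["a", "b", "a", "b"], "value": [1.0, 2.0, 3.0, 4.0]}

# pandas: eager, single-threaded groupby
pdf = pd.DataFrame(data)
print(pdf.groupby("group")["value"].sum())

# Polars: multi-threaded, with an optional lazy API that optimizes the
# whole query plan before executing it
plf = pl.DataFrame(data).lazy()
print(plf.group_by("group").agg(pl.col("value").sum()).collect())
```

Multi-core execution plus whole-plan optimization in the lazy API are commonly cited reasons for Polars' strong benchmark showings.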
-
Amazon Redshift Re-Invented
No major issues, but the JavaScript bindings (which are different from the wasm bindings) that I use leave a lot to be desired. To DuckDB's credit, its C++ and Python bindings seem top-notch and even support the memory-mapped Arrow format, which is very efficient in cross-language / cross-process scenarios in addition to being a first-class in-memory representation of pandas-like dataframes.
DuckDB is under constant development but doesn't yet have a native cross-version export/import feature (its developers say DuckDB hasn't matured enough to stabilise its on-disk format yet).
I also keep an eye on https://h2oai.github.io/db-benchmark/; Pola.rs and DataFusion sound the most exciting.
It also remains to be seen how Databricks' delta.io develops (it might come in handy for much larger datasets).
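A hedged sketch of the Python-side Arrow interop praised above, assuming the duckdb and pyarrow packages; the data is invented, and method names like `.arrow()` are based on DuckDB's Python client and have varied slightly across releases.

```python
import duckdb
import pyarrow as pa

prices = pa.table({"hospital": ["a", "b", "a"], "price": [10.0, 20.0, 30.0]})

con = duckdb.connect()  # in-memory database
# The Python client can scan a local pyarrow Table by its variable name
# (a "replacement scan"), so no upfront copy into DuckDB is needed.
result = con.execute(
    "SELECT hospital, avg(price) AS avg_price FROM prices GROUP BY hospital"
).arrow()  # fetch the result back out as an Arrow table

print(result)
```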
- C++ for data analysis
-
Fast Lane to Learning R
I strongly recommend the data.table R package. Tidyverse is an improvement on base R, no question. data.table has a less intuitive syntax and can be harder to learn, but it is lightning fast and memory efficient. If you're working with more than 1M rows, you should be using data.table.
Here are some benchmarks: https://h2oai.github.io/db-benchmark/
-
Friendlier SQL with DuckDB
Hi, good to hear that you guys care about testing. One thing, apart from the GitHub issues, that led me to believe it might not be super stable yet was the benchmark results on https://h2oai.github.io/db-benchmark/, which make it look like it couldn't handle the 50GB case due to an out-of-memory error. I see that the benchmark and the versions used are about a year old, so maybe things have changed a lot since then. Can you chime in regarding the current story of running bigger DBs, like 1TB, on a machine with just 32GB or so of RAM? Especially regarding data mutations and DDL queries. Thanks!
-
I used a new dataframe library (polars) to wrangle 300M prices and discover some of the most expensive hospitals in America. Code/notebook in article
Per these benchmarks, it appears Polars is an order of magnitude more performant, and it's lazy, and Rust is just kinda sexy.
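A minimal sketch of the lazy workflow the post describes, assuming the polars package. The file and column names are invented for illustration, and some spellings (`group_by` vs `groupby`, `descending` vs `reverse`) differ between Polars versions.

```python
import polars as pl

most_expensive = (
    pl.scan_csv("prices.csv")          # lazy: nothing is read yet
      .filter(pl.col("price") > 0)     # drop missing/zero prices
      .group_by("hospital")            # `groupby` in older Polars
      .agg(pl.col("price").median().alias("median_price"))
      .sort("median_price", descending=True)
      .limit(10)
      .collect()                       # the optimized plan runs here
)
print(most_expensive)
```

Because the plan is built lazily, Polars can push filters down and only materialize the columns it needs, which matters at the 300M-row scale the post describes.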
-
Benchmarking for loops vs apply and others
This is a much more comprehensive set of benchmarks: https://h2oai.github.io/db-benchmark/
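For a self-contained illustration of what such a benchmark compares, here is a hedged pandas micro-benchmark of a Python loop vs `Series.apply` vs a vectorized operation. Absolute timings depend on hardware; the linked suite is far more rigorous.

```python
import timeit

import pandas as pd

df = pd.DataFrame({"x": range(1_000_000)})

def with_loop():
    # plain Python iteration over the column
    return [x * 2 for x in df["x"]]

def with_apply():
    # apply calls a Python function per element
    return df["x"].apply(lambda x: x * 2)

def vectorized():
    # the whole operation runs in compiled code
    return df["x"] * 2

for fn in (with_loop, with_apply, vectorized):
    print(fn.__name__, timeit.timeit(fn, number=3))
```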
-
Why is R viewed badly over here? Also, as a college student, should I prioritize Python instead?
It's not like pandas is faster than tidyverse on all the benchmarks either, and data.table is faster than both. https://h2oai.github.io/db-benchmark/
-
Resources for data cleaning
Language isn't really important here; what's important is tooling, and R definitely has the tooling. I would look at this benchmark reference for database-like operations, and you'll see that data.table (a very fast and memory-efficient R package) consistently ranks as one of the fastest tools out there that can also support a wide range of memory loads.
- The fastest tool for querying large JSON files is written in Python (benchmark)
csvs-to-sqlite
-
Turning database into a searchable dashboard?
Oh, what's that you say? Your data is in CSV and you don't want to write code to load it into a database? Well, try this: https://github.com/simonw/csvs-to-sqlite
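A hedged sketch of that workflow: the conversion is a one-line CLI call, after which the database is plain SQLite. The file names here are invented.

```python
# One-time conversion step (shell):
#   pip install csvs-to-sqlite
#   csvs-to-sqlite sales.csv customers.csv data.db
import sqlite3

con = sqlite3.connect("data.db")
# Each CSV becomes a table named after the source file
for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    print(name)
con.close()
```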
-
Show HN: Work with CSV files using SQL. For data scientists and engineers
The Datasette author offers this tool for the conversion: https://github.com/simonw/csvs-to-sqlite
-
Datasette 0.58: The annotated release notes
There's csvs-to-sqlite, which converts CSVs to SQLite (dumping part of another database to CSV should be more or less trivial). There's also Dogsheep, which can convert more esoteric data sources like GitHub and Hacker News to SQLite. Recently, Simon worked on Django SQL Dashboard, which brings a subset of Datasette to Django.
-
I made a regexp cheatsheet for grep, sed, awk and highlighted differences between them
And sometimes it's nice to throw CSV files into a database. You can do that with https://github.com/simonw/csvs-to-sqlite
What are some alternatives?
arrow-datafusion - Apache Arrow DataFusion SQL Query Engine
polars - Fast multi-threaded DataFrame library in Rust | Python | Node.js
databend - A modern cloud data warehouse focused on elasticity and performance; activate your object storage for real-time analytics.
sktime - A unified framework for machine learning with time series
disk.frame - Fast Disk-Based Parallelized Data Manipulation Framework for Larger-than-RAM Data
DataFramesMeta.jl - Metaprogramming tools for DataFrames
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
arrow2 - Unofficial transmute-free Rust library to work with the Arrow format
julia - The Julia Programming Language
DataFrame - C++ DataFrame for statistical, financial, and ML analysis -- in modern C++ using native types and contiguous memory storage
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
sqlitebrowser - Official home of the DB Browser for SQLite (DB4S) project. Previously known as "SQLite Database Browser" and "Database Browser for SQLite".