csvs-to-sqlite vs db-benchmark

| | csvs-to-sqlite | db-benchmark |
|---|---|---|
| Mentions | 4 | 91 |
| Stars | 859 | 320 |
| Growth | - | 0.0% |
| Activity | 0.0 | 0.0 |
| Latest commit | 4 months ago | 10 months ago |
| Language | Python | R |
| License | Apache License 2.0 | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
csvs-to-sqlite
-
Turning database into a searchable dashboard?
Oh, what's that you say? Your data is in CSVs and you don't want to write code to load them into a database? Well, try this: https://github.com/simonw/csvs-to-sqlite
-
Show HN: Work with CSV files using SQL. For data scientists and engineers
The datasette author offers this tool for conversion: https://github.com/simonw/csvs-to-sqlite
-
Datasette 0.58: The annotated release notes
There's csvs-to-sqlite which allows converting CSVs to SQLite (dumping part of another database to CSV should be more or less trivial). There's also Dogsheep, which can convert more esoteric data sources like GitHub and HackerNews to SQLite. Recently, Simon worked on Django SQL Dashboard, which brings a subset of Datasette to Django.
-
I made a regexp cheatsheet for grep, sed, awk and highlighted differences between them
And sometimes it's nice to throw CSV files into a database. You can do that with https://github.com/simonw/csvs-to-sqlite
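The conversion these comments describe is a one-liner with the tool itself, roughly `csvs-to-sqlite myfile.csv mydatabase.db`. As an illustration only (not csvs-to-sqlite's actual implementation, which also infers column types and can extract repeated values into lookup tables), here is a minimal standard-library Python sketch of the same CSV-to-SQLite idea; the file and table names are invented:

```python
import csv
import sqlite3

# Hand-rolled sketch of the CSV -> SQLite conversion that
# csvs-to-sqlite automates. Columns are created untyped; the
# real tool does type inference and much more.
def load_csv(csv_path: str, db_path: str, table: str) -> None:
    with open(csv_path, newline="") as f, sqlite3.connect(db_path) as conn:
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join(f'"{c}"' for c in header)
        placeholders = ", ".join("?" for _ in header)
        conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
        # the with-block commits the transaction on success
        conn.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', reader)

if __name__ == "__main__":
    load_csv("people.csv", "people.db", "people")  # made-up example files
```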
db-benchmark
- Database-Like Ops Benchmark
-
Polars
Real-world performance is complicated since data science covers a lot of use cases.
If you're just reading a small CSV to do analysis on it, then there will be no human-perceptible difference between Polars and Pandas. If you're reading a larger CSV with 100k rows, there still won't be much of a perceptible difference.
Per this (old) benchmark, there are differences once you get into 500MB+ territory: https://h2oai.github.io/db-benchmark/
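To make the "no perceptible difference on small files" claim concrete, here is a rough timing sketch (not the h2o benchmark itself); `data.csv` is a placeholder for any local file, and the numbers depend entirely on hardware and file shape:

```python
import time

import pandas as pd
import polars as pl

# On small files both reads finish in milliseconds, so any gap
# is imperceptible; differences only show up on much larger inputs.
t0 = time.perf_counter()
df_pd = pd.read_csv("data.csv")
t1 = time.perf_counter()
df_pl = pl.read_csv("data.csv")
t2 = time.perf_counter()

print(f"pandas: {t1 - t0:.4f}s, polars: {t2 - t1:.4f}s")
```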
-
DuckDB performance improvements with the latest release
I do think it was important for duckdb to put out a new version of the results, as the earlier version of that benchmark [1] went dormant while still showing a very old duckdb release that performed very badly, especially against polars.
[1] https://h2oai.github.io/db-benchmark/
-
Show HN: SimSIMD vs. SciPy: How AVX-512 and SVE make SIMD cleaner and ML faster
https://news.ycombinator.com/item?id=33270638 :
> Apache Ballista and Polars do Apache Arrow and SIMD.
> The Polars homepage links to the "Database-like ops benchmark" of {Polars, data.table, DataFrames.jl, ClickHouse, cuDF, spark, (py)datatable, dplyr, pandas, dask, Arrow, DuckDB, Modin,} but not yet PostgresML? https://h2oai.github.io/db-benchmark/ *
LLM -> Vector database: https://en.wikipedia.org/wiki/Vector_database
/? inurl:awesome site:github.com "vector database"
-
Pandas vs. Julia - cheat sheet and comparison
I agree with your conclusion but want to add that switching from Julia may not make sense either.
According to these benchmarks: https://h2oai.github.io/db-benchmark/, DF.jl is the fastest library for some things, data.table for others, polars for others. Which is fastest depends on the query and whether it takes advantage of the features/properties of each.
For what it's worth, data.table is my favourite to use and I believe it has the nicest ergonomics of the three I spoke about.
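For readers who haven't used these libraries, the "depends on the query" point maps onto the benchmark's own structure: db-benchmark times group-by and join tasks separately, and a library can lead one family while trailing the other. A minimal Python/polars sketch of the two op families, with invented toy data:

```python
import polars as pl

# Toy frames standing in for the benchmark's much larger tables.
left = pl.DataFrame({"id": [1, 2, 2, 3], "v": [10.0, 20.0, 30.0, 40.0]})
right = pl.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

# group-by aggregation: one benchmarked op family
agg = left.group_by("id").agg(pl.col("v").sum().alias("v_sum"))

# join: another benchmarked op family
joined = left.join(right, on="id", how="inner")

print(agg.sort("id"))
print(joined)
```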
-
Any faster Python alternatives?
Same. Numba does wonders for me in most scenarios. Yesterday I discovered pola-rs and it looks like I will add it to the stack. Its API is similar to pandas. Have a look at the benchmarks of cuDF, spark, dask, and pandas compared to it: Benchmarks
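For context, Numba JIT-compiles numeric Python functions via a decorator. A minimal sketch of the kind of tight loop it accelerates (the function and data are invented for the example, not the commenter's workload):

```python
import numpy as np
from numba import njit

@njit
def sum_of_squares(x):
    # plain Python loop; njit compiles it to machine code
    total = 0.0
    for i in range(x.size):
        total += x[i] * x[i]
    return total

x = np.random.rand(1_000_000)
sum_of_squares(x)         # first call triggers compilation
print(sum_of_squares(x))  # later calls run the compiled version
```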
-
Pandas 2.0 (with pyarrow) vs Pandas 1.3 - Performance comparison
The syntax has similarities with dplyr in terms of the way you chain operations, and it's around an order of magnitude faster than pandas and dplyr (there's a nice benchmark here). It's also more memory-efficient and can handle larger-than-memory datasets via streaming if needed.
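Assuming the unnamed library being praised here is polars (the dplyr-style chaining and streaming support point that way), the lazy style looks roughly like this; `huge.csv` and its columns are placeholders:

```python
import polars as pl

# scan_csv builds a lazy query plan instead of reading the file
# eagerly, so only the columns and rows the query needs are
# materialized; recent polars can also execute such plans with a
# streaming engine for larger-than-memory inputs.
result = (
    pl.scan_csv("huge.csv")
    .filter(pl.col("amount") > 0)
    .group_by("category")
    .agg(pl.col("amount").sum())
    .collect()
)
print(result)
```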
-
Pandas v2.0 Released
If interested in benchmarks comparing different dataframe implementations, here is one:
https://h2oai.github.io/db-benchmark/
- Database-like ops benchmark
-
Python "programmers" when I show them how much faster their naive code runs when translated to C++ (this is a joke, I love python)
Bad examples. Both numpy and pandas are notoriously un-optimized packages, losing handily to pretty much all their competitors (R, Julia, kdb+, vaex, polars). See https://h2oai.github.io/db-benchmark/ for a partial comparison.
What are some alternatives?
octosql - OctoSQL is a query tool that allows you to join, analyse and transform data from multiple databases and file formats using SQL.
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
textql - Execute SQL against structured text like CSV or TSV
datafusion - Apache DataFusion SQL Query Engine
sqlitebrowser - Official home of the DB Browser for SQLite (DB4S) project. Previously known as "SQLite Database Browser" and "Database Browser for SQLite".
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
datasette - An open source multi-tool for exploring and publishing data
databend - Data, Analytics & AI. Modern alternative to Snowflake. Cost-effective and simple for massive-scale analytics. https://databend.com
sqlite-utils - Python CLI utility and library for manipulating SQLite databases
sktime - A unified framework for machine learning with time series
datasette-graphql - Datasette plugin providing an automatic GraphQL API for your SQLite databases
DataFramesMeta.jl - Metaprogramming tools for DataFrames