db-benchmark
ojg
| | db-benchmark | ojg |
|---|---|---|
| Mentions | 91 | 17 |
| Stars | 319 | 794 |
| Growth | 0.9% | - |
| Activity | 0.0 | 7.0 |
| Latest commit | 10 months ago | 13 days ago |
| Language | R | Go |
| License | Mozilla Public License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
db-benchmark
- Database-Like Ops Benchmark
-
Polars
Real-world performance is complicated since data science covers a lot of use cases.
If you're just reading a small CSV to do analysis on it, then there will be no human-perceptible difference between Polars and Pandas. If you're reading a larger CSV with 100k rows, there still won't be much of a perceptible difference.
Per this (old) benchmark, there are differences once you get into 500MB+ territory: https://h2oai.github.io/db-benchmark/
-
DuckDB performance improvements with the latest release
I do think it was important for DuckDB to put out a new version of the results, as the earlier version of that benchmark [1] went dormant with a very old version of DuckDB that had very bad performance, especially against Polars.
[1] https://h2oai.github.io/db-benchmark/
-
Show HN: SimSIMD vs. SciPy: How AVX-512 and SVE make SIMD cleaner and ML faster
https://news.ycombinator.com/item?id=33270638 :
> Apache Ballista and Polars do Apache Arrow and SIMD.
> The Polars homepage links to the "Database-like ops benchmark" of {Polars, data.table, DataFrames.jl, ClickHouse, cuDF, spark, (py)datatable, dplyr, pandas, dask, Arrow, DuckDB, Modin,} but not yet PostgresML? https://h2oai.github.io/db-benchmark/ *
LLM -> Vector database: https://en.wikipedia.org/wiki/Vector_database
/? inurl:awesome site:github.com "vector database"
-
Pandas vs. Julia – cheat sheet and comparison
I agree with your conclusion but want to add that switching from Julia may not make sense either.
According to these benchmarks: https://h2oai.github.io/db-benchmark/, DF.jl is the fastest library for some things, data.table for others, polars for others. Which is fastest depends on the query and whether it takes advantage of the features/properties of each.
For what it's worth, data.table is my favourite to use and I believe it has the nicest ergonomics of the three I spoke about.
-
Any faster Python alternatives?
Same. Numba does wonders for me in most scenarios. Yesterday I discovered pola-rs, and it looks like I will add it to the stack. Its API is similar to pandas. Have a look at the benchmarks of cuDF, spark, dask, and pandas compared to it: Benchmarks
-
Pandas 2.0 (with pyarrow) vs Pandas 1.3 - Performance comparison
The syntax has similarities with dplyr in terms of the way you chain operations, and it's around an order of magnitude faster than pandas and dplyr (there's a nice benchmark here). It's also more memory-efficient and can handle larger-than-memory datasets via streaming if needed.
-
Pandas v2.0 Released
If interested in benchmarks comparing different dataframe implementations, here is one:
https://h2oai.github.io/db-benchmark/
- Database-like ops benchmark
-
Python "programmers" when I show them how much faster their naive code runs when translated to C++ (this is a joke, I love python)
Bad examples. Both numpy and pandas are notoriously un-optimized packages, losing handily to pretty much all their competitors (R, Julia, kdb+, vaex, polars). See https://h2oai.github.io/db-benchmark/ for a partial comparison.
ojg
-
Interactive Examples for Learning Jq
I found jq to be difficult to use, which is why oj (https://github.com/ohler55/ojg) is based on JSONPath instead. There are still a lot of options, but it only takes a couple of help screens to figure out what they are.
-
Building a high performance JSON parser
You might want to take a look at https://github.com/ohler55/ojg. It takes a different approach with a single pass parser. There are some performance benchmarks included on the README.md landing page.
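The single-pass approach mentioned above means all work happens during one scan of the input, with no separate tokenize-then-parse phases. A toy sketch of that idea (illustrative only, not ojg's actual implementation) is a scanner that computes the maximum nesting depth of a document in one pass:

```go
package main

import "fmt"

// maxDepth scans JSON text in a single pass, returning the maximum
// nesting depth of objects and arrays. There is no intermediate token
// stream; string contents (including escapes) are skipped inline so
// that braces inside strings are not miscounted.
func maxDepth(data []byte) int {
	depth, max := 0, 0
	inString, escaped := false, false
	for _, b := range data {
		if inString {
			switch {
			case escaped:
				escaped = false
			case b == '\\':
				escaped = true
			case b == '"':
				inString = false
			}
			continue
		}
		switch b {
		case '"':
			inString = true
		case '{', '[':
			depth++
			if depth > max {
				max = depth
			}
		case '}', ']':
			depth--
		}
	}
	return max
}

func main() {
	// The '}' inside the string value is correctly ignored.
	fmt.Println(maxDepth([]byte(`{"a":[{"b":"}"}]}`))) // prints 3
}
```

A real single-pass parser builds values as it goes rather than just tracking depth, but the shape is the same: one loop, one cursor, no second phase.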
-
A Journey building a fast JSON parser and full JSONPath
I like the "Simple Encoding Notation" (SEN) of the underlying library: https://github.com/ohler55/ojg/blob/develop/sen.md
- Oj Is on Tap
- SEN: Simple Encoding Notation
- The fastest tool for querying large JSON files is written in Python (benchmark)
-
FX: An interactive alternative to jq to process JSON
Another alternative is the oj app (ojg/cmd/oj) which is part of https://github.com/ohler55/ojg. It relies on JSONPath for extraction and manipulation of JSON.
- Go 1.17 Release Notes
-
OjG now has a tokenizer that is almost 10 times faster than json.Decode
I promise to add more examples, but in the meantime the test files serve as examples. The one for Unmarshal is https://github.com/ohler55/ojg/blob/develop/oj/unmashall_test.go
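For reference, the json.Decode-side baseline being compared against is the standard library's streaming tokenizer, json.Decoder.Token. A minimal token-counting loop over it looks like this (illustrative only; the actual benchmark code lives in the ojg repository):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// countTokens walks a JSON document with the standard library's
// streaming tokenizer and returns the number of tokens emitted.
// Delimiters ({ } [ ]) and values (strings, numbers, booleans,
// null) each count as one token.
func countTokens(src string) int {
	dec := json.NewDecoder(strings.NewReader(src))
	n := 0
	for {
		_, err := dec.Token()
		if err == io.EOF {
			break
		} else if err != nil {
			break
		}
		n++
	}
	return n
}

func main() {
	// Tokens: { "a" [ 1 2 3 ] }
	fmt.Println(countTokens(`{"a":[1,2,3]}`)) // prints 8
}
```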
- The Pretty JSON Revolution
What are some alternatives?
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
jsonparser - One of the fastest alternative JSON parsers for Go that does not require a schema
arrow-datafusion - Apache DataFusion SQL Query Engine
jsonic - All you need with JSON
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
fastjson - Fast JSON parser and validator for Go. No custom structs, no code generation, no reflection
databend - Data, Analytics & AI. Modern alternative to Snowflake. Cost-effective and simple for massive-scale analytics. https://databend.com
ask - A Go package that provides a simple way of accessing nested properties in maps and slices.
DataFramesMeta.jl - Metaprogramming tools for DataFrames
jettison - Highly configurable, fast JSON encoder for Go
sktime - A unified framework for machine learning with time series
json2go - Create go type representation from json