collapse vs db-benchmark
| | collapse | db-benchmark |
|---|---|---|
| Mentions | 2 | 91 |
| Stars | 599 | 319 |
| Stars growth (month over month) | - | 0.9% |
| Activity | 9.6 | 0.0 |
| Latest commit | 4 days ago | 10 months ago |
| Language | C | R |
| License | GNU General Public License v3.0 or later | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
collapse
-
Is there a package using data.table that provides functions for descriptive stats, missingness, etc.?
The question is a little unclear. You might be interested in collapse and, more generally, in other packages in the fastverse. It's also worth pointing out that data.table already provides alternative methods for certain base R descriptive stats functions (e.g., mean) that are automatically used when applied to data.tables. A minimal sketch of what collapse offers here is shown below.
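To make that suggestion concrete, here is a minimal sketch of descriptive stats and missingness checks with collapse on a data.table; the iris-based data and the particular function choices are illustrative, not taken from the original post:
```r
# A minimal sketch, assuming the collapse and data.table packages are installed;
# the iris-based example data is purely illustrative.
library(data.table)
library(collapse)

dt <- as.data.table(iris)
dt[c(2, 5), Sepal.Length := NA_real_]   # introduce some missing values

descr(dt)                               # detailed descriptive statistics per column
qsu(num_vars(dt))                       # quick summary (N, mean, SD, min, max) of numeric columns
fnobs(dt)                               # count of non-missing observations per column
fmean(num_vars(dt), g = dt$Species)     # grouped means; NAs are skipped by default
```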
-
Benchmarking for loops vs apply and others
If you are looking for performance, I would recommend checking out the collapse package. The line "collapse" = collapse::fsum(df_datatable$x, g = df_datatable$g) is around 2x faster than base::rowsum, and the dplyr-style syntax doesn't add much overhead: "collapse dplyr" = df_datatable |> fgroup_by(g) |> fsum(x)
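A self-contained version of that comparison could look roughly like the following; the generated data, the use of the bench package for timing, and the fsummarise spelling of the dplyr-style call are illustrative choices rather than the original poster's setup:
```r
# A rough sketch of the comparison described above; timings will vary by machine.
library(collapse)
library(bench)

set.seed(1)
df_datatable <- data.frame(
  g = sample(letters, 1e6, replace = TRUE),   # grouping variable
  x = rnorm(1e6)                              # values to sum within each group
)

bench::mark(
  "base rowsum"    = rowsum(df_datatable$x, df_datatable$g),
  "collapse"       = fsum(df_datatable$x, g = df_datatable$g),
  "collapse dplyr" = df_datatable |> fgroup_by(g) |> fsummarise(x = fsum(x)),
  check = FALSE    # the three calls return differently shaped objects; compare timings only
)
```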
db-benchmark
- Database-Like Ops Benchmark
-
Polars
Real-world performance is complicated since data science covers a lot of use cases.
If you're just reading a small CSV to do analysis on it, then there will be no human-perceptible difference between Polars and Pandas. If you're reading a larger CSV with 100k rows, there still won't be much of a perceptible difference.
Per this (old) benchmark, there are differences once you get into 500MB+ territory: https://h2oai.github.io/db-benchmark/
-
DuckDB performance improvements with the latest release
I do think it was important for DuckDB to put out a new version of the results, as the earlier version of that benchmark [1] went dormant with a very old version of DuckDB that performed very badly, especially against Polars.
[1] https://h2oai.github.io/db-benchmark/
-
Show HN: SimSIMD vs. SciPy: How AVX-512 and SVE make SIMD cleaner and ML faster
https://news.ycombinator.com/item?id=33270638 :
> Apache Ballista and Polars do Apache Arrow and SIMD.
> The Polars homepage links to the "Database-like ops benchmark" of {Polars, data.table, DataFrames.jl, ClickHouse, cuDF, spark, (py)datatable, dplyr, pandas, dask, Arrow, DuckDB, Modin,} but not yet PostgresML? https://h2oai.github.io/db-benchmark/ *
LLM -> Vector database: https://en.wikipedia.org/wiki/Vector_database
/? inurl:awesome site:github.com "vector database"
-
Pandas vs. Julia - cheat sheet and comparison
I agree with your conclusion but want to add that switching from Julia may not make sense either.
According to these benchmarks: https://h2oai.github.io/db-benchmark/, DF.jl is the fastest library for some things, data.table for others, polars for others. Which is fastest depends on the query and whether it takes advantage of the features/properties of each.
For what it's worth, data.table is my favourite to use and I believe it has the nicest ergonomics of the three I spoke about.
-
Any faster Python alternatives?
Same. Numba does wonders for me in most scenarios. Yesterday I discovered pola-rs, and it looks like I will add it to the stack. Its API is similar to pandas. Have a look at the benchmarks of cuDF, spark, dask, and pandas compared to it: Benchmarks
-
Pandas 2.0 (with pyarrow) vs Pandas 1.3 - Performance comparison
The syntax has similarities with dplyr in terms of the way you chain operations, and it's around an order of magnitude faster than pandas and dplyr (there's a nice benchmark here). It's also more memory-efficient and can handle larger-than-memory datasets via streaming if needed.
-
Pandas v2.0 Released
If interested in benchmarks comparing different dataframe implementations, here is one:
https://h2oai.github.io/db-benchmark/
- Database-like ops benchmark
-
Python "programmers" when I show them how much faster their naive code runs when translated to C++ (this is a joke, I love python)
Bad examples. Both numpy and pandas are notoriously unoptimized packages, losing handily to pretty much all their competitors (R, Julia, kdb+, vaex, polars). See https://h2oai.github.io/db-benchmark/ for a partial comparison.
What are some alternatives?
fastverse - An Extensible Suite of High-Performance and Low-Dependency Packages for Statistical Computing and Data Manipulation in R
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
writexl - Portable, light-weight data frame to xlsx exporter for R
arrow-datafusion - Apache DataFusion SQL Query Engine
epanet2toolkit - An R package for calling the Epanet software for simulation of piping networks.
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
priceR - Economics and Pricing in R
databend - Data, Analytics & AI. Modern alternative to Snowflake. Cost-effective and simple for massive-scale analytics. https://databend.com
bruceR - BRoadly Useful Convenient and Efficient R functions that BRing Users Concise and Elegant R data analyses.
DataFramesMeta.jl - Metaprogramming tools for DataFrames
tableone - R package to create "Table 1", description of baseline characteristics with or without propensity score weighting
sktime - A unified framework for machine learning with time series