targets vs db-benchmark

| | targets | db-benchmark |
|---|---|---|
| Mentions | 10 | 91 |
| Stars | 869 | 320 |
| Growth | 1.6% | 0.0% |
| Activity | 9.6 | 0.0 |
| Latest Commit | 9 days ago | 10 months ago |
| Language | R | R |
| License | GNU General Public License v3.0 or later | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
targets
-
Advice on Best Practices
Is this it https://github.com/ropensci/targets?
-
Does anyone else feel in a tricky spot about their use of R?
I'll chime in with others to say that using targets can help with the memory load as well. If you partition your data adequately (e.g. grouping by subject), you can take advantage of the way targets maps over data, so it only loads what it needs. Moreover, if you use the memory = "transient" option, it will unload objects between steps, adding a little time overhead but saving you memory. targets and tidytable together have enabled me to work on pretty sizeable datasets while rarely running into memory issues. In fact, the only time memory became a problem was when I hadn't adequately partitioned the data across worker nodes.
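For concreteness, here is a minimal `_targets.R` sketch of that pattern; the file name, column names, and summary step are invented for illustration, not taken from the post:

```r
# _targets.R -- illustrative sketch; paths and columns are hypothetical.
library(targets)

# Drop each target from memory once downstream steps are done with it,
# trading a little reload time for a much smaller footprint.
tar_option_set(memory = "transient", garbage_collection = TRUE)

list(
  # The list of subject IDs to partition over.
  tar_target(
    subjects,
    unique(read.csv("data/measurements.csv")$subject_id)
  ),
  # Dynamic branching: each branch reads and processes one subject,
  # so only one partition is in memory at a time.
  tar_target(
    subject_data,
    subset(read.csv("data/measurements.csv"), subject_id == subjects),
    pattern = map(subjects)
  ),
  # Per-subject summaries, again computed branch by branch.
  tar_target(
    summaries,
    data.frame(subject = subjects, mean_value = mean(subject_data$value)),
    pattern = map(subjects, subject_data)
  )
)
```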
-
What are your favorite R Libraries?
targets
-
Is there a better way to update an entire series of scripts?
I highly recommend the holy grail of workflow orchestrators / executors in the R ecosystem: targets.
-
The new Drake (ropensci/targets): Function-oriented Make-like declarative workflows for R {R}
-
How do you manage, distribute and schedule jobs written in R?
That said, you might want to check out the ‘targets’ package, which provides a DSL for specifying complex workflows in R. When repeatedly running the same jobs on changing data, this package helps ensure that only the necessary work is performed (suitable intermediate results are reused) and that scripts run reproducibly. This might help with scheduling.
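As a rough sketch of that incremental behavior (the data file and model formula here are placeholders):

```r
# _targets.R -- hypothetical pipeline. On each targets::tar_make(),
# steps whose code and upstream data are unchanged are skipped.
library(targets)

list(
  tar_target(raw_file, "data/latest.csv", format = "file"),  # file is hashed and watched
  tar_target(raw, read.csv(raw_file)),
  tar_target(model, lm(y ~ x, data = raw)),
  tar_target(results, summary(model))
)
# If only data/latest.csv changes between runs, tar_make() recomputes
# raw, model, and results; if nothing changed, it recomputes nothing.
```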
-
How do I do something like this as a parallel programming in R?
It may be worth it to put these individual steps into a targets pipeline. targets is designed to support parallelization with future, and it makes it easier to visualize downstream dependencies.
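A sketch of what that pairing can look like; the worker count and the slow step are illustrative assumptions:

```r
# _targets.R -- sketch of a future-backed targets pipeline.
library(targets)
future::plan(future::multisession)  # parallel R sessions on this machine

slow_step <- function(i) { Sys.sleep(1); i^2 }  # stand-in for real work

list(
  tar_target(inputs, 1:8),
  # Independent branches are eligible to run on different workers.
  tar_target(results, slow_step(inputs), pattern = map(inputs))
)
# From the R console:
#   targets::tar_make_future(workers = 4)  # build with up to 4 parallel workers
#   targets::tar_visnetwork()              # inspect the dependency graph
```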
-
Tips re: workflow, organization, file hygiene and similar?
Given your requirements, I recommend you check out ‘targets’, which specifically addresses the needs of reusable workflows in R, and it seems like it fits your requirements to a T.
-
Your impression of {targets}? (r package)
The targets package is the official successor to drake and has the same primary author (Will Landau). He has explained why he created targets; his reasons include stronger guardrails for users and a better UX.
-
Data engineering with R?
I use targets as the workflow management software for ETL, and, like others, I have a cron job set up to run nightly builds.
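For reference, a nightly cron entry for such a setup might look like this; the schedule and paths are assumptions, not details from the post:

```
# crontab -e: rebuild the ETL pipeline at 02:00 every night.
# Assumes the targets project lives in /home/me/etl.
0 2 * * * cd /home/me/etl && Rscript -e 'targets::tar_make()' >> tar_make.log 2>&1
```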
db-benchmark
- Database-Like Ops Benchmark
-
Polars
Real-world performance is complicated since data science covers a lot of use cases.
If you're just reading a small CSV to do analysis on it, then there will be no human-perceptible difference between Polars and Pandas. If you're reading a larger CSV with 100k rows, there still won't be much of a perceptible difference.
Per this (old) benchmark, there are differences once you get into 500MB+ territory: https://h2oai.github.io/db-benchmark/
-
DuckDB performance improvements with the latest release
I do think it was important for duckdb to put out a new version of the results, as the earlier version of that benchmark [1] went dormant on a very old duckdb release that performed poorly, especially against polars.
[1] https://h2oai.github.io/db-benchmark/
-
Show HN: SimSIMD vs. SciPy: How AVX-512 and SVE make SIMD cleaner and ML faster
https://news.ycombinator.com/item?id=33270638 :
> Apache Ballista and Polars do Apache Arrow and SIMD.
> The Polars homepage links to the "Database-like ops benchmark" of {Polars, data.table, DataFrames.jl, ClickHouse, cuDF, spark, (py)datatable, dplyr, pandas, dask, Arrow, DuckDB, Modin,} but not yet PostgresML? https://h2oai.github.io/db-benchmark/ *
LLM -> Vector database: https://en.wikipedia.org/wiki/Vector_database
/? inurl:awesome site:github.com "vector database"
-
Pandas vs. Julia – cheat sheet and comparison
I agree with your conclusion but want to add that switching from Julia may not make sense either.
According to these benchmarks: https://h2oai.github.io/db-benchmark/, DF.jl is the fastest library for some things, data.table for others, polars for others. Which is fastest depends on the query and whether it takes advantage of the features/properties of each.
For what it's worth, data.table is my favourite to use and I believe it has the nicest ergonomics of the three I spoke about.
-
Any faster Python alternatives?
Same. Numba does wonders for me in most scenarios. Yesterday I discovered pola-rs, and it looks like I will add it to the stack. Its API is similar to pandas. Have a look at the benchmarks comparing it against cuDF, spark, dask, and pandas: Benchmarks
-
Pandas 2.0 (with pyarrow) vs Pandas 1.3 - Performance comparison
The syntax has similarities with dplyr in terms of the way you chain operations, and it’s around an order of magnitude faster than pandas and dplyr (there’s a nice benchmark here). It’s also more memory-efficient and can handle larger-than-memory datasets via streaming if needed.
-
Pandas v2.0 Released
If interested in benchmarks comparing different dataframe implementations, here is one:
https://h2oai.github.io/db-benchmark/
- Database-like ops benchmark
-
Python "programmers" when I show them how much faster their naive code runs when translated to C++ (this is a joke, I love python)
Bad examples. Both numpy and pandas are notoriously un-optimized packages, losing handily to pretty much all their competitors (R, Julia, kdb+, vaex, polars). See https://h2oai.github.io/db-benchmark/ for a partial comparison.
What are some alternatives?
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
drake - An R-focused pipeline toolkit for reproducibility and high-performance computing
datafusion - Apache DataFusion SQL Query Engine
awesome-pipeline - A curated list of awesome pipeline toolkits inspired by Awesome Sysadmin
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
tidyverse - Easily install and load packages from the tidyverse
databend - Data, Analytics & AI. Modern alternative to Snowflake. Cost-effective and simple for massive-scale analytics. https://databend.com
fastverse - An Extensible Suite of High-Performance and Low-Dependency Packages for Statistical Computing and Data Manipulation in R
sktime - A unified framework for machine learning with time series
targets-tutorial - Short course on the targets R package
DataFramesMeta.jl - Metaprogramming tools for DataFrames