tidytable vs root

| | tidytable | root |
|---|---|---|
| Mentions | 26 | 33 |
| Stars | 455 | 2,761 |
| Growth (stars, month over month) | 1.1% | 1.9% |
| Activity | 7.9 | 10.0 |
| Latest commit | about 2 months ago | 5 days ago |
| Language | R | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tidytable
- Tidyverse 2.0.0
- fuzzyjoin - "Error in which(m) : argument to 'which' is not logical"
If you need speed, you should consider using dtplyr (or tidytable), or even dbplyr with duckdb.
- tidytable v0.10.0 is now on CRAN - use tidyverse-like syntax with data.table speed
What do you think of this instead?
- Offering several functions to create the same object in my package
Here's an example - I use this in a package I've built called tidytable. The as_tidytable() function I use relies on method dispatch.
- Dplyr performance issues (Late 2022)
If you're having performance issues with dplyr, you can also try out tidytable.
- R Dialects Broke Me
I'd say tidytable is a better option these days as it supports more functions. I think dtplyr has improved on this front recently, but it still lags. The author of tidytable contributes to dtplyr as well.
- Why is mlr3 so under-marketed?
I know you said it 'feels much faster', which isn't exactly a data-oriented comparison, but tidymodels performs very well. You can use dplyr functions as step_* equivalents in tidymodels, for example mutate vs. step_mutate in the recipes library. As an example, the author of tidytable, which uses data.table, made some revisions as a result of this conversation.
- Why is {dplyr} so huge, and are there any alternatives or a {dplyr} 'lite' that I can use for the basic mutate, group_by, summarize, etc?
Tidytable is what you might be looking for: https://markfairbanks.github.io/tidytable/. It will require a bit of refactoring (e.g. group-bys happen as arguments in summarise/mutate), but you'll get data.table-like speed in a very compact and complete package.
- Programming with R {dplyr}
People can also use tidytable and keep the same workflow they're already used to 😄
- tidytable v0.8.1 is on CRAN - it also comes with a new logo! Need data.table speed with tidyverse syntax? Check out tidytable.
root
- ICPP – Running C++ in anywhere like a script
Folks who like this kind of thing should definitely check out CERN's Root framework. I've been using its C++ interpreter in a Jupyter notebook environment to learn C++. It's probably also quite a bit more mature than this project. https://root.cern/
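For readers unfamiliar with that workflow, here is a minimal sketch (mine, not the commenter's) of the kind of cell you can evaluate in a cling-based C++ kernel such as the one ROOT ships for Jupyter; the variable names and values are made up:

```cpp
// Plain C++ typed into a single notebook cell - no main(), no explicit
// compile step; the interpreter evaluates it and prints the output below.
#include <vector>
#include <numeric>
#include <iostream>

std::vector<double> hits{1.2, 3.4, 2.2, 5.1};
double sum = std::accumulate(hits.begin(), hits.end(), 0.0);
std::cout << "mean = " << sum / hits.size() << std::endl;
```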
- CERN Root
- If you can't reproduce the model then it's not open-source
I think the process of data acquisition isn't so clear-cut. Take CERN as an example: they release loads of data from various experiments under the CC0 license [1]. This isn't just a few small datasets for classroom use; we're talking big-league data, like the entire first run data from LHCb [2].
On their portal, they don't just dump the data and leave you to it. They've got guides on analysis and the necessary tools (mostly open source stuff like ROOT [3] and even VMs). This means anyone can dive in. You could potentially discover something new or build on existing experiment analyses. This setup, with open data and tools, ticks the boxes for reproducibility. But does it mean people need to recreate the data themselves?
Ideally, yeah, but realistically, while you could theoretically rebuild the LHC (since most technical details are public), it would take an army of skilled people, billions of dollars, and years to do it.
This contrasts with open source models, where you can in principle retrain a model on the data to get the weights, but getting hold of the data and the cost of reproducing the weights are usually prohibitive. I get that CERN's approach might seem to counter this, but remember, they're not releasing the raw data (which is mostly noise) but a more refined version. Even if they did, good luck downloading several petabytes of raw data. And for training something like an LLM, you might need the whole dataset, which in many cases has its own problems with copyright, etc.
[1] https://opendata.cern.ch/docs/terms-of-use
[2] https://opendata.cern.ch/docs/lhcb-releases-entire-run1-data...
[3] https://root.cern/
- What software is used to generate plots/graphs like this seen in many particle physics papers?
- Interactive GCC (igcc) is a read-eval-print loop (REPL) for C/C++
The odd part is that this is not just for fun. When I was at CERN, a C++ REPL was a commonly used tool for many physicists to interactively debug analyses, to such a degree that many never compiled their code. Back then, I believe, it was some custom implementation included in ROOT (https://root.cern/). I even went out of my way to write C++ code compatible with it just so it could run in this implementation; otherwise some colleagues weren't interested in collaborating at all.
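As a rough illustration (not from the comment itself), this is the style of line-by-line session ROOT's interpreter allows at its prompt; the histogram, numbers, and fit below are purely made up:

```cpp
// Typed straight into the interactive "root [n]" prompt - nothing is
// compiled by hand, each line is evaluated as soon as it is entered.
root [0] TH1F h("h", "toy mass peak;m [GeV];events", 100, 80., 100.)
root [1] for (int i = 0; i < 10000; ++i) h.Fill(gRandom->Gaus(91.2, 2.5))
root [2] h.Fit("gaus")   // fit results are printed immediately
root [3] h.Draw()        // inspect the plot, tweak, re-run
```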
- Stable Diffusion in pure C/C++
That Python ML code is calling C++ code running on the GPU - one more reason to use C++ across the whole stack.
CERN already used prototyping in C++, with ROOT and CINT, 20 years ago.
https://root.cern/
Nowadays it is even usable from notebooks via xeus.
It is more a matter of lack of exposure to C++ interpreters than anything else.
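To make that macro-style prototyping concrete, here is a hypothetical sketch of a ROOT macro (file name, data, and fit are all invented for the example) that could be run uncompiled with `root -l calib.C` and only compiled later, e.g. via ACLiC, once it has settled:

```cpp
// calib.C - a made-up prototyping macro: edit, re-run, repeat.
#include "TGraphErrors.h"
#include "TF1.h"

void calib() {
    // toy calibration points: x, y, and y-errors
    double x[4]  = {1., 2., 3., 4.};
    double y[4]  = {2.1, 3.9, 6.2, 7.8};
    double ey[4] = {0.2, 0.2, 0.3, 0.3};

    TGraphErrors* g = new TGraphErrors(4, x, y, nullptr, ey);
    TF1* line = new TF1("line", "[0] + [1]*x", 0., 5.);
    g->Fit(line);     // least-squares fit, parameters printed to the terminal
    g->Draw("AP");    // draw axes and points; the fitted line is overlaid
}
```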
- Root: Analyzing Petabytes of Data, Scientifically
- Aliens might be waiting for humans to solve a puzzle
Quantum computing is a pretty interesting science too (https://home.cern/news/press-release/knowledge-sharing/cern-quantum-technology-initiative-unveils-strategic-roadmap), and they have to deal with lots of data streaming too (https://root.cern/).
- cppyy Generated Wrappers and Type Annotations
I'm a user of CERN's ROOT (https://root.cern/) and while I'd usually write in C++, I've been trying to write as much Python as I can recently to get a bit better in the language.
- Root: Analyzing Petabytes of Scientific Data
What are some alternatives?
dtplyr - Data table backend for dplyr
xeus - Implementation of the Jupyter kernel protocol in C++
tidypolars - Tidy interface to polars
ips4o - In-place Parallel Super Scalar Samplesort (IPS⁴o)
Tidier.jl - Meta-package for data analysis in Julia, modeled after the R tidyverse.
hep - hep is the mono repository holding all of go-hep.org/x/hep packages and tools
box - Write reusable, composable and modular R code
PyMesh - Geometry Processing Library for Python
tidyr - Tidy Messy Data
apd - Arbitrary-precision decimals for Go
extendr - R extension library for rust designed to be familiar to R users.
OpenGL-Particle-Motion - This project simulates the motion of electrons and protons using Coulomb's Law. The simulation is visually represented on-screen using OpenGL.