Top 7 C++ Statistic Projects
- Vince's CSV Parser: A modern C++ library for reading, writing, and analyzing CSV (and similar) files. (by vincentlaucsb)
- jasp-desktop: JASP aims to be a complete statistical package for both Bayesian and frequentist methods that is easy to use and familiar to users of SPSS.
Project mention: If you can't reproduce the model then it's not open-source | news.ycombinator.com | 2024-01-17

I think the process of data acquisition isn't so clear-cut. Take CERN as an example: they release loads of data from various experiments under the CC0 license [1]. This isn't just a few small datasets for classroom use; we're talking big-league data, like the entire first run data from LHCb [2].
On their portal, they don't just dump the data and leave you to it. They've got guides on analysis and the necessary tools (mostly open source stuff like ROOT [3] and even VMs). This means anyone can dive in. You could potentially discover something new or build on existing experiment analyses. This setup, with open data and tools, ticks the boxes for reproducibility. But does it mean people need to recreate the data themselves?
Ideally, yeah, but realistically, while you could theoretically rebuild the LHC (since most technical details are public), it would take an army of skilled people, billions of dollars, and years to do it.
This contrasts with open-source models, where in principle you can retrain on the released data to reproduce the weights, but obtaining the data and covering the cost of retraining is usually prohibitive. I get that CERN's approach might seem to counter this, but remember, they're not releasing raw data (which is mostly noise) but a more refined version; otherwise you'd be downloading several petabytes, and good luck with that. For training something like an LLM, though, you might need the whole dataset, which in many cases has its own problems with copyright, etc.
[1] https://opendata.cern.ch/docs/terms-of-use
[2] https://opendata.cern.ch/docs/lhcb-releases-entire-run1-data...
[3] https://root.cern/
If you're going to use the VineCopula R package, its manual is here. However, that package is no longer maintained, so as the other commenter noted, rvinecopulib (from the same author) is also an option, and its documentation is here and here.
C++ Statistics related posts
- JASP – A Fresh Way to Do Statistics
- Analytics: Hacker News vs. a tweet from Elon Musk
- Highly used R packages with no Python equivalent
- How to self-host a privacy-respecting analytics solution?
- Volesti – High dimensional sampling and volume computation
Index
What are some of the best open-source Statistic projects in C++? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | root | 2,418 |
| 2 | Vince's CSV Parser | 824 |
| 3 | jasp-desktop | 724 |
| 4 | volesti | 139 |
| 5 | rvinecopulib | 33 |
| 6 | Statistic | 26 |
| 7 | vstat | 16 |