HumesGuillotine
dowhy
| | HumesGuillotine | dowhy |
|---|---|---|
| Mentions | 3 | 8 |
| Stars | 1 | 6,781 |
| Growth | - | 1.7% |
| Activity | 4.8 | 8.8 |
| Last Commit | about 1 month ago | 1 day ago |
| Language | Python | |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
HumesGuillotine
-
Learning Universal Predictors
As the guy who suggested to Marcus a lossless compression prize to replace the Turing Test, I've got to confess that all this pedantic sophistry "critiquing" algorithmic information is there for a good reason. In the immortal words of Mel Brooks: "We've got to protect our phoney baloney jobs gentlemen!"
https://youtu.be/bpJNmkB36nE
There is actually more at stake here than machine learning. This gets to the root of "bias" in the scientific method. Imagine what horrors, what risks, what chaos would be ours if a truly objective information criterion for causal model selection were to exist! Why, virtually every "sociologist" would be hauled to Hume's Guillotine in a Reign of Terror!
https://github.com/jabowery/HumesGuillotine
But to be clear, Marcus and I have a disagreement about the pragmatics of such an approach to dispute processing in the natural sciences. He believes, for example, that the dispute over climate change should be handled by the standard processes in place within academia. My approach differs, based on my hard-won experience with reforming institutional incentives:
https://jimbowery.blogspot.com/2018/04/necessity-and-incenti...
When it comes to multi-trillion-dollar scientific questions, the conflicts of interest become so intense that you really need to apply a gold standard for objectivity, and that is a single number: how big is your executable archive of the data in evidence?
While I understand the machine learning world looms as a rival to "unbiased" academic research, it nevertheless remains true that even in this emerging "marketplace of ideas" there is no formal definition of "bias" that disciplines discourse and thereby guides development at the institutional, let alone technical, level. Everyone is weighing in with fuzzy notions of "bias" that betray intense motivations, when there has been, for over 50 years, a very clear and present mathematical definition.
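The "size of your executable archive" criterion can be sketched in a few lines: treat each candidate model as a decompressor and score it by its own description length plus the compressed residuals it leaves behind. Everything below is illustrative, not from the HumesGuillotine repo — the toy data, the `archive_size` helper, and zlib standing in (crudely) for a real self-extracting archive.

```python
import zlib

# Toy data: a linear trend plus small pseudo-noise
xs = list(range(1000))
ys = [3 * x + (x * 7919) % 5 for x in xs]

def archive_size(residuals, model_desc: bytes) -> int:
    # Crude MDL score: bytes to describe the model, plus the
    # zlib-compressed residuals (a stand-in for an executable archive)
    blob = ",".join(map(str, residuals)).encode()
    return len(model_desc) + len(zlib.compress(blob, 9))

# Model A: "the data is just a list" -> residuals are the raw values
score_a = archive_size(ys, b"identity")

# Model B: "y = 3x" -> residuals are small and compress far better
score_b = archive_size([y - 3 * x for x, y in zip(xs, ys)], b"y=3*x")

print(score_b < score_a)  # the better model yields the smaller archive
```

Note that zlib's compressed size is only an upper bound on the true algorithmic information, so this ranks models rather than measuring their complexity exactly.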
-
Elon Musk proposes that a new version of quantum mechanics/cosmology will be derived, possibly using his artificial intelligence venture, xAI.
See Hume's Guillotine on GitHub for what Musk should be pursuing.
-
Market price of power, as produced by the Suncell, will not be very low for a long time
This is one of the reasons I've been advocating a philanthropic prize for macrosocial modeling: Ockham's Guillotine: Beheading the social pseudosciences.
dowhy
-
Causality for Machine Learning
I'm a fan of the DoWhy library out of Microsoft. Even as a novice in the field of causal modeling, it can get you up and running by estimating the causal graph from your data. https://github.com/py-why/dowhy
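To make concrete the kind of backdoor adjustment DoWhy's estimators automate, here is a dependency-free sketch on simulated data. The variable names and data-generating process are made up for illustration; see the DoWhy tutorials for the library's actual `CausalModel` API.

```python
import random

random.seed(0)

# Simulate a confounded dataset: Z -> T, Z -> Y, and T -> Y (true effect = 2.0)
data = []
for _ in range(20000):
    z = random.random() < 0.5                   # binary confounder
    t = random.random() < (0.8 if z else 0.2)   # treatment depends on Z
    y = 2.0 * t + 3.0 * z + random.gauss(0, 0.1)
    data.append((z, t, y))

def mean_y(rows):
    return sum(r[2] for r in rows) / len(rows)

# Naive difference in means is biased upward by the confounder Z
naive = mean_y([r for r in data if r[1]]) - mean_y([r for r in data if not r[1]])

# Backdoor adjustment: estimate the effect within each stratum of Z,
# then average over P(Z)
adjusted = 0.0
for z_val in (False, True):
    stratum = [r for r in data if r[0] == z_val]
    p_z = len(stratum) / len(data)
    treated = [r for r in stratum if r[1]]
    control = [r for r in stratum if not r[1]]
    adjusted += p_z * (mean_y(treated) - mean_y(control))

print(naive, adjusted)  # naive is inflated by confounding; adjusted is close to 2.0
```

DoWhy wraps this pattern (and many richer estimators) behind `identify_effect`/`estimate_effect`, plus refutation tests that this hand-rolled version lacks.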
-
Acceptable data formats for Predictive Stepwise Logistic Regression
Considering how well understood the generating process is, causal analysis could potentially be very powerful here, and it would model the "not every possible combination of variables is represented" component extremely naturally. https://github.com/py-why/dowhy
-
Do you use any specific framework when it comes to causal inference?
The DoWhy package could be useful.
-
Causal Explanations Considered Harmful: On the logical fallacy of causal projection
Here's one from Microsoft! https://github.com/py-why/dowhy
-
[Q] What are some of the most useful topics/classes in philosophy for Statistics?
Before those discussions, it's good to understand the very basics of the topic so you 1) demonstrate momentum to the prof, and 2) have the basis for a meaningful discussion. For causal reasoning, check out Pearl's book Causal Inference in Statistics: A Primer, which is short and readable. Definitely check out the DoWhy Python package, which has good tutorials and videos.
- [R] DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models
- DoWhy is a Python library for causal inference
What are some alternatives?
causalnex - A Python library that helps data scientists to infer causation rather than observing correlation.
Rath - Next generation of automated data exploratory analysis and visualization platform.
looper - A resource list for causality in statistics, data science and physics
pgmpy - Python Library for learning (Structure and Parameter), inference (Probabilistic and Causal), and simulations in Bayesian Networks.
causal-learn - Causal Discovery in Python. It also includes (conditional) independence tests and score functions.
causalgraph - A python package for modeling, persisting and visualizing causal graphs embedded in knowledge graphs.
Eliot - Eliot: the logging system that tells you *why* it happened
CausalPy - A Python package for causal inference in quasi-experimental settings
Causality