eurybia vs shapash
| | eurybia | shapash |
|---|---|---|
| Mentions | 3 | 8 |
| Stars | 203 | 2,642 |
| Growth | 3.0% | 1.3% |
| Activity | 5.1 | 8.6 |
| Latest commit | about 1 month ago | 29 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
eurybia
- State of the Art data drift libraries on Python?
Try out eurybia, from the author of shapash, which is a brilliant library as well.
- Providing ML team with data: normalized or denormalized?
Your data scientists will cook up ugly bits of code to prepare their training data; you'll probably have to rewrite that when they want to ship to prod, and also detect and handle discrepancies. In that regard, it sounds like you may enjoy Eurybia to communicate about this data with your data scientists. We made it precisely for that.
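For a sense of what that looks like in practice, here is a minimal sketch based on the SmartDrift API shown in Eurybia's documentation; the file names and DataFrames are hypothetical:

```python
import pandas as pd
from eurybia import SmartDrift

# Hypothetical inputs: the data the model was trained on,
# and the data it now sees in production.
df_baseline = pd.read_csv("training_data.csv")
df_current = pd.read_csv("production_data.csv")

# Eurybia trains a "datadrift classifier" to tell the two datasets
# apart; the better it discriminates, the stronger the drift.
sd = SmartDrift(df_current=df_current, df_baseline=df_baseline)
sd.compile(full_validation=True)

# Standalone HTML report to share with the data scientists.
sd.generate_report(
    output_file="data_drift_report.html",
    title_story="Training vs. production data",
)
```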
- Advice on a Data Quality framework
So we just trained a model to try and do the same, and then sort of read its entrails through Shapash. The more it can tell the difference, the more your data has changed. We can know which variable has changed the most, and how much it matters to our models.

If all else fails (and also if all else works), we can still know (again, this is all quantified in some way; we need numbers, not eyeballing) how much our models' predictions have evolved over time, independently of particular data changes, legit or not. How can our models' predictions change if the data is all clean, you ask? I mean, I asked, but you would have too, in my shoes. What lies beyond data engineering? What is the meaning of life? The answer is concept drift, and that's what we're starting to work on now that we have a good grasp on data drift.

Anyways, the tool is Eurybia. If any part of my ramblings resembles some of your work, please give it a try and chat us up here or through the repo. We are of course very eager to get feedback and possibly even contributions, who knows. See ya!
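The "train a model to tell the difference and read its entrails" idea the comment describes can be hand-rolled to get the intuition. This is an illustrative sketch with scikit-learn, not Eurybia's actual implementation, and it assumes two all-numeric DataFrames loaded from hypothetical files:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical datasets: label each row by which dataset it came from.
df_old = pd.read_csv("baseline.csv")
df_new = pd.read_csv("current.csv")
X = pd.concat([df_old, df_new], ignore_index=True)
y = [0] * len(df_old) + [1] * len(df_new)

# If the classifier beats chance (AUC well above 0.5), the two
# datasets are distinguishable, i.e. the data has drifted.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"drift AUC: {auc:.3f}")

# Reading the entrails: feature importances point at the variables
# that changed the most (Eurybia delegates this part to Shapash).
clf.fit(X, y)
ranked = sorted(zip(X.columns, clf.feature_importances_),
                key=lambda t: -t[1])
for name, importance in ranked[:5]:
    print(name, round(importance, 3))
```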
shapash
- GitHub - MAIF/shapash: Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
- [D] DL Practitioners, Do You Use Layer Visualization Tools s.a GradCam in Your Process?
- This A.I.-generated artwork, Théâtre D'opéra Spatial, won first place at an art competition, and the art community isn't happy about it
There's work being done in that regard (like this python module), but as far as I know it's very clearly statistical guesstimates, and though it "works", the mathematical foundations are still somewhat shaky. There are heuristics in there we can't get rid of for now. But it's still better than nothing. Waaaaaay better than nothing.
- Hacker News top posts: Jun 14, 2022
Shapash – Python library to make machine learning interpretable (4 comments)
- Shapash – Python library to make machine learning interpretable
- State of the Art data drift libraries on Python?
Try out eurybia, from the author of shapash, which is a brilliant library as well.
- [P] It Is Now Possible To Generate a Model Audit Report with Shapash
With the new version of Shapash now available, you can document each model you release into production. Within a few lines of code, you can include in an HTML report all the information about your model (and its associated performance), the data it uses, its learning strategy, … This report is designed to be easily shared with a Data Protection Officer, an internal audit department, a risk control department, a compliance department, or anyone who wants to understand your work.
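Roughly, generating such a report looks like the sketch below in recent Shapash versions. The toy dataset and model are stand-ins so the example runs end to end, and `project_info.yml` is a metadata file you would write yourself; parameter names follow Shapash's documented `generate_report`:

```python
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from shapash import SmartExplainer

# Toy setup so the example is runnable end to end.
data = load_diabetes(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestRegressor(n_estimators=50).fit(X_train, y_train)

xpl = SmartExplainer(model=model)
xpl.compile(x=X_test)

# Bundle model, data and performance into one standalone HTML file;
# project_info.yml holds the project metadata shown in the report.
xpl.generate_report(
    output_file="model_audit_report.html",
    project_info_file="project_info.yml",
    x_train=X_train,
    y_train=y_train,
    y_test=y_test,
    metrics=[{"path": "sklearn.metrics.mean_absolute_error",
              "name": "Mean absolute error"}],
)
```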
- [D] Has anyone ever used the SHAP and LIME models in machine learning?
What are some alternatives?
evidently - Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b
shap - A game theoretic approach to explain the output of any machine learning model.
nannyml - Post-deployment data science in Python
interpret - Fit interpretable models. Explain blackbox machine learning.
TensorFlow-Examples - TensorFlow Tutorial and Examples for Beginners (support TF v1 & v2)
LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)
Made-With-ML - Learn how to design, develop, deploy and iterate on production-grade ML applications.
GlassCode - This plugin allows you to make JetBrains IDEs fully transparent while keeping the code sharp and bright.
ML-For-Beginners - 12 weeks, 26 lessons, 52 quizzes, classic Machine Learning for all
trulens - Evaluation and Tracking for LLM Experiments
CARLA - CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
deepchecks - Deepchecks: Tests for Continuous Validation of ML Models & Data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.