nannyml vs pytest-visual

| | nannyml | pytest-visual |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 1,756 | 16 |
| Growth | 2.3% | - |
| Activity | 8.6 | 8.7 |
| Latest Commit | 2 days ago | 24 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
nannyml
- Introduction to NannyML: Model Evaluation without labels
To address this issue, NannyML was created. NannyML is an open-source Python library designed to make it easy to monitor drift in the distributions of a model's input variables and to estimate model performance (even without labels!) thanks to the Confidence-Based Performance Estimation (CBPE) algorithm its team developed. But first of all, why do models need to be monitored, and why might their performance vary over time?
- Detecting silent model failure. NannyML estimates performance for regression and classification models on tabular data and alerts you when performance changes and why. It is the only open-source library capable of fully capturing the impact of data drift on performance.
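The posts above describe estimating model performance without labels via CBPE. Below is a minimal sketch of what that looks like with NannyML's `CBPE` estimator; the file paths and column names (`y_pred_proba`, `y_pred`, `repaid`, `timestamp`) are hypothetical, and the constructor arguments reflect recent NannyML releases, so check the docs for your installed version.

```python
import pandas as pd
import nannyml as nml

# Hypothetical data: a labeled reference set (e.g. the test set from training
# time) and an unlabeled analysis set collected in production.
reference_df = pd.read_parquet("reference.parquet")  # includes ground-truth labels
analysis_df = pd.read_parquet("analysis.parquet")    # no labels available yet

# CBPE (Confidence-Based Performance Estimation) uses the model's own
# predicted probabilities to estimate metrics such as ROC AUC without labels.
estimator = nml.CBPE(
    y_pred_proba="y_pred_proba",           # column with predicted probabilities
    y_pred="y_pred",                       # column with predicted classes
    y_true="repaid",                       # label column (reference set only)
    timestamp_column_name="timestamp",
    problem_type="classification_binary",
    metrics=["roc_auc"],
    chunk_size=5000,                       # one estimate per 5,000-row chunk
)

estimator.fit(reference_df)                # calibrate on the labeled reference set
results = estimator.estimate(analysis_df)  # estimate performance in production
results.plot().show()                      # estimated ROC AUC with alert thresholds
```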
- [D] Data drift is not a good indicator of model performance degradation
But I may have it haha. What we propose in the blog post, instead of relying solely on data drift, is using performance estimation methods (e.g. https://github.com/NannyML). With these you can estimate the performance of the ML model without having access to ground truth.
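To make that argument concrete, here is a toy illustration in plain scikit-learn (the dataset and the shift are invented for the example): a feature the model has learned to ignore drifts heavily, yet accuracy is essentially unchanged, so univariate drift alone says little about degradation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000

# x0 carries all the signal; x1 is pure noise the model learns to ignore.
x0, x1 = rng.normal(size=n), rng.normal(size=n)
y = (x0 > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(np.column_stack([x0, x1]), y)

# "Production" data: x1 drifts by five standard deviations, x0 is unchanged.
x0_prod, x1_prod = rng.normal(size=n), rng.normal(loc=5.0, size=n)
y_prod = (x0_prod > 0).astype(int)

acc = accuracy_score(y_prod, model.predict(np.column_stack([x0_prod, x1_prod])))
print(f"accuracy after heavy drift in x1: {acc:.3f}")  # still close to 1.0
```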
- [HIRING][Full Time, Part Time, Temporary, Internship, Freelance] Data Science Intern (Remote)
Description: NannyML, creators of an open-source Python library, are looking for multiple Data Science interns to help across research, prototyping, and product. GitHub: https://github.com/NannyML/nannyml About Us: NannyML is an open-source Python lib …
- What do you think about Detecting Silent ML Failure with an Open Source Python library?
If you think this could add value to your daily life, check it out here: https://github.com/NannyML/nannyml.
- Can I estimate the impact of data drift on performance?
I found it implemented here: https://github.com/NannyML/nannyml
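For the drift side of that question, NannyML also ships drift detection that you can pair with the performance estimate above to see whether a drifting feature actually matters. Below is a minimal sketch using its univariate drift calculator; the column names are hypothetical, and the class and argument names have shifted between releases, so verify against the documentation.

```python
import pandas as pd
import nannyml as nml

reference_df = pd.read_parquet("reference.parquet")
analysis_df = pd.read_parquet("analysis.parquet")

# Hypothetical feature columns; method names follow recent NannyML releases.
calc = nml.UnivariateDriftCalculator(
    column_names=["loan_amount", "salary", "region"],
    timestamp_column_name="timestamp",
    continuous_methods=["kolmogorov_smirnov"],
    categorical_methods=["chi2"],
)

calc.fit(reference_df)                  # learn the reference distributions
results = calc.calculate(analysis_df)   # test each chunk of new data for drift
results.plot(kind="drift").show()       # per-feature drift results with alerts
```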
- Show HN: OSS Python library for detecting silent ML model failure
pytest-visual
- [P] Elevate Your ML Testing with pytest-visual
I’ve developed a tool called pytest-visual, aiming to make ML code testing more efficient and meaningful. Traditional unit testing often misses the visual and functional aspects of ML workflows, such as data augmentation and model structure.
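To make that gap concrete, here is a plain-pytest sketch of the kind of test the post is talking about, deliberately written without pytest-visual's own fixture (whose exact API is best taken from the project README): the assertions cover the machine-checkable properties of a toy augmentation, while the genuinely visual question (does the augmented image still look like a valid sample?) has no assertion at all. Surfacing that step for human review during the test run is what pytest-visual aims to add.

```python
import numpy as np

def horizontal_flip(image: np.ndarray) -> np.ndarray:
    """Toy augmentation: flip an HxWxC image left-to-right."""
    return image[:, ::-1, :]

def test_flip_preserves_shape_and_content():
    image = np.random.rand(32, 32, 3).astype(np.float32)
    out = horizontal_flip(image)

    # Properties a classic unit test CAN check:
    assert out.shape == image.shape
    assert out.dtype == image.dtype
    assert np.array_equal(out, image[:, ::-1, :])

    # What it CANNOT check: does the flipped image still look like a plausible
    # training sample? That judgment is visual; it is the part pytest-visual
    # is meant to surface for human review instead of leaving untested.
```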
What are some alternatives?
evidently - Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b
chitra - A multi-functional library for full-stack Deep Learning. Simplifies Model Building, API development, and Model Deployment.
cuttle-cli - Cuttle automates the transformation of your Python notebook into deployment-ready projects (API, ML pipeline, or just a Python script)
torchview - Visualize PyTorch models
deep-significance - Enabling easy statistical significance testing for deep neural networks.
dvclive - 📈 Log and track ML metrics, parameters, models with Git and/or DVC
barfi - Python Flow Based Programming environment that provides a graphical programming environment.
tfgraphviz - A visualization tool to show a TensorFlow's graph like TensorBoard
ydata-profiling - 1 Line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames.
receptive_field_analysis_toolbox - A toolbox for receptive field analysis and visualizing neural network architectures
eurybia - ⚓ Eurybia monitors model drift over time and secures model deployment with data validation
tf-explain - Interpretability Methods for tf.keras models with Tensorflow 2.x