evidently vs django-ninja
| | evidently | django-ninja |
|---|---|---|
| Mentions | 10 | 70 |
| Stars | 4,644 | 6,197 |
| Growth | 4.4% | - |
| Activity | 9.5 | 9.1 |
| Latest commit | 1 day ago | 4 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
evidently
-
[P] Free open-source ML observability course: starts October 16 🚀
Hi everyone, I’m one of the creators of Evidently, an open-source (Apache 2.0) tool for production ML monitoring. We’ve just launched a free open course on ML observability that I wanted to share with the community.
-
Free Open-source ML observability course
Evidently itself is an open-source ML monitoring tool with 3M+ downloads, so it's fairly popular: https://github.com/evidentlyai/evidently. The course will feature it, along with other OSS tools like MLflow and Grafana.
Disclaimer: I am one of the people working on Evidently.
-
Batch ML deployment and monitoring blueprint using open-source
Repo: https://github.com/evidentlyai/evidently/tree/main/examples/integrations/postgres_grafana_batch_monitoring
- Looking for recommendations to monitor / detect data drifts over time
- evidently: Evaluate and monitor ML models from validation to production
-
State of the Art data drift libraries on Python?
Thank you for your answer. I'm trying it today, along with the other libraries mentioned and https://github.com/evidentlyai/evidently
-
Package for drift detection
evidently: https://github.com/evidentlyai/evidently
-
The hand-picked selection of the best Python libraries released in 2021
Evidently.
-
[D] 5 considerations for Deploying Machine Learning Models in Production – what did I miss?
Consideration #5: For model observability, look to Evidently.ai, Arize.ai, Arthur.ai, Fiddler.ai, Valohai.com, or whylabs.ai.
-
Launch HN: Evidently AI (YC S21) – Track and Debug ML Models in Production
Hi HN, we are Evidently AI http://evidentlyai.com. We're building monitoring for machine learning models in production. The tool is open source and available on GitHub: https://github.com/evidentlyai/evidently. You can use it locally in a Jupyter notebook or in a Bash shell. There’s a video showing how it works in Jupyter here: https://www.youtube.com/watch?v=NPtTKYxm524.
Machine learning models can stop working as expected, often for non-obvious reasons. If this happens to a marketing personalization model, you might spam your customers by mistake. If this happens to a credit scoring model, you might face legal and reputational risks. And so on. To catch issues with a model, it is not enough to just look at service metrics like latency. You have to track data quality, data drift (did the inputs change too much?), underperforming segments (does the model fail only for users in a certain region?), model metrics (accuracy, ROC AUC, mean error), and more.
Emeli and I have been friends for many years. We first met when we both worked at Yandex (the company behind CatBoost and ClickHouse), creating ML systems for large enterprises. We then co-founded a startup focused on ML for manufacturing. Overall we've worked on more than 50 real-world ML projects, from e-commerce recommendations to steel production optimization. We faced the monitoring problem ourselves when we put models in production and had to build custom dashboards. Emeli is also an ML instructor on Coursera (co-author of the most popular ML course in Russian) and of a number of offline courses. She knows first-hand how often data scientists implement the same things over and over. There is no reason why everyone should have to build their own version of something like drift detection.
We spent a couple of months talking to ML teams from different industries. We learned that there are no good, standard solutions for model monitoring. Some told us horror stories about broken models that went unnoticed and led to $100K+ in losses. Others showed us home-grown dashboards and complained that they are hard to maintain. Some said they simply have a recurring task to look at the logs once per month, and often catch issues late. It is surprising how often models are not monitored until the first failure: many teams told us they only started thinking about monitoring after their first breakdown. Some never do, and failures go undetected.
If you want to calculate a couple of performance metrics on top of your data, it is easy to do ad hoc. But if you want stable visibility into different models, you need to consider edge cases, choose the right statistical tests and implement them, design visuals, define thresholds for alerts, and so on. That is a harder problem, combining statistics and engineering. Beyond that, monitoring often involves sharing the results with different teams, from domain experts to developers. In practice, data scientists often end up sharing screenshots of their plots and sending files here and there. Building a maintainable software system that supports these workflows is a project in itself, and machine learning teams usually do not have the time or resources for it.
Since there is no standard open-source solution, we decided to build one. We want to automate as much as possible to help people focus on the modeling work that matters, not boilerplate code.
Our main tool is an open-source Python library that generates interactive reports on ML model performance. To generate a report, you provide the model logs (input features, predictions, and ground truth if available) and reference data (usually from training). Then you choose the report type and we generate a set of dashboards. We have pre-built several reports to detect things like data drift and prediction drift, visualize performance metrics, and help understand where the model makes errors. We can display these in a Jupyter notebook or export them as HTML. We can also generate a JSON profile instead of a report. You can then integrate this output with any external tool (like Grafana) and build whatever workflow you want to trigger retraining or alerts.
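The Python API has evolved since this launch, but to make the workflow concrete, here is a minimal sketch using a recent Evidently release (the file names are hypothetical):

```python
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Reference data from training and current data from production logs (hypothetical files)
reference = pd.read_csv("reference.csv")
current = pd.read_csv("production_logs.csv")

# Build a data drift report comparing the two datasets
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)

report.show()                           # render interactively in a Jupyter notebook
report.save_html("drift_report.html")   # or export a standalone HTML file to share
drift_summary = report.json()           # or get JSON to feed Grafana, alerts, or retraining jobs
```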
Under the hood, we perform the needed calculations (e.g. a Kolmogorov-Smirnov or chi-squared test to detect drift) and generate multiple interactive tables and plots (using Plotly on the backend). Right now it works with tabular data only. In the future, we plan to add more data types and reports, and make it easier to customize metrics. Our goal is to make it dead easy to understand all aspects of model performance and monitor them.
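This is not Evidently's exact implementation, but the core idea of a per-feature drift check can be sketched with scipy.stats (the feature values and category counts below are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)  # a numerical feature in the reference data
current = rng.normal(loc=0.3, scale=1.0, size=1000)    # the same feature in production

# Two-sample Kolmogorov-Smirnov test: a low p-value suggests the distributions differ
ks_stat, p_value = stats.ks_2samp(reference, current)
print(f"KS statistic={ks_stat:.3f}, p-value={p_value:.4f}, drift={p_value < 0.05}")

# For a categorical feature, a chi-squared test on the per-category counts is typical
ref_counts = [480, 320, 200]  # category counts in the reference data
cur_counts = [350, 380, 270]  # category counts in production
chi2, p_cat, dof, expected = stats.chi2_contingency(np.vstack([ref_counts, cur_counts]))
print(f"chi2={chi2:.2f}, p-value={p_cat:.4f}, drift={p_cat < 0.05}")
```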
We differ from other approaches in a couple of ways. There are end-to-end ML platforms on the market that include monitoring features. These work for teams who are ready to trade flexibility in order to have an all-in-one tool. But most teams we spoke to have custom needs and prefer to build their own platform from open components. We want to create a tool that does one thing well and is easy to integrate with whatever stack you use. There are also some proprietary ML monitoring solutions on the market, but we believe that tools like these should be open, transparent, and available for self-hosting. That is why we are building it as open source.
We launched under Apache 2.0 license so that everyone can use the tool. For now, our focus is to get adoption for the open-source project. We don’t plan to charge individual users or small teams. We believe that the open-source project should remain open and be highly valuable. Later on, we plan to make money by providing a hosted cloud version for teams that do not want to run it themselves. We're also considering an open-core business model where we charge for features that large companies care about like single sign-on, security and audits.
If you work at a tech company, you might think that many ML infra problems are already solved. But in more traditional industries like manufacturing, retail, and finance, ML adoption is only just taking off. Their ML needs and environments are often very different, due to legacy IT systems, regulations, and the types of use cases they work with. Now that many are moving from ML proof-of-concept projects to production, they will need tools that help run their models reliably.
We are super excited to share this early release, and we'd love it if you could give it a try: https://github.com/evidentlyai/evidently. If you run models in production, let us know how you monitor them and whether anything is missing. If you need help testing the tool, we're happy to chat! We want to build this open-source project together with the community, and it is very important for us to hear your thoughts and feedback.
django-ninja
-
Ask HN: What Underrated Open Source Project Deserves More Recognition?
Django Ninja [1]: it forever changed how I write Django projects, in a way that is so elegant and productive.
[1]: https://django-ninja.dev/
- Django Ninja is a web framework for building APIs with Django
-
UtilMeta Python Framework VS django-ninja - a user suggested alternative
2 projects | 3 Feb 2024
Django Ninja is a RESTful wrapper for Django, while the UtilMeta Python Framework uses a more concise declarative ORM schema (for Django today, with support planned for other ORMs such as SQLAlchemy and Peewee) to build RESTful APIs more efficiently. It supports not only Django but all mainstream Python frameworks, including Flask, Starlette, FastAPI, Sanic, and Tornado.
- Django Ninja
-
Ask HN: What Python libraries do you wish more people knew about?
I can't recommend [django-ninja](https://github.com/vitalik/django-ninja) enough. It's an easy-to-use, extremely fast, typed API framework for Django. I've found it to be better in almost all respects than djangorestframework.
It's gaining popularity but is still relatively unknown.
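As a rough illustration of that typed, decorator-based style (the endpoint and schema names below are invented for the example, not taken from the comment):

```python
# api.py -- a minimal django-ninja API with typed parameters and a typed response schema
from ninja import NinjaAPI, Schema

api = NinjaAPI()  # once mounted, interactive OpenAPI docs are served at /api/docs

class GreetingOut(Schema):
    message: str
    times: int

@api.get("/hello", response=GreetingOut)
def hello(request, name: str, times: int = 1):
    # Query parameters are parsed and validated from the type hints
    return {"message": f"Hello, {name}!", "times": times}

# urls.py -- mount the API in a regular Django URLconf
from django.urls import path

urlpatterns = [path("api/", api.urls)]
```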
-
Building a Blog in Django
> The only place I really see Django at large companies is as an api using DRF or something.
This is not a bad thing. Using Django as an API backend is amazingly fast in terms of development time, especially with modern frameworks such as django-ninja [1].
Just use the built-in ORM to create models, write your endpoints, and use the built-in admin interface to play with the database if you don't have endpoints for everything.
There is also a lesser-known feature of Django called admindocs [2], which automatically generates human-readable, hyperlinked documentation for your models and the relations between them.
[1] https://django-ninja.rest-framework.com/
[2] https://docs.djangoproject.com/en/4.2/ref/contrib/admin/admi...
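A hypothetical sketch of the workflow described in the comment above (the model, fields, and paths are invented): an ORM model, a django-ninja endpoint that queries it, and admindocs enabled alongside the admin:

```python
# models.py -- a hypothetical blog model managed by Django's ORM
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published_at = models.DateTimeField(auto_now_add=True)

# api.py -- expose the model through a django-ninja endpoint
from datetime import datetime
from typing import List

from ninja import NinjaAPI, Schema

api = NinjaAPI()

class PostOut(Schema):
    id: int
    title: str
    published_at: datetime

@api.get("/posts", response=List[PostOut])
def list_posts(request):
    return Post.objects.all()  # ORM objects are serialized through the response schema

# settings.py -- enable admindocs next to the admin (it also needs the docutils package)
INSTALLED_APPS = [
    # ...
    "django.contrib.admin",
    "django.contrib.admindocs",
]

# urls.py -- include the admindocs URLs before the admin URLs
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path("admin/doc/", include("django.contrib.admindocs.urls")),
    path("admin/", admin.site.urls),
    path("api/", api.urls),
]
```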
-
Learning Django
Personally, I also prefer django-ninja to DRF.
-
Why I chose django-ninja instead of django-rest-framework to build my project
Actually, that's not fully true. If you mix async and sync code in django-ninja, there will be some errors. Where's the proof? django-ninja doesn't support async auth.
-
Built This GPT-Powered Document Search and Question Answering App with Django
Subscribe to this issue :D
-
Django 4.2 released
Also recommend Django Ninja. It basically reimplements FastAPI's type- and decorator-based API construction, but embedded directly in Django, so you have access to Django's ORM and middleware.
What are some alternatives?
great_expectations - Always know what to expect from your data.
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
django-rest-framework - Web APIs for Django. 🎸
MLflow - Open source platform for the machine learning lifecycle
fastapi-admin - A fast admin dashboard based on FastAPI and TortoiseORM with tabler ui, inspired by Django admin
whylogs - An open-source data logging library for machine learning models and data pipelines. 📚 Provides visibility into data quality & model performance over time. 🛡️ Supports privacy-preserving data collection, ensuring safety & robustness. 📈
drf-spectacular - Sane and flexible OpenAPI 3 schema generation for Django REST framework.
ydata-profiling - 1 Line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames.
openapi-generator - OpenAPI Generator allows generation of API client libraries (SDK generation), server stubs, documentation and configuration automatically given an OpenAPI Spec (v2, v3)
dvc - 🦉 ML Experiments and Data Management with Git
cookiecutter-django - Cookiecutter Django is a framework for jumpstarting production-ready Django projects quickly.