seldon-core
alibi-detect
| | seldon-core | alibi-detect |
|---|---|---|
| Mentions | 14 | 9 |
| Stars | 4,212 | 2,082 |
| Growth | 1.7% | 2.3% |
| Activity | 7.8 | 7.6 |
| Latest commit | 5 days ago | 10 days ago |
| Language | HTML | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seldon-core
- seldon-core VS MLDrop - a user suggested alternative
2 projects | 20 Feb 2023
- [D] Feedback on a worked Continuous Deployment Example (CI/CD/CT)
ZenML is an extensible, open-source MLOps framework for creating production-ready machine learning pipelines. Built for data scientists, it has a simple, flexible syntax, is cloud- and tool-agnostic, and has interfaces/abstractions catered towards ML workflows. Seldon Core is a production-grade open-source model serving platform. It packs a wide range of features built around deploying models as REST/gRPC microservices, including monitoring and logging, model explainers, outlier detectors, and various continuous deployment strategies such as A/B testing, canary deployments and more.
- [D] BentoML's Compatibility with Seldon
I am using BentoML to build the Docker container for a BERT model, and then deploying that using Seldon on GKE. The model's REST API endpoint works fine. In terms of compatibility with Seldon, the metrics are being scraped by Prometheus and visualized in Grafana. The only Seldon component that doesn't appear to be working is the request logging, which I have working for other applications deployed on Seldon. I am using the Elastic stack from here. From my understanding, request logging should still be compatible, and the only lost functionality should be Seldon's model metadata. Any insight on how to get centralized request logging working? No errors were shown; it's just that the logs aren't being captured and sent to Elasticsearch. Has anyone had success using BentoML with Seldon without losing any of Seldon's features?
- Building a Responsible AI Solution - Principles into Practice
While tools in the model experimentation space normally include diagnostic charts on a model's performance, there are also specialised solutions that help ensure the deployed model continues to perform as expected. These include the likes of seldon-core, why-labs and fiddler.ai.
- Ask HN: Who is hiring? (January 2022)
Seldon | Multiple positions | London/Cambridge UK | Onsite/Remote | Full time | seldon.io
At Seldon we are building industry-leading solutions for deploying, monitoring, and explaining machine learning models. We are an open-core company with several successful open source projects like:
* https://github.com/SeldonIO/seldon-core
* https://github.com/SeldonIO/mlserver
* https://github.com/SeldonIO/alibi
* https://github.com/SeldonIO/alibi-detect
* https://github.com/SeldonIO/tempo
We are hiring for a range of positions, including software engineers (Go, k8s), ML engineers (Python, Go), frontend engineers (JS), a UX designer, and product managers. All open positions can be found at https://www.seldon.io/careers/
- Ask HN: Who is hiring? (December 2021)
- Has anyone implemented Seldon?
Also note our GitHub repo has a link to our Slack where you can ask active users: https://github.com/SeldonIO/seldon-core
- [Discussion] Looking for a service to upload a model and receive a REST API endpoint for serving predictions
If you want to serve your model at scale, with a bunch of production features you should have a look at the open-source framework Seldon Core. It does what you're asking for plus a bunch of other cool stuff like routing, logging and monitoring.
- Seldon Core: Open-source platform for rapidly deploying machine learning models on Kubernetes
-
Looking for open-source model serving framework with dashboard for test data quality
Seldon ticks most of those boxes if you already have some experience with Kubernetes. You can set up A/B tests, do payload logging to Elasticsearch and then build monitoring on top of that, and it has drift-detection and model-explainer modules too. I don't know about a Great Expectations integration, but you could probably do something with a custom transformer module as part of the inference graph.
alibi-detect
- Exploring Open-Source Alternatives to Landing AI for Robust MLOps
Numerous tools exist for detecting anomalies in time series data, but Alibi Detect stood out to me, particularly for its capabilities and its compatibility with both TensorFlow and PyTorch backends.
- Looking for recommendations to monitor / detect data drifts over time
- [D] Distributions to represent an Image Dataset
That is, to see whether a test image belongs to the distribution of the training images, and to provide a routine for special cases. After a bit of reading, I've found that this is related to the field of drift detection, so I tried out alibi-detect, whereby an autoencoder is trained on the training images and any subsequent drift is flagged by the AE.
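The reconstruction-error idea in that comment can be sketched without any deep learning library. Below is a minimal NumPy illustration that uses PCA as a stand-in linear autoencoder; the shapes, threshold percentile, and synthetic data are made up for the example, and this is not alibi-detect's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" images, flattened to vectors (e.g. 8x8 grayscale -> 64 features).
x_train = rng.normal(0.0, 1.0, size=(500, 64))

# Fit a linear autoencoder via PCA: encode into k components, decode back.
k = 10
mean = x_train.mean(axis=0)
_, _, vt = np.linalg.svd(x_train - mean, full_matrices=False)
components = vt[:k]                       # (k, 64) encoder/decoder weights

def reconstruction_error(x):
    """Mean squared error between x and its PCA reconstruction."""
    z = (x - mean) @ components.T         # encode
    x_hat = z @ components + mean         # decode
    return ((x - x_hat) ** 2).mean(axis=1)

# Flag anything above e.g. the 99th percentile of training reconstruction error.
threshold = np.quantile(reconstruction_error(x_train), 0.99)

# In-distribution data stays mostly below the threshold...
x_ok = rng.normal(0.0, 1.0, size=(200, 64))
# ...while shifted data reconstructs poorly and gets flagged.
x_drift = rng.normal(3.0, 2.0, size=(200, 64))

flagged_ok = (reconstruction_error(x_ok) > threshold).mean()
flagged_drift = (reconstruction_error(x_drift) > threshold).mean()
```

A trained neural autoencoder plays the same role as the PCA projection here: points the model reconstructs badly are treated as drifted or out-of-distribution.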
- [D] Which statistical test would you use to detect drift in a dataset of images?
Wasserstein distance is not very suitable for drift detection on most problems, given that the sample complexity (and estimation error) scales as O(n^(-1/d)), with n the number of instances (100k-10m in your case) and d the feature dimension (192 in your case). A more interesting option is a detector based on the maximum mean discrepancy (MMD), with an estimation error of O(n^(-1/2)). Notice the absence of the feature dimension here. You can find scalable implementations in Alibi Detect (disclosure: I am a contributor): MMD docs, image example. We just added the KeOps backend for the MMD detector to scale and speed up the drift detector further, so if you install from master, you can leverage this backend and easily scale the detector to 1M instances on e.g. a single RTX 2080 Ti GPU. Check this example for more info.
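For intuition, the unbiased MMD² estimator with an RBF kernel, plus a permutation test for the p-value, can be written in a few lines of NumPy. This is an illustrative sketch, not Alibi Detect's implementation; the bandwidth `sigma` and permutation count are arbitrary choices:

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    """Gaussian RBF kernel matrix between rows of x and rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples x and y."""
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, sigma)
    kyy = rbf_kernel(y, y, sigma)
    kxy = rbf_kernel(x, y, sigma)
    # Drop diagonal terms for the unbiased estimator.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()

def mmd_permutation_test(x, y, sigma=1.0, n_perm=200, seed=0):
    """P-value for H0: x and y come from the same distribution."""
    rng = np.random.default_rng(seed)
    observed = mmd2_unbiased(x, y, sigma)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        xp, yp = pooled[perm[:len(x)]], pooled[perm[len(x):]]
        count += mmd2_unbiased(xp, yp, sigma) >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x_ref = rng.normal(0, 1, size=(100, 5))
x_same = rng.normal(0, 1, size=(100, 5))
x_shifted = rng.normal(1.0, 1, size=(100, 5))

p_same = mmd_permutation_test(x_ref, x_same)     # large: no drift detected
p_shift = mmd_permutation_test(x_ref, x_shifted)  # small: drift detected
```

The practical scaling tricks (kernel bandwidth heuristics, KeOps/GPU kernels) matter for real image workloads, but the statistic itself is just this kernel two-sample test.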
- Ask HN: Who is hiring? (January 2022)
- What Machine Learning model monitoring tools can you recommend?
- Ask HN: Who is hiring? (December 2021)
- [D] How do you deal with covariate shift and concept drift in production?
I work in this area and also contribute to the outlier/drift detection library https://github.com/SeldonIO/alibi-detect. To tackle this type of problem, I would strongly encourage following a more principled, statistically sound approach. For instance, measuring metrics such as the KL divergence (or many other f-divergences) will not be that informative, since KL has a lot of undesirable properties for the problem at hand: to be informative it already requires overlapping distributions P and Q, it is asymmetric, it is not a true distance metric, and it does not scale well with data dimensionality. So you should probably look at Integral Probability Metrics (IPMs) such as the Maximum Mean Discrepancy (MMD) instead, which have much nicer behaviour for monitoring drift. I highly recommend the Interpretable Comparison of Distributions and Models NeurIPS workshop talks for more in-depth background.
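The asymmetry and overlapping-support problems of KL can be seen on a toy discrete example; total variation distance stands in here for a symmetric, bounded alternative. This is illustrative code, not part of alibi-detect:

```python
import numpy as np

def kl(p, q):
    """KL divergence KL(p||q) between discrete distributions p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    with np.errstate(divide="ignore"):
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def tv(p, q):
    """Total variation distance: symmetric and bounded in [0, 1]."""
    return 0.5 * float(np.abs(np.asarray(p, float) - np.asarray(q, float)).sum())

p = [0.5, 0.5, 0.0]
q = [0.4, 0.4, 0.2]

kl_pq = kl(p, q)  # small and finite
kl_qp = kl(q, p)  # infinite: q puts mass on an outcome where p has none
tv_pq, tv_qp = tv(p, q), tv(q, p)  # equal, finite
```

Production drift (new categories, shifted supports) routinely produces exactly the non-overlapping case that makes KL blow up, which is one reason the comment steers towards IPMs like MMD.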
- [D] Is this a reasonable assumption in machine learning?
All of the above functionality and more can be easily used under a simple API in https://github.com/SeldonIO/alibi-detect.
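As a flavour of what such a simple API does under the hood, a feature-wise drift check can be sketched with SciPy's two-sample Kolmogorov-Smirnov test plus a Bonferroni correction. The `ks_drift` helper below is hypothetical and illustrative, not alibi-detect's actual interface:

```python
import numpy as np
from scipy import stats

def ks_drift(x_ref, x, alpha=0.05):
    """Feature-wise two-sample KS test with Bonferroni correction.

    Returns (drift_detected, per-feature p-values). Drift is flagged if any
    feature's p-value falls below alpha divided by the number of features.
    """
    d = x_ref.shape[1]
    p_vals = np.array([
        stats.ks_2samp(x_ref[:, j], x[:, j]).pvalue for j in range(d)
    ])
    return bool((p_vals < alpha / d).any()), p_vals

rng = np.random.default_rng(0)
x_ref = rng.normal(0, 1, size=(500, 8))

drift_none, _ = ks_drift(x_ref, rng.normal(0, 1, size=(500, 8)))
drift_yes, _ = ks_drift(x_ref, rng.normal(0.5, 1, size=(500, 8)))
```

Library detectors wrap the same pattern behind a fit/predict-style interface and add things like preprocessing and online variants.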
What are some alternatives?
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
pytorch-widedeep - A flexible package for multimodal-deep-learning to combine tabular data with text and images using Wide and Deep models in Pytorch
MLServer - An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
cleanlab - The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels.
evidently - Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b
pyod - A Comprehensive and Scalable Python Library for Outlier Detection (Anomaly Detection)
great_expectations - Always know what to expect from your data.
river - 🌊 Online machine learning in Python
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Anomaly_Detection_Tuto - Anomaly detection tutorial on univariate time series with an auto-encoder
huggingface_hub - The official Python client for the Huggingface Hub.
conductor - Conductor is a microservices orchestration engine.