| | seldon-core | conductor |
|---|---|---|
| Mentions | 14 | 39 |
| Stars | 4,220 | 12,999 |
| Growth | 1.2% | - |
| Activity | 7.6 | 8.4 |
| Latest commit | 5 days ago | 5 months ago |
| Language | HTML | Java |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seldon-core
-
seldon-core VS MLDrop - a user-suggested alternative
2 projects | 20 Feb 2023
-
[D] Feedback on a worked Continuous Deployment Example (CI/CD/CT)
ZenML is an extensible, open-source MLOps framework to create production-ready machine learning pipelines. Built for data scientists, it has a simple, flexible syntax, is cloud- and tool-agnostic, and has interfaces/abstractions that are catered towards ML workflows. Seldon Core is a production-grade open-source model serving platform. It packs a wide range of features built around deploying models to REST/gRPC microservices, including monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more.
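To make the "deploying models to REST/gRPC microservices" part concrete, here is a minimal sketch of the kind of Python class Seldon Core's Python wrapper turns into a microservice. The class name `IrisModel` and the toy prediction logic are illustrative assumptions, not from the comment above; only the `predict(self, X, features_names)` convention follows Seldon's wrapper.

```python
# Minimal sketch of a Seldon Core Python model wrapper.
# Seldon's Python wrapper packages a class like this into a REST/gRPC
# microservice; the class name and logic here are illustrative.
import numpy as np

class IrisModel:
    def __init__(self):
        # A real deployment would load a trained model artifact here.
        self.loaded = True

    def predict(self, X, features_names=None):
        # Seldon passes the request payload in as an array-like; this toy
        # version just returns the column means as the "prediction".
        X = np.asarray(X, dtype=float)
        return X.mean(axis=0, keepdims=True)
```

Once containerised, Seldon exposes this behind its standard prediction endpoints, and the monitoring/logging features mentioned above wrap around it without changes to the class.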
-
[D] BentoML's Compatibility with Seldon;
I am using BentoML to build the docker container for a BERT model, and then deploy that using Seldon on GKE. The model's REST API endpoint works fine. In terms of compatibility with Seldon, the metrics are being scraped by Prometheus and visualized in Grafana. The only Seldon component that doesn't appear to be working is the request logging, which I have working for other applications that were deployed on Seldon. I am using the elastic stack from here. From my understanding, request logging should still be compatible and the only lost functionality should be Seldon's model metadata. Any insight on how to get centralized request logging working? No errors were shown; it's just that the logs aren't being captured and sent to Elasticsearch. Has anyone had success using BentoML with Seldon without losing any of Seldon's features?
-
Building a Responsible AI Solution - Principles into Practice
While tools in the model experimentation space normally include diagnostic charts on a model's performance, there are also specialised solutions that help ensure that the deployed model continues to perform as expected. This includes the likes of seldon-core, whylabs and fiddler.ai.
-
Ask HN: Who is hiring? (January 2022)
Seldon | Multiple positions | London/Cambridge UK | Onsite/Remote | Full time | seldon.io
At Seldon we are building industry leading solutions for deploying, monitoring, and explaining machine learning models. We are an open-core company with several successful open source projects like:
* https://github.com/SeldonIO/seldon-core
* https://github.com/SeldonIO/mlserver
* https://github.com/SeldonIO/alibi
* https://github.com/SeldonIO/alibi-detect
* https://github.com/SeldonIO/tempo
We are hiring for a range of positions, including software engineers (Go, k8s), ML engineers (Python, Go), frontend engineers (JS), UX designers, and product managers. All open positions can be found at https://www.seldon.io/careers/
- Ask HN: Who is hiring? (December 2021)
-
Has anyone implemented Seldon?
Also note our github repo has a link to our slack where you can ask active users: https://github.com/SeldonIO/seldon-core
-
[Discussion] Look for service to upload a model and receive a REST API endpoint, for serving predictions
If you want to serve your model at scale, with a bunch of production features you should have a look at the open-source framework Seldon Core. It does what you're asking for plus a bunch of other cool stuff like routing, logging and monitoring.
- Seldon Core: Open-source platform for rapidly deploying machine learning models on Kubernetes
-
Looking for open-source model serving framework with dashboard for test data quality
Seldon ticks most of those boxes if you already have some experience with kubernetes. You can set up a/b tests, do payload logging to elastic and then do monitoring on top of that, and it has drift detection and model explainer modules too. Idk about great expectations integration, but you could probably do something with a custom transformer module as part of the inference graph.
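The "custom transformer module as part of the inference graph" idea can be sketched roughly like this: a small component placed in front of the model that validates payload quality before prediction. The `transform_input` method name follows Seldon's Python wrapper convention; the class name `QualityGate` and the validation rules (column count, NaN check) are illustrative assumptions.

```python
# Hedged sketch of a Seldon custom transformer for the inference graph:
# validates incoming payloads before they reach the model node.
import numpy as np

class QualityGate:
    def __init__(self, expected_columns=4):
        # expected_columns is an illustrative data-quality rule.
        self.expected_columns = expected_columns

    def transform_input(self, X, features_names=None):
        X = np.asarray(X, dtype=float)
        if X.ndim != 2 or X.shape[1] != self.expected_columns:
            raise ValueError(
                f"expected (n, {self.expected_columns}) payload, got {X.shape}"
            )
        if np.isnan(X).any():
            raise ValueError("payload contains NaNs")
        # Pass the validated payload through unchanged to the model node.
        return X
```

Wired into a SeldonDeployment graph as a TRANSFORMER step ahead of the model, failures here surface as request errors and can feed the payload-logging/monitoring stack described above.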
conductor
- Netflix Conductor OSS discontinued support
-
Orkes Monthly Highlights - October 2023
We celebrated a remarkable milestone in September when the Netflix Conductor GitHub repository reached 10k stars. It was a momentous achievement for our DevRel team. Just a month later, we're thrilled to announce that we've surpassed 12k stars! ⭐🎉
-
4 Microservice Patterns Crucial in Microservices Architecture
Also, don’t forget to give us a ⭐ on our Netflix Conductor repo.
-
The Workflow Pattern
One of my favorite workflow engines that has a really simple way to do things was not listed here, so I'll call it out - Netflix Conductor (https://github.com/Netflix/conductor).
Its capabilities come to light when you model really complex workflows, and one real value is how it's all very visual, not just during modeling but also when running it. The history remains visible, and you can even see how the whole flow evolved.
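For a sense of what "modeling" a Conductor workflow looks like, here is a hedged sketch of a workflow definition, built as a Python dict and serialised to the JSON you would register with the server's metadata API. The workflow name, task names, and parameters are illustrative, not from the comment above.

```python
# Hedged sketch of a Conductor workflow definition (normally JSON that is
# registered with the Conductor server's workflow metadata endpoint).
# All names and parameters here are illustrative.
import json

order_workflow = {
    "name": "order_fulfillment",
    "version": 1,
    "tasks": [
        {
            "name": "reserve_inventory",
            "taskReferenceName": "reserve_inventory_ref",
            "type": "SIMPLE",
            "inputParameters": {"orderId": "${workflow.input.orderId}"},
        },
        {
            "name": "charge_payment",
            "taskReferenceName": "charge_payment_ref",
            "type": "SIMPLE",
            "inputParameters": {"orderId": "${workflow.input.orderId}"},
        },
    ],
    "outputParameters": {"status": "${charge_payment_ref.output.status}"},
}

definition_json = json.dumps(order_workflow, indent=2)
```

The Conductor UI renders a definition like this as a visual graph, which is the "very visual" quality the comment describes: you can watch each task reference progress as a workflow instance runs.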
-
Orkes Monthly Highlights - September 2023
Yet another significant milestone on our journey: we've proudly reached the 10,000-star mark on our Netflix Conductor GitHub repository! 🌟
-
question about microservice to microservice internal only communication
Give something like https://github.com/Netflix/conductor a try to solve this -- makes it very easy to do what you are trying to achieve.
- Framework used by Netflix to orchestrate microservices
-
Background Task Management on Celery and EC2
Checkout Conductor https://github.com/Netflix/conductor which is far more scalable and easy on the resources with its own Celery like queues. Fully supports writing task workers in python:
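A rough sketch of what such a Python task worker could look like, polling the Conductor server's REST API for queued tasks and posting back results. The server URL, task type, and the email-handler logic are all illustrative assumptions; the task logic is kept as a pure function separate from the HTTP transport.

```python
# Hedged sketch of a Conductor task worker in Python: poll the server for a
# queued task, run the task logic, and post the result back.
# Server URL, task type, and handler logic are illustrative.
import json
import urllib.request

CONDUCTOR_URL = "http://localhost:8080/api"  # assumption: local server

def handle_send_email(task_input):
    # Pure task logic, kept separate from transport so it is easy to test.
    recipient = task_input.get("to", "unknown")
    return {"status": "sent", "to": recipient}

def poll_and_run_once(task_type="send_email"):
    # Ask the server for one queued task of this type.
    resp = urllib.request.urlopen(f"{CONDUCTOR_URL}/tasks/poll/{task_type}")
    task = json.loads(resp.read() or b"null")
    if not task:
        return  # nothing queued right now
    result = {
        "taskId": task["taskId"],
        "workflowInstanceId": task["workflowInstanceId"],
        "status": "COMPLETED",
        "outputData": handle_send_email(task.get("inputData", {})),
    }
    update = urllib.request.Request(
        f"{CONDUCTOR_URL}/tasks",
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(update)
```

In practice you would loop `poll_and_run_once` (or use a Conductor client SDK, which wraps this polling pattern) and run one worker process per task type, which is where the Celery-like queueing comparison comes from.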
- Implementing Saga Pattern in Go Microservices
- GitHub - Netflix/conductor: Microservices orchestration engine.
What are some alternatives?
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
camunda-demo - 🗞️ Repo for this series: https://dev.to/tgotwig/getting-started-with-camunda-spring-boot-2gbi
MLServer - An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
Activiti - Activiti is a light-weight workflow and Business Process Management (BPM) Platform targeted at business people, developers and system admins. Its core is a super-fast and rock-solid BPMN 2 process engine for Java. It's open-source and distributed under the Apache license. Activiti runs in any Java application, on a server, on a cluster or in the cloud. It integrates perfectly with Spring, it is extremely lightweight and based on simple concepts.
evidently - Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b
kestra - Infinitely scalable, event-driven, language-agnostic orchestration and scheduling platform to manage millions of workflows declaratively in code.
great_expectations - Always know what to expect from your data.
proposals - Temporal proposals
alibi-detect - Algorithms for outlier, adversarial and drift detection
akhq - Kafka GUI for Apache Kafka to manage topics, topics data, consumers group, schema registry, connect and more...
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Springy-Store-Microservices - Springy Store is a conceptual simple μServices-based project using the latest cutting-edge technologies, to demonstrate how the Store services are created to be a cloud-native and 12-factor app agnostic. Those μServices are developed based on Spring Boot & Cloud framework that implements cloud-native intuitive, design patterns, and best practices.