MLOpsManufacturing vs BentoML

| | MLOpsManufacturing | BentoML |
|---|---|---|
| Mentions | 1 | 17 |
| Stars | 19 | 7,262 |
| Growth | - | 1.0% |
| Activity | 3.2 | 9.7 |
| Latest commit | about 1 year ago | 5 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MLOpsManufacturing
- Virtual Network architecture 1 - Do I need virtual network?
Our team is proud to contribute to open source software assets and to the Microsoft platform that are broadly available. In every project, we create reusable, shareable software assets that can be widely applied with the agreement of our enterprise clients. Our team practices a growth mindset by trying new things and learning from others, then reusing those learnings to create shared software assets. One example is Azure-Samples/MLOpsManufacturing, created from learnings across multiple projects. As we take on more engagements with more clients, more developers can reuse these assets instead of spending months designing network security architectures.
BentoML
- Recapping the AI, Machine Learning and Computer Meetup — August 15, 2024
As a data scientist/ML practitioner, how would you feel if you could independently iterate on your data science projects without ever worrying about operational overheads like deployment or containerization? Let’s find out by walking through a sample project that helps you do exactly that! We’ll combine Python, AWS, Metaflow and BentoML into a template/scaffolding project with sample code to train, serve, and deploy ML models…while making it easy to swap in other ML models.
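For a concrete sense of what the serving half of such a template might look like, here is a minimal sketch assuming a recent BentoML 1.x release with the class-based `@bentoml.service` API; the `iris_clf:latest` model tag and the class/endpoint names are hypothetical placeholders rather than anything taken from the project above.

```python
# Minimal sketch of a BentoML service (assumes BentoML >= 1.2).
# The model tag and names below are hypothetical placeholders.
import bentoml
import numpy as np


@bentoml.service(resources={"cpu": "2"}, traffic={"timeout": 30})
class IrisClassifier:
    # A model previously saved to the local BentoML model store,
    # e.g. with bentoml.sklearn.save_model("iris_clf", trained_model).
    bento_model = bentoml.models.get("iris_clf:latest")

    def __init__(self):
        self.model = bentoml.sklearn.load_model(self.bento_model)

    @bentoml.api
    def classify(self, input_series: np.ndarray) -> np.ndarray:
        # Each HTTP call to /classify runs the model on the given rows.
        return self.model.predict(input_series)
```

Served locally with `bentoml serve`, this exposes the `classify` endpoint over HTTP; swapping in a different model mostly means changing the saved model and the body of `classify`.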
- Who's hiring developer advocates? (December 2023)
Link to GitHub -->
- project ideas/advice for entry-level grad jobs?
There are a few tools you can use as "cheat mode" shortcuts to give you a leg up as you're getting started. Here's one: https://github.com/bentoml/BentoML
- Two high schoolers trying to use Azure/GCP/AWS - need help!
Then you can look into BentoML https://github.com/bentoml/BentoML which is used to deploy ML stuff with many more benefits.
- Ask HN: Who is hiring? (November 2022)
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the ‘transformer-deploy’ version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like Bento, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least optionally), and that has some conveniences: ease of use, easy deployment, easy hosting of multiple models, and dynamic batching. Anyway, I am really interested to hear people's experience here, as I know there are now quite a few options! Any help is appreciated! Disclaimer: I have no affiliation with, and am not connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
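For what it's worth, here is a rough sketch of how the BentoML 1.0-style API covers that wishlist: any `torch.nn.Module` is saved with a batchable signature (so the server can apply adaptive batching), wrapped in a runner, and exposed through a service, with no compilation step. The model, tag, and service names below are hypothetical, and the toy `nn.Linear` stands in for a real trained model.

```python
# Sketch of serving an arbitrary PyTorch model with BentoML 1.0-style APIs.
# No compilation step; "batchable" enables server-side adaptive batching.
# Model/tag/service names are hypothetical placeholders.
import bentoml
import torch
from bentoml.io import NumpyNdarray

# One-off step (e.g. at the end of training): save any torch.nn.Module.
model = torch.nn.Linear(4, 2)  # stand-in for a real trained model
bentoml.pytorch.save_model(
    "my_torch_model",
    model,
    signatures={"__call__": {"batchable": True, "batch_dim": 0}},
)

# In the service definition: wrap the stored model in a runner.
runner = bentoml.pytorch.get("my_torch_model:latest").to_runner()
svc = bentoml.Service("torch_service", runners=[runner])


@svc.api(input=NumpyNdarray(dtype="float32"), output=NumpyNdarray())
async def predict(arr):
    # Concurrent requests are merged into batches by the runner.
    result = await runner.async_run(torch.from_numpy(arr))
    return result.detach().numpy()
```

Hosting several models in one process is then a matter of adding more runners to the same `bentoml.Service`.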
- PostgresML is 8-40x faster than Python HTTP microservices
- Congratulations on v1.0, BentoML 🍱 ! You are r/mlops OSS of the month!
- Show HN: Truss – serve any ML model, anywhere, without boilerplate code
In this category I’m a big fan of https://github.com/bentoml/BentoML
What I like about it is their idiomatic developer experience. It reminds me of other Pythonic frameworks like Flask and Django in a good way.
I have no affiliation with them whatsoever, just an admirer.
- [P] Introducing BentoML 1.0 - A faster way to ship your models to production
GitHub page: https://github.com/bentoml/BentoML
What are some alternatives?
kicad-parts-placer - Auto place components into pcbnew from a centroid file. Useful for maintaining a common board form factor.
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
openvmp-parts-gobilda - OpenVMP parts that can be purchased from goBILDA
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
nlp-recipes - Natural Language Processing Best Practices & Examples
haystack - AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
ERPNext - Free and Open Source Enterprise Resource Planning (ERP)
clearml - ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution
lightning-mlflow-hf - Use QLoRA to tune LLM in PyTorch-Lightning w/ Huggingface + MLflow
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
kubeflow - Machine Learning Toolkit for Kubernetes
streamlit - Streamlit — A faster way to build and share data apps.