awesome-mlops VS kserve

Compare awesome-mlops vs kserve and see what their differences are.

              awesome-mlops        kserve
Mentions      24                   3
Stars         11,719               3,047
Growth        -                    7.3%
Activity      4.9                  9.4
Last commit   about 2 months ago   5 days ago
Language      -                    Python
License       -                    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

awesome-mlops

Posts with mentions or reviews of awesome-mlops. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2023-04-16.
  • MLOps
    1 project | news.ycombinator.com | 16 Apr 2023
  • ML Engineer Roadmap
    1 project | /r/datascience | 11 Apr 2023
    I'm in the same boat: a data scientist shifting towards ML engineering/MLOps. The guide seems quite complete. I am also doing the ML DevOps Engineer course, which has end-to-end projects and has been helpful so far. I also feel that most ML engineers will do MLOps too, as most companies will not distinguish between the two, so I try to focus on this part. Here is a quite comprehensive list of resources: https://github.com/visenger/awesome-mlops
  • MLOps roadmap
    3 projects | /r/mlops | 7 Apr 2023
    Good reference: https://github.com/visenger/awesome-mlops (the link above has so many guides, it's insane) and https://madewithml.com/
  • What do data scientists use Docker for?
    1 project | /r/datascience | 1 Apr 2023
  • Do you wonder why MLOps is not at the same level as DevOps?
    2 projects | /r/MLQuestions | 31 Mar 2023
    I recently did a deep dive into MLOps for a client, and I've found that https://ml-ops.org/ offers a great overview. Some topics are a bit too "zoomed out", but they still touch on the most important aspects of building an end-to-end product. I found it a great starting point when doing research; picking and choosing some key points from each section, plus some Googling, helped a lot. Give it a look, you'll probably find some useful things in there.
  • Can you guys explain to me what MLOps is?
    1 project | /r/dataengineering | 20 Mar 2023
  • MLOps on GitHub Actions with Cirun
    3 projects | dev.to | 29 Dec 2022
  • DevOps - where to begin?
    3 projects | /r/datascience | 16 Aug 2022
  • JBCNConf 2022: A great farewell
    6 projects | dev.to | 23 Jul 2022
    She mentioned ML-Ops and MLflow, including Vertex AI, the GCP implementation. I will post the video as soon as it is available. In the meantime, you can enjoy any other talk from Nerea Luis.
  • Can Mechanical Engineers become MLOps?
    2 projects | /r/mlops | 25 Apr 2022
    From your post, you seem to be trained in data science for physics modeling, so I'd recommend getting started with https://ml-ops.org/. For the data engineering part, I found this open-source cookbook invaluable: https://github.com/andkret/Cookbook

kserve

Posts with mentions or reviews of kserve. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-14.
  • Show HN: Software for Remote GPU-over-IP
    6 projects | news.ycombinator.com | 14 Dec 2022
    Inference servers essentially turn a model running on CPU and/or GPU hardware into a microservice.

    Many of them support the kserve API standard[0], which covers everything from model loading/unloading to (of course) inference requests across models, versions, frameworks, etc.

    So in the case of Triton[1], you can have any number of different TensorFlow/Torch/TensorRT/ONNX/etc. models, versions, and variants. You can have one or more Triton instances running on hardware with access to local GPUs (for this example). Then you can put standard REST and/or gRPC load balancers (or whatever you want) in front of them, hit them via another API, whatever.

    Now all your applications need to do to perform inference is an HTTP POST (or a client call[2]) with the model input; Triton runs it on a GPU (or CPU if you want), and you get back whatever the model output is. A request sketch follows the footnote links below.

    Not a sales pitch for Triton, but it (like some others) can also do things like dynamic batching with QoS parameters, automated model profiling and performance optimization[3], really granular control over resources, response caching, Python middleware for application/business logic, accelerated media processing with NVIDIA DALI, all kinds of stuff.

    [0] - https://github.com/kserve/kserve

    [1] - https://github.com/triton-inference-server/server

    [2] - https://github.com/triton-inference-server/client

    [3] - https://github.com/triton-inference-server/model_analyzer
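
    To make the "just do an HTTP POST" flow above concrete, here is a minimal Python sketch against the kserve v1 REST protocol; the host and model name are hypothetical placeholders, not anything from the post:

        import requests  # assumes a model is already deployed behind a kserve-compatible endpoint

        # kserve v1 protocol: POST /v1/models/<name>:predict with a JSON body of {"instances": [...]}
        url = "http://inference.example.com/v1/models/my-model:predict"  # hypothetical host and model
        payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}  # one input row; the shape depends on the model

        resp = requests.post(url, json=payload, timeout=10)
        resp.raise_for_status()
        print(resp.json()["predictions"])  # v1 responses carry a "predictions" list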

  • Run your first Kubeflow pipeline
    5 projects | dev.to | 20 Nov 2021
    Kubeflow has multiple components: the central dashboard; Kubeflow Notebooks for managing Jupyter notebooks; Kubeflow Pipelines for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers; KFServing for model serving (apparently superseded by KServe); Katib for hyperparameter tuning and model search; and training operators such as TFJob for training TF models on Kubernetes.
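
    For a feel of the Pipelines component mentioned above, here is a minimal sketch assuming the kfp v1 SDK; the pipeline and function names are hypothetical:

        import kfp
        from kfp import dsl

        # kfp wraps a plain Python function into a containerized pipeline component.
        def say_hello(name: str) -> str:
            return f"Hello, {name}!"

        hello_op = kfp.components.create_component_from_func(say_hello)

        @dsl.pipeline(name="hello-pipeline", description="Smallest possible Kubeflow pipeline.")
        def hello_pipeline(name: str = "world"):
            hello_op(name)  # a single step; real pipelines chain many such ops

        # Compile to an artifact that can be uploaded through the Pipelines UI or client.
        kfp.compiler.Compiler().compile(hello_pipeline, "hello_pipeline.yaml")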
  • [D] Serverless solutions for GPU inference (if there's such a thing)
    2 projects | /r/MachineLearning | 22 Feb 2021
    If you can run on Kubernetes, then KFServing is an open-source solution that allows for GPU inference and is built upon Knative to allow scale-to-zero for GPU-based inference. From release 0.5 it also has multi-model serving as an alpha feature, allowing multiple models to share the same server (and, via NVIDIA Triton, the same GPU).
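
    As a sketch of the scale-to-zero setup described above, here is a KServe InferenceService manifest expressed as a Python dict; the name and storage URI are hypothetical, and note the 2021-era KFServing used the serving.kubeflow.org API group rather than today's serving.kserve.io:

        # Sketch of an InferenceService; apply it with kubectl or a Kubernetes client.
        inference_service = {
            "apiVersion": "serving.kserve.io/v1beta1",
            "kind": "InferenceService",
            "metadata": {"name": "sklearn-demo"},  # hypothetical name
            "spec": {
                "predictor": {
                    "minReplicas": 0,  # lets Knative scale the predictor to zero when idle
                    "sklearn": {
                        "storageUri": "gs://example-bucket/model",  # hypothetical model location
                        "resources": {"limits": {"nvidia.com/gpu": "1"}},  # one GPU per replica
                    },
                }
            },
        }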

What are some alternatives?

When comparing awesome-mlops and kserve you can also consider the following projects:

metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!

kubeflow - Machine Learning Toolkit for Kubernetes

Made-With-ML - Learn how to design, develop, deploy and iterate on production-grade ML applications.

aws-virtual-gpu-device-plugin - AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads

Awesome-Federated-Learning - FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai

kind - Kubernetes IN Docker - local clusters for testing Kubernetes

applied-ml - 📚 Papers & tech blogs by companies sharing their work on data science & machine learning in production.

kubeflow-learn

awesome-mlops - :sunglasses: A curated list of awesome MLOps tools

Python-Schema-Matching - A Python tool using XGBoost and sentence-transformers to perform schema matching tasks on tables.

bodywork - ML pipeline orchestration and model deployments on Kubernetes.