Ever trained a new model and just wanted to use it through an API straight away? Sometimes you don't want to bother writing Flask code or containerizing your model and running it in Docker. If that sounds like you, you definitely want to check out MLServer. It's a Python-based inference server that recently went GA, and what's really neat about it is that it's a highly performant server designed for production environments too. That means that, by serving models locally, you're running them in the exact same environment they'll be in once they reach production.
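To make that concrete, here is a minimal sketch of what a custom MLServer runtime can look like. The file name, class name and placeholder model are illustrative only; in practice you'd load your own trained artifact in `load()`:

```python
# custom_model.py - a minimal sketch of a custom MLServer runtime.
# The placeholder "model" just sums each row; swap in your real model.
import numpy as np

from mlserver import MLModel
from mlserver.codecs import NumpyCodec
from mlserver.types import InferenceRequest, InferenceResponse


class MyCustomRuntime(MLModel):
    async def load(self) -> bool:
        # Load your trained model artifact here (e.g. joblib.load(...)).
        self._model = lambda x: x.sum(axis=1, keepdims=True)
        return True

    async def predict(self, payload: InferenceRequest) -> InferenceResponse:
        # Decode the first input tensor into a NumPy array.
        input_data = self.decode(payload.inputs[0], default_codec=NumpyCodec)
        output = self._model(np.asarray(input_data))
        return InferenceResponse(
            model_name=self.name,
            outputs=[NumpyCodec.encode_output(name="predictions", payload=output)],
        )
```

With a `model-settings.json` pointing at that class (for example `{"name": "my-model", "implementation": "custom_model.MyCustomRuntime"}`), running `mlserver start .` in the same folder spins up the REST and gRPC endpoints locally.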
All of the code from this example can be found here.
For MLflow users, you can now serve models directly from MLflow using MLServer, and if you're a Kubernetes user you should definitely check out Seldon Core - an open source tool that deploys models to Kubernetes (it uses MLServer under the covers).
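Whether the model is served locally with `mlserver start`, through MLflow, or via Seldon Core, it's exposed over the same V2 / Open Inference Protocol, so a client request looks the same everywhere. A rough sketch, assuming the default REST port (8080) and the `my-model` name from the settings above:

```python
# Send an inference request to a locally running MLServer instance.
# Port 8080 is MLServer's default REST port; "my-model" must match the
# name in model-settings.json. Adjust both for your own deployment.
import requests

inference_request = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [2, 3],
            "datatype": "FP32",
            # Flat row-major data for a 2x3 input tensor.
            "data": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
        }
    ]
}

response = requests.post(
    "http://localhost:8080/v2/models/my-model/infer",
    json=inference_request,
)
print(response.json()["outputs"])
```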