budgetml vs pinferencia
| | budgetml | pinferencia |
|---|---|---|
| Mentions | 4 | 21 |
| Stars | 1,331 | 556 |
| Growth | 0.1% | 0.2% |
| Activity | 0.0 | 0.0 |
| Last commit | about 2 months ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts mentioning pinferencia:
-
Stop Writing Flask to Serve/Deploy Your Model: Pinferencia is Here
Check it out at underneathall/pinferencia (github.com): a model deployment library in Python, billed as the simplest model inference server ever.
Visit Pinferencia (underneathall.app) for detailed examples.
-
Google T5 Translation as a Service with Just 7 Lines of Code
**Pinferencia** makes it super easy to serve any model with just three extra lines.
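A minimal sketch of what those "three extra lines" look like, based on Pinferencia's documented `Server.register` API. The `echo` model name and the `predict` function are illustrative, and the import is guarded so the sketch degrades gracefully when pinferencia is not installed:

```python
def predict(data):
    # Stand-in model: any callable (or object with a predict method) works.
    return {"echoed": data}

try:
    # The three extra Pinferencia lines: import, create a server,
    # and register the model under a name.
    from pinferencia import Server

    service = Server()
    service.register(model_name="echo", model=predict)
except ImportError:
    # pinferencia not installed; predict still works as a plain function.
    service = None
```

Saved as `app.py`, this is typically started with `uvicorn app:service`, after which the model is reachable over HTTP.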
-
What is the easiest way to deploy an NLP model?
Check this out https://github.com/underneathall/pinferencia
-
Popular Machine Learning Deployment Tools
GitHub
-
[D] Do you train and deploy models using just one framework or multiple frameworks at work?
Hi, I'm the creator of Pinferencia. Currently I'm designing the new-feature to-do list. I want to know:
-
[D] Kubernetes for ML - how are y'all doing it?
Currently I use Pinferencia, which is based on FastAPI. In the past I have written Flask and FastAPI wrappers to serve sklearn models.
-
Using Pydantic models (BaseModel) for model.predict with FastAPI (Python), getting the error "value is not a valid dict"
If you're interested, detailed doc is at https://pinferencia.underneathall.app/
If you simply want to serve your model, you can also try a tool I created, Pinferencia; it is built on FastAPI.
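The "value is not a valid dict" error usually comes from handing a Pydantic model object to code that expects a plain dict. Converting explicitly before calling predict avoids it. A minimal sketch, where `fake_predict` is a hypothetical stand-in for `model.predict` (the helper is written so it also runs without Pydantic installed):

```python
def fake_predict(payload: dict) -> int:
    # Hypothetical stand-in for model.predict; expects a plain dict.
    return payload["x"] * 2

def to_plain_dict(obj):
    # Convert a Pydantic model to a plain dict before prediction.
    # Covers Pydantic v2 (.model_dump()) and v1 (.dict()); plain dicts
    # pass through unchanged.
    if hasattr(obj, "model_dump"):
        return obj.model_dump()
    if hasattr(obj, "dict") and not isinstance(obj, dict):
        return obj.dict()
    return obj

result = fake_predict(to_plain_dict({"x": 3}))
```

In a FastAPI route the same idea applies: call `to_plain_dict` on the request's BaseModel instance before passing it into the model.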
-
Serve machine learning models with Pinferencia
GitHub: Pinferencia. If you like it, give it a star.
What are some alternatives?
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
polyaxon - MLOps Tools For Managing & Orchestrating The Machine Learning LifeCycle
llmware - Providing enterprise-grade LLM-based development framework, tools, and fine-tuned models.
serving - A flexible, high-performance serving system for machine learning models
dslinter - `dslinter` is a pylint plugin for linting data science and machine learning code. We plan to support the following Python libraries: TensorFlow, PyTorch, Scikit-Learn, Pandas and NumPy.
serve - Serve, optimize and scale PyTorch models in production
pyro - Deep universal probabilistic programming with Python and PyTorch
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
fastapi-template - Completely Scalable FastAPI based template for Machine Learning, Deep Learning and any other software project which wants to use Fast API as an API framework.
papers-with-data - A curated list of papers that released datasets along with their work