budgetml vs experta
| | budgetml | experta |
|---|---|---|
| Mentions | 4 | - |
| Stars | 1,332 | 136 |
| Growth | 0.2% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 2 months ago | 9 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU Lesser General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
We haven't tracked posts mentioning experta yet.
Tracking mentions began in Dec 2020.
What are some alternatives?
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
clipspy - Python CFFI bindings for the 'C' Language Integrated Production System CLIPS
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
FedML - FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, FEDML Nexus AI (https://fedml.ai) is your generative AI platform at scale.
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
astroid - A common base representation of python source code for pylint and other projects
ck - Collective Mind (CM) is a simple, modular, cross-platform, and decentralized workflow automation framework with a human-friendly interface and reusable automation recipes, making it easier to compose, run, benchmark, and optimize AI, ML, and other applications and systems across diverse and continuously changing models, data, software, and hardware.
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
fastapi-template - Completely scalable FastAPI-based template for Machine Learning, Deep Learning, and any other software project that wants to use FastAPI as an API framework.
emlearn - Machine Learning inference engine for Microcontrollers and Embedded devices
tritony - Tiny configuration for Triton Inference Server
filetype.py - Small, dependency-free, fast Python package to infer binary file types by checking their magic number signatures