serving-compare-middleware vs nni
| | serving-compare-middleware | nni |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 14 | 13,742 |
| Growth | - | 1.0% |
| Activity | 0.0 | 6.7 |
| Last commit | 10 months ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
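The recency weighting described above can be sketched as a simple exponential-decay score. This is an illustrative assumption (a hypothetical half-life scheme); the tracker's actual formula is not published here:

```python
import math
from datetime import date

def activity_score(commit_dates, today=None, half_life_days=90):
    """Toy activity metric: each commit contributes a weight that halves
    every `half_life_days`, so recent commits count more than older ones.
    Illustrative only -- not the tracker's real formula."""
    today = today or date.today()
    score = 0.0
    for d in commit_dates:
        age_days = (today - d).days
        score += math.exp(-math.log(2) * age_days / half_life_days)
    return round(score, 1)

# Four commits from two weeks ago vs. four commits from two years ago.
recent = [date(2023, 5, d) for d in (1, 5, 9, 13)]
old = [date(2021, 5, d) for d in (1, 5, 9, 13)]
print(activity_score(recent + old, today=date(2023, 5, 15)))
```

With this weighting, the stale project's commits contribute almost nothing, which is why a rarely updated repository ends up near 0.0 while an actively developed one scores several points higher.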
serving-compare-middleware
- A Quantitative Comparison of Serving Platforms for Neural Networks
For this experiment, we ran the models (and their respective serving platforms) using Docker Compose. The relevant manifests are here: https://github.com/Biano-AI/serving-compare-middleware/blob/master/docker-compose.test.yml
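The core of such a comparison is a latency harness run against each serving container. A minimal sketch, assuming the services started by docker-compose expose an inference call (the stub `send_request` below stands in for a real HTTP POST; the repository's actual benchmark code may differ):

```python
import statistics
import time

def time_requests(send_request, n=50):
    """Measure per-request latency (ms) of a serving endpoint.

    `send_request` is whatever issues one inference call -- for a real
    benchmark this would be an HTTP POST to the container started by
    docker-compose (TF Serving, TorchServe, Triton, ...)."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        send_request()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "median_ms": statistics.median(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * n) - 1],
    }

# Stand-in for a real inference call: sleep ~2 ms per "request".
stats = time_requests(lambda: time.sleep(0.002), n=20)
print(stats)
```

Reporting the median alongside a tail percentile matters here: serving platforms with similar median latency can differ sharply at p95 under load.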
nni
- Filter Pruning for PyTorch
- Automated Machine Learning (AutoML) - 9 Different Ways with Microsoft AI
For a complete tutorial, navigate to this Jupyter Notebook: https://github.com/microsoft/nni/blob/master/examples/notebooks/tabular_data_classification_in_AML.ipynb
- [D] Efficient ways of choosing number of layers/neurons in a neural network
optuna, hyperopt, nni, plenty of less-known tools too.
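What all of those tools automate is, at its simplest, a search over an architecture space. A plain random search illustrates the idea (the objective below is a toy stand-in for training a network and measuring validation accuracy, not any library's API):

```python
import random

random.seed(0)

def validation_score(n_layers, n_neurons):
    """Toy surrogate for validation accuracy; it peaks at 3 layers of
    64 neurons purely for illustration. A real run would train a model
    with these hyperparameters and return its validation metric."""
    return -((n_layers - 3) ** 2) - ((n_neurons - 64) / 32) ** 2

search_space = {
    "n_layers": range(1, 7),
    "n_neurons": (16, 32, 64, 128, 256),
}

best, best_score = None, float("-inf")
for _ in range(30):
    trial = {k: random.choice(list(v)) for k, v in search_space.items()}
    score = validation_score(**trial)
    if score > best_score:
        best, best_score = trial, score

print(best, best_score)
```

Tools like optuna, hyperopt, and nni replace the random sampling with smarter strategies (Bayesian optimization, TPE, early stopping of bad trials), but the trial loop has the same shape.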
- Top 10 Developer Trends, Sun Oct 18 2020
microsoft / nni
What are some alternatives?
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
optuna - A hyperparameter optimization framework
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
FLAML - A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
jina - ☁️ Build multimodal AI applications with cloud-native stack
autogluon - Fast and Accurate ML in 3 Lines of Code
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
AutoML - This is a collection of our NAS and Vision Transformer work. [Moved to: https://github.com/microsoft/Cream]
transformer-deploy - Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
hyperopt - Distributed Asynchronous Hyperparameter Optimization in Python
tritony - Tiny configuration for Triton Inference Server
archai - Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.