[D][R] Deploying deep models on memory constrained devices
2 projects | /r/MachineLearning | 3 Oct 2023
However, I am looking at this subject through the lens of training/finetuning deep models on edge devices, which is an increasingly available thing to do. I'm looking at TFLite, Alibaba's MNN, mit-han-lab's TinyEngine, etc.
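As a minimal sketch of the TFLite path mentioned in that thread (TFLite targets on-device inference rather than training), here is how a model might be converted for a memory-constrained device. The Keras model here is a hypothetical stand-in:

```python
import tensorflow as tf

# Hypothetical small model standing in for whatever runs on-device.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Convert to a TFLite flatbuffer for memory-constrained targets.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional: default optimizations enable post-training quantization,
# shrinking the model at a (usually small) accuracy cost.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```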
What’s New in TensorFlow 2.10?
4 projects | news.ycombinator.com | 6 Sep 2022
There are a ton of mobile deployment options that support PyTorch+TF models. It's hard to argue TFLite is the best.
Llama.cpp: Full CUDA GPU Acceleration
14 projects | news.ycombinator.com | 12 Jun 2023
Yet another TEDIOUS BATTLE: Python vs. C++/C stack.
This project gained popularity due to the HIGH DEMAND for running large models with 1B+ parameters, like `llama`. Python dominates the interface and training ecosystem, but prior to llama.cpp, non-ML professionals showed little interest in a fast C++ inference library. Existing solutions like tensorflow-serving in C++ were already fast and had GPU support, but llama.cpp took the initiative to optimize for CPU and trim unnecessary code, essentially code-golfing and sacrificing some algorithmic correctness for performance, a trade-off that "ML research" tends to frown on.
NOTE: In my opinion, a true pioneer was DarkNet, which implemented the YOLO model series and significantly outperformed the alternatives. It used basically the same trick as llama.cpp.
Would you use maturin for ML model serving?
2 projects | /r/rust | 8 Jul 2022
Which ML framework do you use? TensorFlow has https://github.com/tensorflow/serving. You could also use the Rust bindings to load a saved model and expose it through one of the Rust HTTP servers. It doesn't matter whether you trained your model in Python, as long as you export it as a SavedModel.
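A sketch of the export step that workflow relies on: write the language-agnostic SavedModel format from Python once, and any runtime that understands it (TensorFlow Serving, the Rust bindings) can load it. The model and path here are assumptions:

```python
import tensorflow as tf

# Hypothetical model trained in Python.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Write the language-agnostic SavedModel format. TensorFlow Serving
# expects a numeric version subdirectory under the model's base path.
tf.saved_model.save(model, "/models/my_model/1")
```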
Popular Machine Learning Deployment Tools
4 projects | dev.to | 16 Apr 2022
If data science uses a lot of computational power, then why is python the most used programming language?
6 projects | /r/learnmachinelearning | 13 Apr 2022
You serve models via https://www.tensorflow.org/tfx/guide/serving, which is written entirely in C++ (https://github.com/tensorflow/serving/tree/master/tensorflow_serving/model_servers); there is no Python on the serving path or in the shipped product.
Exposing Tensorflow Serving’s gRPC Endpoints on Amazon EKS
2 projects | dev.to | 10 Feb 2021
gRPC only connects to a host and port, but we can use whatever service route we want. Above, I use the path we configured in our k8s Ingress object, /service1, and override the base configuration provided by TensorFlow Serving. When we call the tfserving_metadata function above, we specify /service1 as an argument.
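A hedged sketch of what a metadata call like the article's tfserving_metadata might look like, using the tensorflow-serving-api Python package; the host, port, and model name are assumptions, and the ingress (not the client) handles routing the /service1 traffic to the backend:

```python
import grpc
from tensorflow_serving.apis import get_model_metadata_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# gRPC dials a host:port; the k8s ingress decides how that traffic
# reaches the TensorFlow Serving backend.
channel = grpc.insecure_channel("my-ingress-host:80")  # assumed endpoint
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = get_model_metadata_pb2.GetModelMetadataRequest()
request.model_spec.name = "my_model"            # assumed model name
request.metadata_field.append("signature_def")  # only supported field

response = stub.GetModelMetadata(request)
print(response.model_spec)
```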
What are some alternatives?
tensorflow - An Open Source Machine Learning Framework for Everyone
ML-examples - Arm Machine Learning tutorials and examples
TNN - A uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by its cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, it further strengthens support and performance optimization for mobile devices while drawing on the extensibility and performance of existing open-source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome to collaborate with us and make TNN a better framework.
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
oneflow - OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
flashlight - A C++ standalone library for machine learning
XLA.jl - Julia on TPUs
glow - Compiler for Neural Network hardware accelerators
OpenMLDB - OpenMLDB is an open-source machine learning database that provides a feature platform computing consistent features for training and inference.
julia - The Julia Programming Language
runtime - A performant and modular runtime for TensorFlow