serving vs flashlight
| | serving | flashlight |
|---|---|---|
| Mentions | 12 | 16 |
| Stars | 6,071 | 5,145 |
| Growth | 0.2% | 1.1% |
| Activity | 9.8 | 7.7 |
| Latest commit | 1 day ago | 23 days ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
serving
-
Llama.cpp: Full CUDA GPU Acceleration
Yet another tedious battle: the Python vs. C/C++ stack.
This project gained popularity due to the high demand for running large models with 1B+ parameters, like `llama`. Python dominates the interface and training ecosystem, but prior to llama.cpp, non-ML professionals showed little interest in a fast C++ inference library. While existing solutions like tensorflow-serving [1] in C++ were sufficiently fast with GPU support, llama.cpp took the initiative to optimize for CPU and trim unnecessary code, essentially code-golfing and sacrificing some algorithmic correctness for improved performance, an approach that "ML research" tends to frown on.
NOTE: In my opinion, a true pioneer was DarkNet, which implemented the YOLO model series and significantly outperformed others [2], using essentially the same trick as llama.cpp.
[1] https://github.com/tensorflow/serving
-
[D] How do OpenAI and other companies manage to have real-time inference on model with billions of parameters over an API?
I mean, probably - it's written in C++ https://github.com/tensorflow/serving
-
Should I wait for the M2 Macbook Pro?
We're looking into that solution at the moment. The issue I'm referring to is related to this: https://github.com/tensorflow/serving/issues/1948. We'll know soon whether the plug-in approach works for our use case, but we haven't started implementing it yet.
- TF Serving has been unavailable for 9 days so far due to an outdated GPG key
- TF Serving has been unavailable for 8 days
-
Would you use maturin for ML model serving?
Which ML framework do you use? TensorFlow has https://github.com/tensorflow/serving. You could also use the Rust bindings to load a SavedModel and expose it through one of the Rust HTTP servers. It doesn't matter that you trained your model in Python, as long as you export it as a SavedModel.
-
Is LaMDA Sentient? – An Interview [pdf]
Most likely it's a model server running something like https://github.com/tensorflow/serving and if there isn't a lot of load, the resource could kill some of its tasks. I wouldn't imagine it's sitting around pondering deep thoughts.
-
Ask HN: How to deploy a TensorFlow model for access through an HTTP endpoint?
https://github.com/tensorflow/serving
https://thenewstack.io/tutorial-deploying-tensorflow-models-...
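To make the deployment question above concrete, here is a minimal sketch of calling TF Serving's REST predict endpoint from Python once a server is running. The model name `my_model`, host, and the default REST port 8501 are assumptions for illustration; adapt them to your deployment.

```python
import json
import urllib.request


def build_predict_request(instances, host="localhost", port=8501, model="my_model"):
    """Build the URL and JSON body for TF Serving's REST predict endpoint."""
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return url, body


def predict(instances, **kwargs):
    """POST the request and return the 'predictions' field of the response."""
    url, body = build_predict_request(instances, **kwargs)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]
```

A typical workflow is to export a SavedModel, point a TF Serving container at it, and then call `predict([[1.0, 2.0]])` from any client; nothing on the serving side runs Python.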
-
Popular Machine Learning Deployment Tools
-
If data science uses a lot of computational power, then why is python the most used programming language?
You serve models via https://www.tensorflow.org/tfx/guide/serving which is written entirely in C++ (https://github.com/tensorflow/serving/tree/master/tensorflow_serving/model_servers), no Python on the serving path or in the shipped product.
flashlight
-
MatX: Efficient C++17 GPU numerical computing library with Python-like syntax
I think a comparison to PyTorch, TensorFlow and/or JAX is more relevant than a comparison to CuPy/NumPy.
And then maybe also a comparison to Flashlight (https://github.com/flashlight/flashlight) or other C/C++ based ML/computing libraries?
Also, there is no mention of it, so I suppose this does not support automatic differentiation?
-
Project Resources
This Facebook AI project seems reasonably structured, judging by its CMakeLists.txt. CMake is a build-system generator for C++; it's how you produce the binaries to run the project: https://github.com/flashlight/flashlight
-
Meta AI Open Sources Flashlight: Fast and Flexible Machine Learning Toolkit in C++
- Flashlight: A C++ standalone library for machine learning
-
[D] Deep Learning Framework for C++.
I built and maintain Flashlight, a C++-first library for ML/DL. We built Flashlight to be:
- [R] C++ for Machine Learning
-
What is the most used library for AI in C++ ?
I’ve never used it, but Facebook’s flashlight looks interesting
-
Python.
Flashlight bro, not flash. Read again
-
Mozilla Common Voice Adds 16 New Languages and 4,600 New Hours of Speech
I've had good results with https://github.com/flashlight/flashlight/blob/master/flashli.... Seems to work well with spoken English in a variety of accents. Biggest limitation is that the architecture they have pretrained models for doesn't really work well with clips longer than ~15 seconds, so you have to segment your input files.
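The workaround mentioned above, segmenting long clips before feeding them to the pretrained model, can be sketched as follows. The `segment` helper and the 16 kHz sample rate are assumptions for illustration, not part of Flashlight's API.

```python
def segment(samples, sample_rate=16000, max_seconds=15):
    """Split a flat list of audio samples into chunks of at most max_seconds."""
    chunk = sample_rate * max_seconds
    return [samples[i:i + chunk] for i in range(0, len(samples), chunk)]
```

Each chunk can then be transcribed independently; for better results you would want to cut at silences rather than at fixed offsets, e.g. with a voice-activity detector.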
- [D] C++ in Machine Learning.
What are some alternatives?
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
MNN - MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
XLA.jl - Julia on TPUs
PaddleSpeech - Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.
oneflow - OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
STT - 🐸STT - The deep learning toolkit for Speech-to-Text. Training and deploying STT models has never been so easy.
glow - Compiler for Neural Network hardware accelerators
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
runtime - A performant and modular runtime for TensorFlow
DNS-Challenge - This repo contains the scripts, models, and required files for the Deep Noise Suppression (DNS) Challenge.