serving
glow
| | serving | glow |
|---|---|---|
| Mentions | 12 | 6 |
| Stars | 6,070 | 3,145 |
| Growth | 0.2% | 1.2% |
| Activity | 9.8 | 8.1 |
| Latest commit | 6 days ago | 5 days ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
serving
-
Llama.cpp: Full CUDA GPU Acceleration
Yet another TEDIOUS BATTLE: Python vs. C++/C stack.
This project gained popularity due to the HIGH DEMAND for running large models with 1B+ parameters, like `llama`. Python dominates the interface and training ecosystem, but prior to llama.cpp, non-ML professionals showed little interest in a fast C++ interface library. Existing solutions like tensorflow-serving [1], written in C++, were fast enough and had GPU support, but llama.cpp took the initiative to optimize for CPU and trim unnecessary code, essentially code-golfing and sacrificing some algorithmic correctness for performance, a trade-off that isn't favored by "ML research".
NOTE: In my opinion, a true pioneer was DarkNet, which implemented the YOLO model series and significantly outperformed others [2]. It was basically the same trick as llama.cpp.
[1] https://github.com/tensorflow/serving
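To make that correctness-for-speed trade-off concrete, here is a minimal sketch (not llama.cpp's actual code; the shapes and tolerances are made up) of the kind of int8 weight quantization that trades a small reconstruction error for 4x smaller weights and faster integer kernels:

```python
import numpy as np

# Hypothetical float32 weight matrix (stand-in for a model layer).
rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512)).astype(np.float32)

# Symmetric int8 quantization: one scale per row (per output channel).
scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize to see what the matmul will actually "see".
w_dq = w_q.astype(np.float32) * scale

# The speed win comes from smaller weights and integer kernels;
# the cost is a small, bounded reconstruction error.
err = np.abs(w - w_dq).max()
print(f"max abs quantization error: {err:.5f}")
```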
-
[D] How do OpenAI and other companies manage to have real-time inference on models with billions of parameters over an API?
I mean, probably - it's written in C++ https://github.com/tensorflow/serving
-
Should I wait for the M2 Macbook Pro?
We’re looking into that solution at the moment. The issue I’m referring to is related to https://github.com/tensorflow/serving/issues/1948. We’ll know soon whether the plug-in approach works for our use case, but we haven’t started implementing it yet.
-
TF Serving has been unavailable for 9 days so far due to outdated GPG key
-
TF Serving has been unavailable for 8 days
-
Would you use maturin for ML model serving?
Which ML framework do you use? TensorFlow has https://github.com/tensorflow/serving. You could also use the Rust bindings to load a SavedModel and expose it through one of the Rust HTTP servers. It doesn't matter that you trained your model in Python, as long as you export it as a SavedModel.
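As a sketch of that export step (assumes TensorFlow 2.x; the toy model and the `models/my_model/1` path are placeholders, and the numeric version directory follows TF Serving's discovery convention):

```python
import tensorflow as tf

# A toy stand-in for whatever model you actually trained.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model(tf.zeros([1, 4]))  # build the model with a known input shape

# Write the language-agnostic SavedModel format; TF Serving and the
# Rust bindings both load this directory, so Python stops mattering here.
tf.saved_model.save(model, "models/my_model/1")
```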
-
Is LaMDA Sentient? – An Interview [pdf]
Most likely it's a model server running something like https://github.com/tensorflow/serving, and if there isn't a lot of load, the resource manager could kill some of its tasks. I wouldn't imagine it's sitting around pondering deep thoughts.
-
Ask HN: How to deploy a TensorFlow model for access through an HTTP endpoint?
https://github.com/tensorflow/serving
https://thenewstack.io/tutorial-deploying-tensorflow-models-...
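Concretely, once the model server is running, TF Serving exposes a REST predict endpoint on port 8501 by default; a minimal client looks something like this (the model name `my_model` and the input vector are placeholders):

```python
import json
import urllib.request

# TF Serving's REST API: POST /v1/models/<name>:predict on port 8501.
# "my_model" and the input values are placeholders for illustration.
url = "http://localhost:8501/v1/models/my_model:predict"
payload = json.dumps({"instances": [[1.0, 2.0, 3.0, 4.0]]}).encode("utf-8")

req = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["predictions"])
```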
-
Popular Machine Learning Deployment Tools
-
If data science uses a lot of computational power, then why is python the most used programming language?
You serve models via https://www.tensorflow.org/tfx/guide/serving, which is written entirely in C++ (https://github.com/tensorflow/serving/tree/master/tensorflow_serving/model_servers); there is no Python on the serving path or in the shipped product.
glow
-
Accelerating AI inference?
PyTorch supports other kinds of accelerators (e.g. FPGAs, and https://github.com/pytorch/glow), but unless you want to become an ML systems engineer and have money and time to throw away, or a business case to fund it, it is not worth it. In general, both PyTorch and TensorFlow have hardware abstractions that compile down to device code (XLA, https://github.com/pytorch/xla, https://github.com/pytorch/glow). TPUs and GPUs have very different strengths, so getting top performance requires a lot of manual optimization. Considering the cost of training LLMs, it is time well spent.
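For a sense of what that handoff looks like in practice: compilers like Glow consume an exported graph rather than Python, so a typical path is to trace the model and export it to ONNX (the toy model and file name below are placeholders):

```python
import torch
import torch.nn as nn

# Toy model standing in for whatever you want to accelerate.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

# Export a traced graph to ONNX; compilers like Glow or TVM can then
# lower this graph to device-specific code. The file name is arbitrary.
example_input = torch.randn(1, 4)
torch.onnx.export(model, example_input, "model.onnx")
```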
-
Decompiling x86 Deep Neural Network Executables
It's pretty clear it's referring to the output of Apache TVM and Meta's Glow.
-
US government bans export of NVIDIA A100 to China and Russia, effective immediately
I also disagree with this. For example, Meta seems desperate about AI accelerators, and in fact it is already doing the "hardware customers develop the software stack themselves" approach I mentioned above: Glow is that stack. Meta is building Glow even though there is no promising AI accelerator right now; they are that desperate.
-
If data science uses a lot of computational power, then why is python the most used programming language?
For reference: in TensorFlow and JAX, for example, the computation gets compiled to the intermediate XLA format (https://www.tensorflow.org/xla), then passed to the XLA compiler (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/xla/service), or the new TFRT runtime (https://github.com/tensorflow/runtime/blob/master/documents/tfrt_host_runtime_design.md), or some more esoteric hardware (https://github.com/pytorch/glow).
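A small JAX illustration of that pipeline (the function is arbitrary): `jax.jit` stages the Python function out to an intermediate representation and hands it to XLA for compilation.

```python
import jax
import jax.numpy as jnp

def f(x):
    # An arbitrary toy computation; any traceable function works.
    return jnp.sum(jnp.tanh(x) ** 2)

x = jnp.arange(8.0)

# Inspect the staged-out intermediate representation (jaxpr)...
print(jax.make_jaxpr(f)(x))

# ...and run the XLA-compiled version. The first call triggers
# compilation; later calls reuse the compiled executable.
print(jax.jit(f)(x))
```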
-
Esperanto Champions the Efficiency of Its 1,092-Core RISC-V Chip
The main reasons are hiring, and the depth and breadth of the product.
Compilers are hard, device support is hard, the compiler community is small, and closed-source compilers quickly become weird tech islands.
https://github.com/pytorch/glow
-
From Julia to Rust
What are some alternatives?
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
MNN - MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
XLA.jl - Julia on TPUs
flashlight - A C++ standalone library for machine learning
StaticArrays.jl - Statically sized arrays for Julia
egg - egg is a flexible, high-performance e-graph library
oneflow - OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
runtime - A performant and modular runtime for TensorFlow
Catlab.jl - A framework for applied category theory in the Julia language