oneflow
serving
| | oneflow | serving |
|---|---|---|
| Mentions | 32 | 12 |
| Stars | 5,721 | 6,071 |
| Growth | 1.8% | 0.2% |
| Activity | 8.4 | 9.8 |
| Latest commit | 3 days ago | 1 day ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
oneflow
- OneFlow v0.9.0 Came Out! (A Distributed Deep Learning Framework)
-
OneFlow v0.9.0 Came Out!
We are thrilled to announce the new release of OneFlow, a deep learning framework designed to be user-friendly, scalable, and efficient. OneFlow v0.9.0 contains 640 commits. For the full changelog, please check out: https://github.com/Oneflow-Inc/oneflow/releases/tag/v0.9.0.
-
[P] OneFlow v0.9.0 Came Out!
Found relevant code at https://github.com/Oneflow-Inc/oneflow
-
[P] Probably the Fastest Open Source Stable Diffusion is released
Check out OneFlow on GitHub . We'd love to hear your feedback!
-
Probably the Fastest Open Source Stable Diffusion is released
OneFlow URL: https://github.com/Oneflow-Inc/oneflow/
-
[D] What framework are you using?
No other options? :) We are developing a new distributed DL framework called OneFlow, which is faster than other frameworks and easier to use. It now provides more, and better, PyTorch-compatible APIs.
-
[P] OneFlow v0.8.0 Came Out!
Code for https://arxiv.org/abs/2110.15032 found: https://github.com/Oneflow-Inc/oneflow
-
The Execution Process of a Tensor in a Deep Learning Framework [R]
This article focuses on what happens behind the execution of a Tensor in the deep learning framework OneFlow. It takes the operator oneflow.relu as an example to introduce the Interpreter and VM mechanisms that the framework relies on to execute this operator.
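The interpreter-plus-VM pipeline described above can be sketched in a few lines of Python. This is a toy illustration of the general pattern (an interpreter turns each op call into an instruction, and a small virtual machine executes the queued instructions), not OneFlow's actual classes or API; all names here are made up for the example.

```python
from collections import deque

class Instruction:
    """One dispatched op: its name, the kernel to run, and its inputs."""
    def __init__(self, op_name, fn, inputs):
        self.op_name = op_name
        self.fn = fn
        self.inputs = inputs
        self.output = None

class VirtualMachine:
    """Executes instructions in the order the interpreter dispatched them."""
    def __init__(self):
        self.queue = deque()

    def receive(self, instruction):
        self.queue.append(instruction)

    def run(self):
        while self.queue:
            inst = self.queue.popleft()
            inst.output = inst.fn(*inst.inputs)

class Interpreter:
    """Builds an instruction for each eager op call and hands it to the VM."""
    def __init__(self, vm):
        self.vm = vm

    def dispatch(self, op_name, fn, *inputs):
        inst = Instruction(op_name, fn, inputs)
        self.vm.receive(inst)
        return inst

vm = VirtualMachine()
interp = Interpreter(vm)

# relu(x) = max(x, 0), applied elementwise to a plain list "tensor"
inst = interp.dispatch("relu", lambda xs: [max(x, 0.0) for x in xs],
                       [-1.0, 0.5, 2.0])
vm.run()
print(inst.output)  # [0.0, 0.5, 2.0]
```

The key point the sketch captures is the decoupling: the interpreter returns immediately after enqueueing, so in a real framework the VM can run kernels asynchronously on a device while the Python side keeps dispatching.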
-
Explore MLIR Development Process
This article describes how OneFlow works with MLIR, how to add a graph-level Pass to OneFlow IR, how OneFlow Operations automatically become MLIR Operations, and why OneFlow IR can use MLIR to accelerate computations.
-
The History of Credit-based Flow Control (Part 1)
The backpressure mechanism, also known as credit-based flow control, is a classic scheme for flow-control problems in network communication. Its predecessor is the TCP sliding window. The idea is remarkably simple and effective: as this article shows, the same principle applies to any flow-control scheme and appears in the design of many hardware and software systems. In this article, an engineer at OneFlow recounts the chequered history of this simple idea.
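The core loop of credit-based flow control can be sketched in a few lines: the receiver grants one credit per free buffer slot, the sender may only transmit while it holds credits, and each slot freed by the receiver returns a credit. This is a generic illustration of the principle, not any particular system's protocol.

```python
from collections import deque

class Receiver:
    def __init__(self, buffer_slots):
        self.buffer = deque()
        self.credits_to_grant = buffer_slots  # one credit per free slot

    def deliver(self, msg):
        self.buffer.append(msg)

    def process_one(self):
        msg = self.buffer.popleft()
        self.credits_to_grant += 1  # slot freed -> return a credit
        return msg

class Sender:
    def __init__(self, receiver):
        self.receiver = receiver
        self.credits = 0
        self.blocked = 0

    def refresh_credits(self):
        self.credits += self.receiver.credits_to_grant
        self.receiver.credits_to_grant = 0

    def send(self, msg):
        if self.credits == 0:
            self.blocked += 1   # in a real system: wait for credits
            return False
        self.credits -= 1
        self.receiver.deliver(msg)
        return True

recv = Receiver(buffer_slots=2)
sender = Sender(recv)
sender.refresh_credits()             # initial grant: 2 credits
sent = [sender.send(i) for i in range(3)]
print(sent)                          # [True, True, False]: backpressure kicks in
recv.process_one()                   # receiver frees a slot
sender.refresh_credits()             # credit flows back to the sender
print(sender.send(99))               # True
```

Because the sender can never hold more credits than the receiver has free slots, the receiver's buffer can never overflow, which is exactly the guarantee the TCP sliding window provides.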
serving
-
Llama.cpp: Full CUDA GPU Acceleration
Yet another tedious battle: Python vs. the C++/C stack.
This project gained popularity due to the HIGH DEMAND for running large models with 1B+ parameters, like `llama`. Python dominates the interface and training ecosystem, but prior to llama.cpp, non-ML professionals showed little interest in a fast C++ interface library. While existing solutions like tensorflow-serving [1] in C++ were sufficiently fast with GPU support, llama.cpp took the initiative to optimize for CPU and trim unnecessary code, essentially code-golfing and sacrificing some algorithm correctness for improved performance, which isn't favored by "ML research".
NOTE: In my opinion, a true pioneer was DarkNet, which implemented the YOLO model series and significantly outperformed others [2]. Basically the same trick as llama.cpp.
[1] https://github.com/tensorflow/serving
-
[D] How do OpenAI and other companies manage to have real-time inference on model with billions of parameters over an API?
I mean, probably - it's written in C++ https://github.com/tensorflow/serving
-
Should I wait for the M2 Macbook Pro?
We’re looking into that solution at the moment. The issue I’m referring to is related to https://github.com/tensorflow/serving/issues/1948. We’ll know whether the plug-in approach works for our uses soon, but we haven’t started implementing it yet.
- TF Serving has been unavailable for 9 days so far due to an outdated GPG key
- TF Serving has been unavailable for 8 days
-
Would you use maturin for ML model serving?
Which ML framework do you use? TensorFlow has https://github.com/tensorflow/serving. You could also use the Rust bindings to load a saved model and expose it through one of the Rust HTTP servers. It doesn't matter that you trained your model in Python, as long as you export it as a SavedModel.
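For comparison, once a SavedModel is running under TF Serving, clients hit its documented REST `:predict` endpoint. The helper below only builds the URL and JSON body in the shape that API expects; the host and model name are placeholders, and the actual HTTP call is left as a comment.

```python
import json

def predict_request(host, model_name, instances, version=None):
    """Build the URL and JSON body for a TF Serving :predict call."""
    version_part = f"/versions/{version}" if version is not None else ""
    url = f"http://{host}:8501/v1/models/{model_name}{version_part}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = predict_request("localhost", "my_model", [[1.0, 2.0, 3.0]])
print(url)   # http://localhost:8501/v1/models/my_model:predict
# To actually send it: requests.post(url, data=body).json()["predictions"]
```

This is also why the training language stops mattering at serving time: the client speaks plain JSON over HTTP, whether it is written in Python, Rust, or anything else.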
-
Is LaMDA Sentient? – An Interview [pdf]
Most likely it's a model server running something like https://github.com/tensorflow/serving and if there isn't a lot of load, the resource could kill some of its tasks. I wouldn't imagine it's sitting around pondering deep thoughts.
-
Ask HN: How to deploy a TensorFlow model for access through an HTTP endpoint?
https://github.com/tensorflow/serving
https://thenewstack.io/tutorial-deploying-tensorflow-models-...
-
Popular Machine Learning Deployment Tools
GitHub
-
If data science uses a lot of computational power, then why is python the most used programming language?
You serve models via https://www.tensorflow.org/tfx/guide/serving which is written entirely in C++ (https://github.com/tensorflow/serving/tree/master/tensorflow_serving/model_servers), no Python on the serving path or in the shipped product.
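The point about an all-C++ serving path is easiest to see in how TF Serving is typically deployed: the official Docker image runs `tensorflow_model_server` directly, with no Python in the container's serving path. A minimal invocation, assuming a SavedModel exported to `/models/my_model` on the host (the model name and path are placeholders):

```shell
# Serve a SavedModel over REST (port 8501) and gRPC (port 8500).
docker run -p 8501:8501 -p 8500:8500 \
  --mount type=bind,source=/models/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model \
  -t tensorflow/serving
```

Python (or any other language) then only appears on the client side, sending requests to the server's HTTP or gRPC endpoint.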
What are some alternatives?
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
stable-diffusion-webui - Stable Diffusion web UI
MNN - MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
flashlight - A C++ standalone library for machine learning
XLA.jl - Julia on TPUs
kompute - General-purpose GPU compute framework built on Vulkan to support thousands of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous, and optimized for advanced GPU data-processing use cases. Backed by the Linux Foundation.
glow - Compiler for Neural Network hardware accelerators
tensorflow - An Open Source Machine Learning Framework for Everyone
runtime - A performant and modular runtime for TensorFlow