oneflow
dl_bench
oneflow
- OneFlow v0.9.0 Came Out! - A Distributed Deep Learning Framework
-
OneFlow v0.9.0 Came Out!
We are thrilled to announce the new release of OneFlow, a deep learning framework designed to be user-friendly, scalable, and efficient. OneFlow v0.9.0 contains 640 commits. For the full changelog, please check out: https://github.com/Oneflow-Inc/oneflow/releases/tag/v0.9.0.
-
[P] OneFlow v0.9.0 Came Out!
Found relevant code at https://github.com/Oneflow-Inc/oneflow
-
[P] Probably the Fastest Open Source Stable Diffusion is released
Check out OneFlow on GitHub . We'd love to hear your feedback!
-
Probably the Fastest Open Source Stable Diffusion is released
OneFlow URL: https://github.com/Oneflow-Inc/oneflow/
-
[D] What framework are you using?
No other options? :) We are developing a new distributed DL framework called OneFlow, which is faster than other frameworks and easier to use. It now provides more, and better, PyTorch-compatible APIs.
-
[P] OneFlow v0.8.0 Came Out!
Code for https://arxiv.org/abs/2110.15032 found: https://github.com/Oneflow-Inc/oneflow
-
[R] The Execution Process of a Tensor in a Deep Learning Framework
This article looks at what happens behind the scenes when a Tensor is executed in the deep learning framework OneFlow. It takes the operator oneflow.relu as an example to introduce the Interpreter and VM mechanisms that its execution relies on.
-
Explore MLIR Development Process
This article describes how OneFlow works with MLIR, how to add a graph-level Pass to OneFlow IR, how OneFlow Operations automatically become MLIR Operations, and why OneFlow IR can use MLIR to accelerate computations.
-
The History of Credit-based Flow Control (Part 1)
The backpressure mechanism, also known as credit-based flow control, is a classic scheme for solving flow control problems in network communication. Its predecessor is the TCP sliding window. The idea is remarkably simple and effective: as this article shows, the same principle applies to any flow control scheme and appears in the design of many hardware and software systems. In this article, a OneFlow engineer tells the chequered history of this simple idea.
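The idea summarized above can be sketched in a few lines (hypothetical names, not OneFlow's actual implementation): the receiver advertises "credits" equal to its free buffer slots, the sender may only transmit while it holds credits, and consuming a message returns a credit. The sender therefore can never overflow the receiver; when credits hit zero it must stall, and that stall is the backpressure.

```python
# Minimal sketch of credit-based flow control / backpressure.
from collections import deque

class Receiver:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()

    def free_slots(self):
        return self.capacity - len(self.buffer)

    def deliver(self, msg):
        # The credit protocol guarantees this never overflows.
        assert len(self.buffer) < self.capacity
        self.buffer.append(msg)

    def consume(self):
        # Processing a message frees a slot; in a real system the
        # freed slot is signaled back to the sender as one credit.
        return self.buffer.popleft()

class Sender:
    def __init__(self, receiver):
        self.receiver = receiver
        self.credits = receiver.free_slots()  # initial credit grant

    def try_send(self, msg):
        if self.credits == 0:
            return False              # blocked: backpressure in action
        self.receiver.deliver(msg)
        self.credits -= 1
        return True

    def grant(self, n=1):
        self.credits += n             # receiver returned n credits

rx = Receiver(capacity=2)
tx = Sender(rx)
sent = [tx.try_send(i) for i in range(4)]  # only 2 messages fit
print(sent)                                # [True, True, False, False]
rx.consume()                               # frees one slot...
tx.grant()                                 # ...returned as one credit
print(tx.try_send(99))                     # True
```

The TCP sliding window works the same way: the advertised window is the receiver's free buffer space, and the sender never has more unacknowledged bytes in flight than that window allows.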
dl_bench
-
[D] Deep Learning Framework Benchmark
I wrote a blog article about benchmarking deep learning frameworks; the code is open-sourced on GitHub.
What are some alternatives?
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
stable-diffusion-webui - Stable Diffusion web UI
MNN - MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
flashlight - A C++ standalone library for machine learning
serving - A flexible, high-performance serving system for machine learning models
tensorflow - An Open Source Machine Learning Framework for Everyone
ML-examples - Arm Machine Learning tutorials and examples
elbencho - A distributed storage benchmark for file systems, object stores & block devices with support for GPUs
signatory - Differentiable computations of the signature and logsignature transforms, on both CPU and GPU. (ICLR 2021)
onediff - OneDiff: An out-of-the-box acceleration library for diffusion models. [Moved to: https://github.com/siliconflow/onediff]