| | pytorch_dlprim | oneDNN |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 208 | 3,471 |
| Growth | - | 1.7% |
| Activity | 5.9 | 10.0 |
| Latest commit | about 1 month ago | about 14 hours ago |
| Language | C++ | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pytorch_dlprim
- Linus Tech Tips: "China doesn't want me to have this GPU" (a review of the Moore Threads MTT S80 GPU)
I know PyTorch supports OpenCL now, and you can do training with it as well. See here. Never tried it myself.
- [P] OpenCL backend for PyTorch - progress works with mainstream pytorch
I'm working on a PyTorch OpenCL backend based on the dlprimitives core library. It has existed for a while, but until now it required building a custom PyTorch version.
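The mention above describes an out-of-tree OpenCL backend that now works with mainstream PyTorch. A minimal sketch of detecting and selecting such a backend follows; the module name `pytorch_ocl` and the device string `"ocl:0"` are assumptions for illustration, so consult the project's README for the exact names.

```python
# Sketch: fall back to CPU when the OpenCL backend is not installed.
# The module name "pytorch_ocl" and device string "ocl:0" are assumed,
# not confirmed from the project's documentation.
import importlib.util


def pick_device() -> str:
    """Return the OpenCL device string if the backend is importable, else 'cpu'."""
    if importlib.util.find_spec("pytorch_ocl") is not None:
        import pytorch_ocl  # noqa: F401  # importing registers the 'ocl' device
        return "ocl:0"
    return "cpu"


if __name__ == "__main__":
    dev = pick_device()
    # With the backend loaded, tensors would move to the device as usual:
    #   import torch
    #   x = torch.randn(64, 64, device=dev)
    print(dev)
```

Guarding the import like this lets the same script run unchanged on machines with and without the OpenCL backend present.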
- [P] Progress with OpenCL backend for pytorch
oneDNN
- Blaze: A High Performance C++ Math library
If you are talking about non-small matrix multiplication in MKL, it is now open source as part of oneDNN. It has literally the same code as MKL (you can verify this by inspecting constants or doing high-precision benchmarks).
For small matmuls there is libxsmm. It would take tremendous effort to make something faster than oneDNN and libxsmm, as the JIT-based approach of https://github.com/oneapi-src/oneDNN/blob/main/src/gpu/jit/g... is highly flexible: if someone finds a better instruction sequence, oneDNN can reuse it without a major change of design.
But MKL is not limited to matmul, I understand that...
- Arc & Deep Learning Frameworks
For completeness: it looks like this question was also posted to the oneDNN GitHub repo, and the response was to stay tuned for updates.
- Keeping POWER relevant in the open source world
- Intel oneDNN 2.5 released with experimental RISC-V support
From the release notes of oneDNN v2.5:
- Is GPU hardware tied to the CPU ISA?
Intel is trying to support its oneAPI compute framework on Arm, IBM POWER, and z/Architecture (s390x), but since it has so far released only a single discrete GPU with the Xe architecture, it is unclear whether it will support Xe GPU compute on e.g. Arm: https://github.com/oneapi-src/oneDNN
What are some alternatives?
dlprimitives - Deep Learning Primitives and Mini-Framework for OpenCL
oneMKL - oneAPI Math Kernel Library (oneMKL) Interfaces
mace - MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
CTranslate2 - Fast inference engine for Transformer models
Boost.Compute - A C++ GPU Computing Library for OpenCL
oneDPL - oneAPI DPC++ Library (oneDPL) https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/dpc-library.html
FluidX3D - The fastest and most memory efficient lattice Boltzmann CFD software, running on all GPUs via OpenCL.
highway - A Modern JavaScript Transitions Manager
asmjit - Low-latency machine code generation
librealsense - Intel® RealSense™ SDK
Reloaded-II - Next Generation Universal .NET Core Powered Mod Loader compatible with anything X86, X64.
faasm - High-performance stateful serverless runtime based on WebAssembly