| | dlprimitives | oneDNN |
|---|---|---|
| Mentions | 7 | 5 |
| Stars | 156 | 3,461 |
| Growth | - | 1.7% |
| Activity | 3.8 | 10.0 |
| Last commit | 5 months ago | 6 days ago |
| Language | C++ | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
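The recency weighting described above can be sketched as an exponentially decayed commit count. The half-life, the scale of the score, and the function name are illustrative assumptions, not the site's actual formula:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Sum commit weights that decay exponentially with age.

    Each commit contributes 2 ** (-age / half_life), so a commit from
    today counts 1.0 and one a half-life old counts 0.5. The half-life
    and the resulting scale are illustrative only.
    """
    return sum(2.0 ** (-age / half_life_days) for age in commit_ages_days)

# A project with mostly recent commits outscores one with the same
# number of commits made long ago.
recent = activity_score([1, 2, 3, 5, 8])        # five recent commits
stale = activity_score([200, 220, 240, 260, 280])  # five old commits
```

Under any such weighting, the ranking depends on *when* commits happened, not just how many there were, which is why the table above can show a small project and a large one with very different activity numbers.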
dlprimitives
- Dlprimitives: Deep Learning Primitives and Mini-Framework for OpenCL
- [P] OpenCL backend for PyTorch - progress: works with mainstream PyTorch
I'm working on a PyTorch OpenCL backend based on the dlprimitives core library. It has existed for a while, but until now it required building a custom PyTorch version.
- [P] DLPrimitives - wondering about the best development direction
BTW, performance numbers: https://github.com/artyom-beilis/dlprimitives/blob/master/docs/benchmarks/benchmarks-gtx1080.md (I just added TF2 below, which is missing from the docs)
- [P] DLPrimitives - an OpenCL micro-framework and inference library
Full benchmarks can be found here: https://github.com/artyom-beilis/dlprimitives/blob/master/docs/summary.md
- [P] OpenCL Deep Learning Primitives Library
oneDNN
- Blaze: A High Performance C++ Math Library
If you are talking about non-small matrix multiplication in MKL, it is now open source as part of oneDNN. It literally has exactly the same code as in MKL (you can see this by inspecting constants or doing high-precision benchmarks).
For small matmul there is libxsmm. It may take tremendous effort to make something faster than oneDNN and libxsmm, as the JIT-based approach of https://github.com/oneapi-src/oneDNN/blob/main/src/gpu/jit/g... is very flexible: if someone finds a better instruction sequence, oneDNN can reuse it without a major change of design.
But MKL is not limited to matmul, I understand it...
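The design point in the excerpt above, pluggable code sequences that can be swapped in without restructuring the library, can be illustrated with a minimal shape-based kernel registry. The registry, predicates, and shape cutoff here are hypothetical; this is not oneDNN's actual dispatch mechanism:

```python
# Minimal sketch of shape-based kernel dispatch: specialized kernels
# register themselves with a shape predicate, and a better kernel can
# be dropped in later without changing any caller. Illustrative only.

KERNELS = []  # list of (predicate, implementation); newest first

def register(predicate):
    def wrap(fn):
        KERNELS.insert(0, (predicate, fn))  # newest registration wins
        return fn
    return wrap

def matmul(a, b):
    """Dispatch to the first registered kernel whose predicate matches."""
    m, k, n = len(a), len(b), len(b[0])
    for pred, fn in KERNELS:
        if pred(m, k, n):
            return fn(a, b)
    raise RuntimeError("no kernel for this shape")

@register(lambda m, k, n: True)  # generic fallback: naive triple loop
def generic(a, b):
    n = len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(len(b)))
             for j in range(n)] for i in range(len(a))]

@register(lambda m, k, n: m * k * n <= 64)  # "small matmul" specialist
def small(a, b):
    # A real library would emit a tuned code sequence here; we just
    # delegate, since only the dispatch structure is being illustrated.
    return generic(a, b)

result = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```

Registering a faster kernel for some shape range only adds one entry to the table; the rest of the library is untouched, which is the flexibility the comment attributes to oneDNN's JIT design.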
- Arc & Deep Learning Frameworks
For completeness, it looks like this question was posted to the oneDNN GitHub repo and the response was to stay tuned for updates.
- Keeping POWER relevant in the open source world
- Intel oneDNN 2.5 released with experimental RISC-V support
From the release notes of oneDNN v2.5:
- Is GPU hardware tied to CPU ISA?
Intel is trying to support its oneAPI compute framework on Arm, IBM POWER, and z/Architecture (s390x), but since it has so far released only a single discrete GPU with the Xe architecture, it's unclear whether it will support Xe GPU compute on e.g. Arm: https://github.com/oneapi-src/oneDNN
What are some alternatives?
tensorflow-opencl - OpenCL support for TensorFlow
oneMKL - oneAPI Math Kernel Library (oneMKL) Interfaces
plaidml - PlaidML is a framework for making deep learning work everywhere.
CTranslate2 - Fast inference engine for Transformer models
AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
oneDPL - oneAPI DPC++ Library (oneDPL) https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/dpc-library.html
pytorch-coriander - OpenCL build of pytorch - (in-progress, not useable)
highway - Highway - A Modern Javascript Transitions Manager
pytorch_dlprim - DLPrimitives/OpenCL out of tree backend for pytorch
asmjit - Low-latency machine code generation
ParallelReductionsBenchmark - Thrust, CUB, TBB, AVX2, CUDA, OpenCL, OpenMP, SyCL - all it takes to sum a lot of numbers fast!
librealsense - Intel® RealSense™ SDK