oneDNN vs openvino
| | oneDNN | openvino |
|---|---|---|
| Mentions | 5 | 17 |
| Stars | 3,456 | 5,911 |
| Growth | 2.5% | 6.6% |
| Activity | 10.0 | 10.0 |
| Latest commit | 6 days ago | 3 days ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
oneDNN
- Blaze: A High Performance C++ Math library
If you are talking about non-small matrix multiplication in MKL, it is now open source as part of oneDNN. It has literally the same code as MKL (you can verify this by inspecting constants or running high-precision benchmarks).
For small matmuls there is libxsmm. It may take tremendous effort to make something faster than oneDNN and libxsmm, because the JIT-based approach of https://github.com/oneapi-src/oneDNN/blob/main/src/gpu/jit/g... is very flexible: if someone finds a better instruction sequence, oneDNN can reuse it without a major change of design.
But MKL is not limited to matmul, I understand it...
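The JIT approach referenced in the comment above generates shape-specialized kernels at runtime. As a rough plain-C++ illustration of the kind of register-friendly blocked loop such generators (oneDNN's JIT, libxsmm) emit as machine code, here is a toy fixed-size matmul kernel; this is illustrative only, not oneDNN or libxsmm source:

```cpp
#include <cstddef>

// Toy shape-specialized "microkernel": C += A * B, with M, N, K fixed
// at compile time. JIT generators produce shape-specialized code of this
// style directly as machine code, one kernel per (M, N, K) it encounters.
template <std::size_t M, std::size_t N, std::size_t K>
void matmul_kernel(const float* A, const float* B, float* C) {
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t k = 0; k < K; ++k) {
            const float a = A[i * K + k];        // hoist the A element
            for (std::size_t j = 0; j < N; ++j)  // unit stride over B and C
                C[i * N + j] += a * B[k * N + j];
        }
}
```

Instantiating `matmul_kernel<2, 2, 2>` accumulates a 2x2 product into `C`; a real generator would additionally vectorize and unroll the inner loop for the target ISA.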
- Arc & Deep Learning Frameworks
For completeness, it looks like this question was also posted to the oneDNN GitHub repo, and the response was to stay tuned for updates.
- Keeping POWER relevant in the open source world
- Intel oneDNN 2.5 released with experimental RISC-V support
From the release notes of oneDNN v2.5:
- Is GPU hardware tied to CPU ISA?
Intel is trying to support its oneAPI compute framework on Arm, IBM POWER, and z/Architecture (s390x), but since it has so far released only a single discrete GPU with the Xe architecture, it is unclear whether it will support Xe GPU compute on e.g. ARM: https://github.com/oneapi-src/oneDNN
openvino
- FLaNK Stack 05 Feb 2024
- QUIK is a method for quantizing LLM post-training weights to 4 bit precision
- Intel OpenVINO 2023.1.0 released
- Intel OpenVINO 2023.1.0 released, open-source toolkit for optimizing and deploying AI inference
- OpenVINO 2023.1.0 released
- [N] Intel OpenVINO 2023.1.0 released, open-source toolkit for optimizing and deploying AI inference
- Powering Anomaly Detection for Industry 4.0
Anomalib is an open-source deep learning library developed by Intel that makes it easy to benchmark different anomaly detection algorithms on both public and custom datasets, all by simply modifying a config file. As the largest public collection of anomaly detection algorithms and datasets, it has a strong focus on image-based anomaly detection. It’s a comprehensive, end-to-end solution that includes cutting-edge algorithms, relevant evaluation methods, prediction visualizations, hyperparameter optimization, and inference deployment code with Intel’s OpenVINO Toolkit.
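The config-driven workflow described above can be pictured with a YAML sketch of roughly this shape; the keys and values below are illustrative assumptions, not taken from a specific Anomalib release, so check the library's documentation for the actual schema of your installed version:

```yaml
# Illustrative Anomalib-style config (keys are assumptions; see the
# library's docs for the real schema of your version).
model:
  name: padim            # which anomaly detection algorithm to benchmark
dataset:
  name: mvtec            # public benchmark dataset
  category: bottle       # object category within the dataset
  image_size: 256
project:
  seed: 42               # fixed seed for reproducible benchmarks
```

Swapping the algorithm or dataset then amounts to editing `model.name` or the `dataset` section rather than changing code, which is what makes benchmarking across algorithms straightforward.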
What are some alternatives?
oneMKL - oneAPI Math Kernel Library (oneMKL) Interfaces
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
CTranslate2 - Fast inference engine for Transformer models
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
oneDPL - oneAPI DPC++ Library (oneDPL) https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/dpc-library.html
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
highway - A Modern Javascript Transitions Manager
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
asmjit - Low-latency machine code generation
neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
librealsense - Intel® RealSense™ SDK
nebuly - The user analytics platform for LLMs