|  | llvm | onnxruntime |
|---|---|---|
| Mentions | 10 | 54 |
| Stars | 1,166 | 12,804 |
| Growth | 3.9% | 3.3% |
| Activity | 10.0 | 10.0 |
| Latest commit | 7 days ago | 3 days ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llvm
- Vcc – The Vulkan Clang Compiler
Intel's modern compilers (icx, icpx) are clang-based. There is an open-source version [1], and the closed-source version is built atop it with extra closed-source special sauce.
AOCC and ROCm are also based on LLVM/clang.
[1] https://github.com/intel/llvm
- device::aspects?
You are not missing anything spec-wise; that particular version of the compiler/runtime simply doesn't support that query. Support for it was added in intel/llvm#7937 and it should be available in the next oneAPI release.
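For reference, a minimal sketch of the query being discussed, assuming a DPC++ recent enough to implement the SYCL 2020 `info::device::aspects` query:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::device dev{sycl::default_selector_v};

  // The SYCL 2020 device::aspects query discussed above; it is absent
  // (or throws) on runtimes that predate intel/llvm#7937.
  auto aspects = dev.get_info<sycl::info::device::aspects>();
  std::cout << dev.get_info<sycl::info::device::name>() << " reports "
            << aspects.size() << " aspects\n";

  // Individual aspects can also be tested directly:
  if (dev.has(sycl::aspect::fp64))
    std::cout << "fp64 is supported\n";
}
```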
- How to install OpenCL for AMD CPU?
Install the Intel OpenCL CPU Runtime. AMD CPUs are x86-64 too, so they work just like Intel CPUs do. AFAIK, performance is significantly better than with POCL. This also works with EPYC, like the new 96-core Genoa.
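A quick way to confirm the runtime is picked up is to enumerate platforms with the standard OpenCL C API (a minimal sketch; link with -lOpenCL):

```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
  cl_platform_id platforms[8];
  cl_uint count = 0;
  if (clGetPlatformIDs(8, platforms, &count) != CL_SUCCESS || count == 0) {
    std::fprintf(stderr, "no OpenCL platforms found\n");
    return 1;
  }
  for (cl_uint i = 0; i < count; ++i) {
    char name[256] = {0};
    clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof name, name, nullptr);
    // On an AMD box with the Intel CPU runtime installed, one of these
    // should be the Intel OpenCL CPU platform.
    std::printf("platform %u: %s\n", i, name);
  }
}
```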
- Modern Software Development Tools and oneAPI Part 2
```
The Meson build system
Version: 1.0.0
Source dir: /var/home/sri/Projects/simple-oneapi
Build dir: /var/home/sri/Projects/simple-oneapi/builddir
Build type: native build
Project name: simple-oneapi
Project version: 0.1.0
C compiler for the host machine: clang (clang 16.0.0 "clang version 16.0.0 (https://github.com/intel/llvm 08be083e07b1fd6437267e26adb92f1b647d57dd)")
C linker for the host machine: clang ld.bfd 2.34
C++ compiler for the host machine: clang++ (clang 16.0.0 "clang version 16.0.0 (https://github.com/intel/llvm 08be083e07b1fd6437267e26adb92f1b647d57dd)")
C++ linker for the host machine: clang++ ld.bfd 2.34
Host machine cpu family: x86_64
Host machine cpu: x86_64
Build targets in project: 1
Found ninja-1.11.1.git.kitware.jobserver-1 at /var/home/sri/.local/bin/ninja
```
- Modern Software Development Tools and oneAPI Part 1
```
$ sudo mkdir -p /opt/intel
$ sudo mkdir -p /etc/OpenCL/vendors/intel_fpgaemu.icd
$ cd /tmp
$ wget https://github.com/intel/llvm/releases/download/2022-WW50/oclcpuexp-2022.15.12.0.01_rel.tar.gz
$ wget https://github.com/intel/llvm/releases/download/2022-WW50/fpgaemu-2022.15.12.0.01_rel.tar.gz
$ sudo bash
# cd /opt/intel
# mkdir oclfpgaemu-
# cd oclfpgaemu-
# tar xvfpz /tmp/fpgaemu-2022.15.12.0.01_rel.tar.gz
# cd ..
# mkdir oclcpuexp_
# cd oclcpuexp-
# tar xvfpz /tmp/oclcpuexp-
# cd ..
```
- Cross Platform Computing Framework?
oneAPI includes an implementation of SYCL called DPC++. This implementation supports Intel, Nvidia, and AMD GPUs (currently, for Nvidia and AMD you need to build the support from source). oneAPI also includes libraries such as oneDNN and oneMKL that use SYCL.
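As an illustration, a minimal DPC++/SYCL vector add; the same source can be compiled for Intel, Nvidia, or AMD backends (the exact -fsycl-targets flags depend on your build):

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  sycl::queue q;  // default device: a GPU if one is available
  {
    sycl::buffer bufA{a}, bufB{b}, bufC{c};
    q.submit([&](sycl::handler &h) {
      sycl::accessor A{bufA, h, sycl::read_only};
      sycl::accessor B{bufB, h, sycl::read_only};
      sycl::accessor C{bufC, h, sycl::write_only};
      h.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  }  // buffer destructors copy results back to the vectors
  std::cout << "c[0] = " << c[0] << "\n";  // prints 3
}
```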
- Does an actually general purpose GPGPU solution exist?
Yes, you can use multiple backends with the same compiled binary. For example, you can use DPC++ with Nvidia, AMD, and Intel GPUs at the same time. ComputeCpp can also output a binary that targets multiple backends. Each backend generates the ISA for its GPU, and the SYCL runtime chooses the right one at execution time. There is no ODR violation because each GPU executable is stored in a separate ELF section and loaded at runtime: the C++ linker never sees them. The code doesn't need any extra layers; the only changes you might (but don't have to) make are optimizations for specific processor features.
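To see what that runtime selection has to choose from, a small sketch that lists every device across all loaded backends (Level Zero, OpenCL, CUDA, HIP, ...):

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  // One binary: the set of platforms depends only on which backend
  // plugins/drivers are present at run time.
  for (const auto &p : sycl::platform::get_platforms()) {
    std::cout << p.get_info<sycl::info::platform::name>() << "\n";
    for (const auto &d : p.get_devices())
      std::cout << "  " << d.get_info<sycl::info::device::name>() << "\n";
  }
}
```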
- Why Does SYCL Have Different Implementations, and What Version to Use for GPGPU Computing (With Slower CPU Mode for Testing/No GPU Machines)?
Intel LLVM SYCL oneAPI DPC++ - an open source implementation of SYCL that is being contributed to the LLVM project
- How to set up Intel oneAPI?
I'm using an Intel CPU, and after reading this I'm just curious: can I set this up with Portage? Are there any ebuilds to build this? Do I need the whole toolchain from Intel's site (3 GB+) or just the 300 MB tar from their GitHub?
- Benchmarking Division and Libdivide on Apple M1 and Intel AVX512
onnxruntime
- Machine Learning with PHP
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
- AI Inference now available in Supabase Edge Functions
Embedding generation uses ONNX Runtime under the hood. This is a cross-platform inferencing library that supports multiple execution providers, from CPU to specialized GPUs.
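For a sense of the API, a minimal C++ sketch: "model.onnx" is a placeholder path, and the commented-out CUDA lines assume a CUDA-enabled build of onnxruntime:

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
  Ort::SessionOptions opts;

  // Execution providers are appended in priority order; anything the
  // chosen provider can't handle falls back to the default CPU provider.
  // OrtCUDAProviderOptions cuda_opts{};
  // opts.AppendExecutionProvider_CUDA(cuda_opts);

  Ort::Session session(env, "model.onnx", opts);  // placeholder model path
  std::cout << "model inputs: " << session.GetInputCount() << "\n";
}
```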
- Deep Learning in JavaScript
tfjs is dead, looking at the commit history. The standard now is to convert PyTorch to ONNX, then use onnxruntime (https://github.com/microsoft/onnxruntime/tree/main/js/web) to run the model in the browser.
- FLaNK Stack 05 Feb 2024
- Vcc – The Vulkan Clang Compiler
- slang [2] has the potential, but its metaprogramming is not as strong as C++'s, and existing libraries cannot be used.
The above conclusion is drawn from my work on https://github.com/microsoft/onnxruntime/tree/dev/opencl; it was a pure nightmare to work with those drivers and JIT compilers. Hopefully Vcc can take compute shaders more seriously.
[1]: https://www.circle-lang.org/
- Oracle-samples/sd4j: Stable Diffusion pipeline in Java using ONNX Runtime
I did. It depends on what you want. For an overview of how ONNX Runtime works, Microsoft has a bunch of material on https://onnxruntime.ai, but the Java content there is a bit lacking, as I've not had time to write much. Eventually I'll probably write something similar to the C# SD tutorial they have on there, but for the Java API.
For writing ONNX models from Java, we added an ONNX export system to Tribuo in 2022, which can be used by anything on the JVM to export ONNX models in an easier way than writing a protobuf directly. Tribuo doesn't have full coverage of the ONNX spec, but we're happy to accept PRs to expand it; otherwise it'll fill out as we need it.
- Mamba-Chat: A Chat LLM based on State Space Models
- VectorDB: Vector Database Built by Kagi Search
What about models besides GPT? Most of the popular vector encoding models aren't using this architecture.
If you really didn't want PyTorch/Transformers, you could consider exporting your models to ONNX (https://github.com/microsoft/onnxruntime).
- ONNX runtime: Cross-platform accelerated machine learning
- Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
What are some alternatives?
pocl - Portable Computing Language
onnx - Open standard for machine learning interoperability
oneTBB - oneAPI Threading Building Blocks (oneTBB)
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
onnx-simplifier - Simplify your onnx model
meson - The Meson Build System
ONNX-YOLOv7-Object-Detection - Python scripts performing object detection using the YOLOv7 model in ONNX.
OCL-SDK
onnx-tensorflow - Tensorflow Backend for ONNX
featuresupport
MLflow - Open source platform for the machine learning lifecycle