HIPIFY vs Numba

| | HIPIFY | Numba |
|---|---|---|
| Mentions | 11 | 124 |
| Stars | 318 | 9,493 |
| Growth | - | 1.5% |
| Activity | 0.0 | 9.9 |
| Latest commit | 5 months ago | 2 days ago |
| Language | C++ | Python |
| License | MIT License | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
HIPIFY
-
AMD Hip SDK: Making CUDA Applications Run Across Consumer, Pro GPUs and APUs
Right. I can't speak to its correctness/completeness as I've only done a quick installation and smoke test of the ROCm/HIP/MIOpen stack, but there's even a tool that automates the translation [1].
[1] https://github.com/ROCm-Developer-Tools/HIPIFY
- How to run Llama 13B with a 6GB graphics card
-
How Nvidia’s CUDA Monopoly in Machine Learning Is Breaking
From https://news.ycombinator.com/item?id=32904285 re: AMD ROCm, HIPIFY:
> ROCm-Developer-Tools/HIPIFY https://github.com/ROCm-Developer-Tools/HIPIFY :
>> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced.
> AMD ROCm supports PyTorch, TensorFlow, MIOpen, rocBLAS on NVIDIA and AMD GPUs: https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...
-
Stable Diffusion on AMD RDNA3
> Thus, the idea is that through typically negligible effort porting to HIP, your code becomes vendor-independent.
Here, the big AMD mistake was to rename those function prefixes in the first place. It's a mistake that they could have avoided...
What a lot of software codebases did to support AMD (see notably the PyTorch code): the codebase is still CUDA, and the conversion pass to HIP is done at build time.
See https://github.com/ROCm-Developer-Tools/HIPIFY/blob/amd-stag... for the Perl script to do it.
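For a rough sense of what that build-time conversion pass amounts to, here is a toy Python sketch of the prefix renaming (not the actual hipify-perl script; the mapping table below is a tiny illustrative subset of the real one):

```python
import re

# Tiny illustrative subset of the CUDA -> HIP renaming table.
# The real hipify-perl script covers the full runtime API, headers, and more.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
    "cudaFree": "hipFree",
}

def hipify(source: str) -> str:
    """Rewrite known CUDA identifiers in a source string to their HIP equivalents."""
    # Match longer identifiers first so cudaMemcpyHostToDevice isn't clobbered by cudaMemcpy.
    pattern = re.compile("|".join(re.escape(k) for k in sorted(CUDA_TO_HIP, key=len, reverse=True)))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

cuda_snippet = """\
#include <cuda_runtime.h>
float *d_x;
cudaMalloc(&d_x, n * sizeof(float));
cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
cudaFree(d_x);
"""
print(hipify(cuda_snippet))
```

In the PyTorch-style setup described above, a pass like this runs over the CUDA sources before compilation, so the checked-in code stays CUDA while the built artifacts target HIP.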
Then comes the problem of AMD not supporting ROCm HIP on most of their hardware or user base.
On Windows, the ROCm HIP SDK is private and only available under NDA. This means that while you can use Blender w/ HIP on Windows, the Blender builds that you compile yourself will not be able to use ROCm HIP.
On Linux, the supported GPUs are few and far between: only Vega20 onwards are supported today. APUs, RDNA1, and lower-end RDNA2 (6700 XT and below) are excluded unless you resort to unsupported hacks.
-
AI Seamless Texture Generator Built-In to Blender
https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...
RadeonOpenCompute/ROCm_Documentation: https://github.com/RadeonOpenCompute/ROCm_Documentation
ROCm-Developer-Tools/HIPIFY: https://github.com/ROCm-Developer-Tools/HIPIFY :
> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced.
ROCmSoftwarePlatform/gpufort: https://github.com/ROCmSoftwarePlatform/gpufort :
> GPUFORT: S2S translation tool for CUDA Fortran and Fortran+X in the spirit of hipify
ROCm-Developer-Tools/HIP: https://github.com/ROCm-Developer-Tools/HIP :
> HIP is a C++ Runtime API and Kernel Language that allows developers to create portable applications for AMD and NVIDIA GPUs from single source code. [...] Key features include:
> - HIP is very thin and has little or no performance impact over coding directly in CUDA mode.
> - HIP allows coding in a single-source C++ programming language including features such as templates, C++11 lambdas, classes, namespaces, and more.
> - HIP allows developers to use the "best" development environment and tools on each target platform.
> - The [HIPIFY] tools automatically convert source from CUDA to HIP.
> - *Developers can specialize for the platform (CUDA or AMD) to tune for performance or handle tricky cases.*
-
My workplace requires us to hand in our old computers after the May Day holiday and switch everyone to new domestically made computers and a new operating system. Since it isn't compatible with Windows software, we also have to install a Windows emulator, which sets office productivity back ten years. The department head grumbled that this is replacing the advanced with the backward; I thought to myself, even he can see it.
And there is an automatic conversion tool: https://github.com/ROCm-Developer-Tools/HIPIFY https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-porting-guide.html
- Hipify: Convert CUDA to Portable C++ Code
- Hipify: Convert CUDA to Portable Hip C++ Code
-
Deep Learning options on Radeon RX 6800
It might be worth checking out HIPIFY, which lets you automatically convert CUDA code to vendor-neutral code that can run on any GPU. Disclaimer: I have never used it and have no idea how it works.
-
Will NVIDIA's cryptocurrency limiter interfere with nouveau drivers?
CUDA to AMD HIP conversion: https://github.com/ROCm-Developer-Tools/HIPIFY
Numba
-
Mojo🔥: Head-to-Head with Python and Numba
Around the same time, I discovered Numba and was fascinated by how easily it could bring huge performance improvements to Python code.
-
Is anyone using PyPy for real work?
Simulations are, at least in my experience, numba’s [0] wheelhouse.
[0]: https://numba.pydata.org/
-
Any data folks coding C++ and Java? If so, why did you leave Python?
That's very cool. Numba introduces just-in-time compilation to Python via decorators, and its sole reason for being is to compile everything it can into fast machine code.
- Using Matplotlib with Numba to accelerate code
-
Python Algotrading with Machine Learning
A super-fast backtesting engine built in NumPy and accelerated with Numba.
-
PYTHON vs OCTAVE for Matlab alternative
Regarding speed, I don't agree this is a good argument against Python. For example, it seems no one here has yet mentioned numba, a Python JIT compiler. With a simple decorator you can compile a function to machine code with speeds on par with C. Numba also allows you to easily write cuda kernels for GPU computation. I've never had to drop down to writing C or C++ to write fast and performant Python code that does computationally demanding tasks thanks to numba.
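To make that concrete, here is a minimal sketch of the decorator workflow (assuming numba and numpy are installed; the function and array size are arbitrary examples):

```python
import numpy as np
from numba import njit

@njit  # compiled to native machine code via LLVM on the first call
def sum_of_squares(a):
    total = 0.0
    for i in range(a.shape[0]):  # explicit loops are fine; Numba compiles them away
        total += a[i] * a[i]
    return total

a = np.random.rand(10_000_000)
print(sum_of_squares(a))  # first call pays the compilation cost; later calls run at near-C speed
```

For GPU work, Numba also provides a numba.cuda.jit decorator for writing CUDA kernels directly in Python syntax (this requires a CUDA-capable GPU and toolkit).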
-
Codon: Python Compiler
Just for reference,
* Nuitka[0] "is a Python compiler written in Python. It's fully compatible with Python 2.6, 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, and 3.11."
* PyPy[1] "is a replacement for CPython" with built-in optimizations such as on-the-fly JIT compilation.
* Cython[2] "is an optimising static compiler for both the Python programming language and the extended Cython programming language... makes writing C extensions for Python as easy as Python itself."
* Numba[3] "is an open source JIT compiler that translates a subset of Python and NumPy code into fast machine code."
* Pyston[4] "is a performance-optimizing JIT for Python, and is drop-in compatible with ... CPython 3.8.12"
[0] https://github.com/Nuitka/Nuitka
[1] https://www.pypy.org/
[2] https://cython.org/
[3] https://numba.pydata.org/
[4] https://github.com/pyston/pyston
-
This new programming language has the potential to make python (the dominant language for AI) run 35,000X faster.
For the benefit of future readers: https://numba.pydata.org/
-
Two-tier programming language
Taichi (similar to Numba) is a Python library that allows you to write high-speed code within Python. So your program consists of slow Python that gets interpreted regularly, and fast Python (fully type-annotated and restricted to a subset of the language) that gets parallelized and JIT-compiled for CPU or GPU. And you can mix the two within the same source file.
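As a minimal sketch of that two-tier pattern with Taichi (assuming the current ti.init/ti.field/ti.kernel API; the field size and kernel body are arbitrary examples):

```python
import taichi as ti

ti.init(arch=ti.cpu)  # slow, regular Python sets things up; use ti.gpu if a GPU backend is available

n = 1_000_000
x = ti.field(dtype=ti.f32, shape=n)

@ti.kernel
def fill():
    # The outermost for-loop in a Taichi kernel is automatically parallelized.
    for i in range(n):
        x[i] = i * 0.5

fill()        # fast, JIT-compiled path
print(x[10])  # back in regular interpreted Python
```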
- Numba Supports Python 3.11
What are some alternatives?
ZLUDA - CUDA on AMD GPUs
NetworkX - Network Analysis in Python
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
ncnn - ncnn is a high-performance neural network inference framework optimized for the mobile platform
Dask - Parallel computing with task scheduling
llama-cpp-python - Python bindings for llama.cpp
cupy - NumPy & SciPy for GPU
rocm-build - build scripts for ROCm
Pyjion - A JIT for Python based upon CoreCLR
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
SymPy - A computer algebra system written in pure Python