| | HIP | stable-diffusion |
|---|---|---|
| Mentions | 30 | 142 |
| Stars | 3,462 | 2,438 |
| Growth | 1.5% | - |
| Activity | 8.9 | 9.8 |
| Last commit | 3 days ago | over 1 year ago |
| Language | C++ | Jupyter Notebook |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
HIP
-
Porting HPC Applications to AMD Instinct MI300A Using Unified Memory and OpenMP
>ROCm or HIP?
I'm not sure that's even the right question to ask. Afaik ROCm is the name of the entire tech stack, and HIP is AMD's equivalent to CUDA C++ (they basically replicated the API and replaced every "cuda" with "hip"; there are functions called "hipMalloc" and "hipMemcpy").
The repository is located at https://github.com/ROCm/HIP.
- Hip: Runtime API and Kernel Language for Portable Apps for AMD and Nvidia GPUs
-
Open-source project ZLUDA lets CUDA apps run on AMD GPUs
Is it perhaps because they want people to use HIP?
> HIP is very thin and has little or no performance impact over coding directly in CUDA mode.
> The HIPIFY tools automatically convert source from CUDA to HIP.
1. https://github.com/ROCm/HIP
-
AMD's Next GPU Is a 3D-Integrated Superchip
AMD has released HIP and a tool called HIPIFY, which behaves somewhat like this but at the source level¹. Rather than trying to just translate CUDA to work on AMD compute, they are more focused on higher-level tooling.
Currently they seem to have a particular focus on AI frameworks and tools like PyTorch/TensorFlow/ONNX. They have sponsored and helped with a lot of PyTorch development, for example, so PyTorch support for AMD is much better than it was this time last year².
¹(https://github.com/ROCm/HIP)
²(https://pytorch.org/blog/experience-power-pytorch-2.0/)
-
Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
> what would be the point for someone to add ROCm support to various pieces of software which currently require CUDA
It isn't just old cards, though; CUDA is a point of centralization on a single provider at a time when access to that provider's higher-end cards isn't even available, and that is causing people to look elsewhere.
ROCm supports CUDA through the included HIP projects...
https://github.com/ROCm/HIP
https://github.com/ROCm/HIPCC
https://github.com/ROCm/HIPIFY
The latter will regex-replace your CUDA methods with HIP methods. If it is as easy as running HIPIFY on your codebase (or just coding to the HIP APIs), it certainly makes sense to do so.
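The "regex replace" description above is roughly how the script-based hipify-perl variant works (hipify-clang does a proper compiler-based translation). A toy sketch of the idea in Python — the rename table here is a small hand-picked subset for illustration, not the tool's actual mapping:

```python
import re

# A tiny, hand-picked subset of the CUDA-to-HIP renames that the real
# HIPIFY tools apply across a whole source tree.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
}

# One alternation over all names; \b keeps partial identifiers intact.
_pattern = re.compile(r"\b(" + "|".join(CUDA_TO_HIP) + r")\b")

def hipify(source: str) -> str:
    """Rewrite known CUDA API names in `source` to their HIP equivalents."""
    return _pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

cuda_src = "cudaMalloc(&d_buf, n); cudaMemcpy(d_buf, h_buf, n, cudaMemcpyHostToDevice);"
print(hipify(cuda_src))
# hipMalloc(&d_buf, n); hipMemcpy(d_buf, h_buf, n, hipMemcpyHostToDevice);
```

The word-boundary anchors matter: `cudaMemcpy` must not clobber the prefix of `cudaMemcpyHostToDevice`, which is also why a naive string replace is more fragile than it looks.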
-
Nvidia on the Mountaintop
AMD's equivalent is HIP [1], for sufficiently flexible definitions of "equivalent". I can't speak to how complete/correct/performant it is (I'm just a guy running tutorial/toy-level ML stuff on an RDNA1 card), but part of AMD's problem is that it might not practically matter how well they do this because the broader ecosystem support specifically for the CUDA stack is so entrenched.
[1] https://github.com/ROCm-Developer-Tools/HIP
- Stable Diffusion in pure C/C++
- Would love to hear your information and knowledge to simplify my understanding on AMD's positioning in the AI market
-
Ask HN: C++ still dominates on GPUs, why not Rust?
From what I know, modern GPUs are still programmed almost exclusively in C++. See CUDA [0] for Nvidia and ROCm [1] for AMD.
Why is this? Why is Rust not loved there?
[0] https://docs.nvidia.com/cuda/
[1] https://github.com/ROCm-Developer-Tools/HIP
-
[P] RWKV C++ Cuda library with no dependencies, no torch, and no python
Go ahead and try to ship ROCm code that works on multiple consumer graphics cards on Linux, macOS, and Windows. As an example of how much AMD cares about it, the installation notes linked to in the readme return a 404.
stable-diffusion
- [Stable Diffusion] Help needed to increase the maximum file size on a local installation
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- It's time!
-
Anybody running SD on a Macbook Pro? What are you using and how did you install it?
Yes, you can install it with Python! https://github.com/lstein/stable-diffusion works with macOS, and you can control all the common parameters via their WebUI or CLI :)
-
How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I'm using lstein fork ("dream") and when I create an image from the terminal, it also writes back to the terminal like this:
- I Resurrected “Ugly Sonic” with Stable Diffusion Textual Inversion
-
AI Seamless Texture Generator Built-In to Blender
> Whenever I ask for something like ‘seamless tiling xxxxxx’ it kinda sorta gets the idea, but the resulting texture doesn’t quite tile right.
Getting seamless tiling requires more than just having "seamless tiling" in the prompt. It also depends on whether the fork you're using has that feature at all.
https://github.com/lstein/stable-diffusion has the feature, but you need to pass it outside the prompt. So if you use the `dream.py` prompt CLI, you can pass it `"Hats on the ground" --seamless` and it should be perfectly tileable.
-
Auto SD Workflow - Update 0.2.0 - "Collections", Password Protection, Brand new UI + more
From https://github.com/lstein/stable-diffusion
-
Stable Diffusion GUIs for Apple Silicon
Stable Diffusion Dream Script: This is the original site/script for supporting macOS. I found this soon after Stable Diffusion was publicly released and it was the site which inspired me to try out using Stable Diffusion on a mac. They have a web-based UI (as well as command-line scripts) and a lot of documentation on how to get things working.
-
Still can't believe this technology is real. My talentless 2 minute sketch on the left.
I’m pretty sure it works for M2 as well - basically the newer ARM-based Macs. The instructions to get it working are detailed! https://github.com/lstein/stable-diffusion
What are some alternatives?
AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
waifu-diffusion - stable diffusion finetuned on weeb stuff
ZLUDA - CUDA on AMD GPUs
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
futhark - :boom::computer::boom: A data-parallel functional programming language
stable-diffusion-webui - Stable Diffusion web UI
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
diffusers-uncensored - Uncensored fork of diffusers
ginkgo - Numerical linear algebra software package
txt2imghd - A port of GOBIG for Stable Diffusion
rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform
dream-textures - Stable Diffusion built-in to Blender