xformers vs SHARK-Studio

| | xformers | SHARK-Studio |
|---|---|---|
| Mentions | 48 | 84 |
| Stars | 9,053 | 1,437 |
| Growth | 2.4% | 0.4% |
| Activity | 9.3 | 7.4 |
| Latest commit | 4 days ago | 4 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xformers
- Practical Experience: Integrating Over 50 Neural Networks Into One Open-Source Project
Check xformers compatibility: visit the xformers GitHub repo to ensure compatibility with your torch and CUDA versions. Support for older versions can be dropped, so staying updated is vital, especially if you're running CUDA 11.8 and want to leverage xformers on limited VRAM.
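A quick way to verify this locally (a minimal sketch; the version strings in the comments are only examples) is to print the versions Python actually sees and confirm that the xformers extensions import:

```python
import sys
import torch

print("torch:", torch.__version__)              # e.g. 2.1.0+cu118
print("CUDA (torch build):", torch.version.cuda)
print("Python:", sys.version.split()[0])

try:
    import xformers
    import xformers.ops  # importing the ops module fails if the C++/CUDA extensions are missing
    print("xformers:", xformers.__version__)
except ImportError as err:
    print("xformers not usable:", err)
```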
- An Interview with AMD CEO Lisa Su About Solving Hard Problems
- Animediff error
- Colab | Errors when installing x-formers
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fastai 2.7.12 requires torch<2.1,>=1.7, but you have torch 2.1.0+cu118 which is incompatible.
torchaudio 2.0.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchvision 0.15.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cu118)
    Python 3.10.13 (you have 3.10.12)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
xformers version: 0.0.22.post3
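The mismatch above (torch built for cu118, xformers wheel built for cu121) is typical. One hedged fix is to reinstall xformers from the PyTorch wheel index that matches the local torch CUDA build; the sketch below only prints the suggested command, and you should check the xformers README for the CUDA versions it currently publishes wheels for:

```python
import torch

# Derive the PyTorch wheel-index tag (e.g. "11.8" -> "cu118") from the installed torch build.
cuda = torch.version.cuda
if cuda is None:
    print("CPU-only torch build; install a CUDA-enabled torch before xformers.")
else:
    tag = "cu" + cuda.replace(".", "")
    # Assumed index URL pattern; confirm against the xformers installation docs.
    print(f"pip install -U xformers --index-url https://download.pytorch.org/whl/{tag}")
```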
- FlashAttention-2, 2x faster than FlashAttention
This enables V1; V2 has not yet been integrated into xformers. The team replied that it should happen this week.
See the relevant Github issue here: https://github.com/facebookresearch/xformers/issues/795
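For reference, the entry point this affects is `xformers.ops.memory_efficient_attention`, which picks the fastest available backend (FlashAttention kernels where supported). A minimal sketch, assuming a CUDA GPU and fp16 inputs:

```python
import torch
import xformers.ops as xops

# xformers expects (batch, seq_len, num_heads, head_dim) tensors.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Computes attention without materialising the full seq_len x seq_len matrix;
# the backend (FlashAttention, cutlass, ...) is selected automatically.
out = xops.memory_efficient_attention(q, k, v)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```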
- Xformers issue
My Xformers doesn't work, any help? See code. info ( Exception training model: 'Refer to https://github.com/facebookresearch/xformers for more information on how to install xformers'. )
- Having xformer troubles
ModuleNotFoundError: Refer to https://github.com/facebookresearch/xformers for more
- Question: these 4 crappy pictures have been generated with the same seed and settings. Why do they keep coming out mildly different?
Xformers is a module that can be used with Stable Diffusion. It decreases the memory required to generate an image and also speeds things up. It works very well, but there are two problems with Xformers:
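Outside of the A1111 web UI, the usual way to turn this on is through Hugging Face diffusers; a minimal sketch, assuming a CUDA GPU and using an example model ID:

```python
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion model loads the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Route attention through xformers' memory-efficient kernels (lower VRAM, usually faster).
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a cozy house at dusk", num_inference_steps=30).images[0]
image.save("house.png")
```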
- Stuck trying to update xformers
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 1.13.1+cu117 with CUDA 1107 (you have 2.0.1+cu118)
    Python 3.10.9 (you have 3.10.7)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
=================================================================================
You are running xformers 0.0.16rc425.
The program is tested to work with xformers 0.0.17.
To reinstall the desired version, run with commandline flag --reinstall-xformers.
Use --skip-version-check commandline argument to disable this check.
=================================================================================
- Question about updating Xformers for A1111
# Your version of xformers is 0.0.16rc425.
# xformers >= 0.0.17.dev is required to be available on the Dreambooth tab.
# Torch 1 wheels of xformers >= 0.0.17.dev are no longer available on PyPI,
# but you can manually download them by going to: https://github.com/facebookresearch/xformers/actions
# Click on the most recent action tagged with a release (middle column).
# Select a download based on your environment.
# Unzip your download.
# Activate your venv and install the wheel (from the A1111 project root):
cd venv/Scripts
activate
pip install {REPLACE WITH PATH TO YOUR UNZIPPED .whl file}
# Then restart your project.
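After installing the wheel, one way to confirm the venv picked up a new enough build (a small sketch; it assumes the `packaging` library is present, which it usually is alongside pip):

```python
from importlib.metadata import version
from packaging.version import Version  # pip install packaging, if it isn't already there

installed = Version(version("xformers"))
required = Version("0.0.17")

if installed < required:
    print(f"xformers {installed} is too old; the Dreambooth tab wants >= {required}")
else:
    print(f"xformers {installed} is new enough")
```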
SHARK-Studio
- Llama 2 on ONNX runs locally
- [D] Confusion over AMD GPU AI benchmarking
https://github.com/AUTOMATIC1111/stable-diffusion-webui and https://github.com/nod-ai/SHARK are the repos for the open-source tools mentioned. u/CeFurkan has really nice tutorial videos on YouTube for Stable Diffusion. Automatic1111 is the most popular open-source Stable Diffusion UI and currently has the biggest open-source plug-in ecosystem. Nvidia's compute driver is separate from the normal driver and is called CUDA; AMD's compute driver is called ROCm. Most Windows programs, like games, use APIs such as DirectX, Vulkan, Metal, or WebGPU, not CUDA. Most ML code was originally written to run on Linux scientific-computing systems, and the traditional Windows GPU APIs are only now trying to get better at GPU ML support. AMD has no official Windows ML support and is hoping other developers figure it out for them; AMD made its ML driver open source, but without support for consumer graphics cards. Nvidia's ML driver is proprietary, but support is guaranteed across all cards, including consumer ones.
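To make the driver-stack point concrete, here is a hedged sketch of how a script might pick a compute device on Windows, preferring CUDA and falling back to the optional `torch-directml` backend for AMD cards (installed separately; it is not part of stock PyTorch):

```python
import torch

def pick_device():
    # NVIDIA path: torch built against CUDA.
    if torch.cuda.is_available():
        return torch.device("cuda")
    # AMD-on-Windows path: Microsoft's torch-directml backend (pip install torch-directml).
    try:
        import torch_directml
        return torch_directml.device()
    except ImportError:
        return torch.device("cpu")

device = pick_device()
x = torch.randn(4, 4).to(device)   # move a test tensor onto whatever backend was found
print(device, x.sum().item())
```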
- AMD GPU not utilised
I got it working using SHARK with an AMD RX 480 on Windows 10.
- New to SD - Slow working
Here is the link for SHARK; it's faster (uses Vulkan) than Automatic1111 with DirectML but has fewer features: https://github.com/nod-ai/SHARK
- 7900 XTX Stable Diffusion Shark Nod Ai performance on Windows 10. Seems to have gotten a bump with the latest prerelease drivers 23.10.01.41
I would recommend trying out Nod AI's Shark (that is the link for the most recent 786.exe release) and seeing how it works for you. From what others have written, it does 512x512 pics at around 3 it/s, which I know isn't mind-blowing, but it's good enough to do a pic in about 30 seconds.
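As a back-of-the-envelope check on that figure (the step count mentioned below is an assumption, not something stated in the post):

```python
# Rough relation between iterations-per-second and per-image time.
it_per_s = 3.0      # reported 512x512 throughput
per_image_s = 30.0  # per-image time quoted above

iterations = it_per_s * per_image_s
print(f"{per_image_s:.0f}s at {it_per_s} it/s covers ~{iterations:.0f} iterations,")
print("enough for e.g. a 50-step sample plus per-image overhead or extra passes.")
```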
- New here
Problem solved, I got it to work: I simply put the nod.ai SHARK exe in my Stable Diffusion folder and launched it instead of webui-user -> Release nod.ai SHARK 20230623.786 · nod-ai/SHARK (github.com)
- I built the easiest-to-use desktop application for running Stable Diffusion on your PC - and it's free for all of you
How does it compare with Shark SD (I am not affiliated with it in any way)? (https://github.com/nod-ai/SHARK)
- after changing GPU from RX 470 4gb to RTX 3060 12GB, I decided to make a few cozy houses, and these are a few of them
You should, if you want to run SD on your card: https://github.com/nod-ai/SHARK
- 20 minute load time per image on high end PC?
Forgive me for not reading your whole comment. I suspect your version of the SD web UI doesn't recognize the AMD GPU, so you're using the CPU. AMD GPUs only work with a few web UIs. Try Nod.ai's Shark variant.
- AMD support for Microsoft® DirectML optimization of Stable Diffusion
What are some alternatives?
flash-attention - Fast and memory-efficient exact attention
stable-diffusion-webui - Stable Diffusion web UI
sdnext - SD.Next: All-in-one for AI generative image
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
ComfyUI - The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
InvokeAI - Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
stable-diffusion-webui-amdgpu - Stable Diffusion web UI
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
AMD-Stable-Diffusion-ONNX-FP16 - Example code and documentation on how to get FP16 models running with ONNX on AMD GPUs [Moved to: https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16]
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]

