xformers vs InvokeAI

| | xformers | InvokeAI |
|---|---|---|
| Mentions | 48 | 241 |
| Stars | 8,946 | 24,348 |
| Growth | 1.2% | 1.4% |
| Activity | 9.3 | 10.0 |
| Last commit | 7 days ago | 1 day ago |
| Language | Python | TypeScript |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xformers
- Practical Experience: Integrating Over 50 Neural Networks Into One Open-Source Project
Check xformers compatibility: visit the xformers GitHub repo to ensure compatibility with your torch and CUDA versions. Support for older versions may be dropped, so staying updated is vital, especially if you're running CUDA 11.8 and want to leverage xformers on limited VRAM.
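A minimal sketch of that check in Python (the version strings in the comments are only examples); xformers also provides `python -m xformers.info` for a more detailed report of which kernels actually loaded:

```python
# Print the local Python / torch / CUDA build that an xformers wheel must match.
import sys
import torch

print("Python:", sys.version.split()[0])          # e.g. 3.10.12
print("torch:", torch.__version__)                # e.g. 2.1.0+cu118
print("CUDA (torch build):", torch.version.cuda)  # e.g. 11.8

try:
    import xformers
    import xformers.ops  # triggers the WARNING[XFORMERS] message if the build doesn't match
    print("xformers:", xformers.__version__)
except ImportError as exc:
    print("xformers not importable:", exc)
```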
- An Interview with AMD CEO Lisa Su About Solving Hard Problems
- Animediff error
- Colab | Errors when installing x-formers
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fastai 2.7.12 requires torch<2.1,>=1.7, but you have torch 2.1.0+cu118 which is incompatible.
torchaudio 2.0.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
torchvision 0.15.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cu118)
    Python  3.10.13 (you have 3.10.12)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
xformers version: 0.0.22.post3
```
- FlashAttention-2, 2x faster than FlashAttention
This enables V1. V2 has yet to be integrated into xformers; the team replied saying it should happen this week.
See the relevant GitHub issue here: https://github.com/facebookresearch/xformers/issues/795
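For reference, the kernels this issue concerns are exposed through `xformers.ops.memory_efficient_attention`; a minimal sketch, assuming a CUDA GPU and fp16 tensors (the shapes are illustrative):

```python
# Minimal sketch: call xformers' memory-efficient attention directly.
# Inputs use the (batch, seq_len, num_heads, head_dim) layout xformers expects.
import torch
import xformers.ops as xops

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# xformers dispatches to the fastest available backend (FlashAttention among them)
# unless a specific implementation is forced via the `op=` argument.
out = xops.memory_efficient_attention(q, k, v)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```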
- Xformers issue
My Xformers doesn't work, any help? See code. info ( Exception training model: 'Refer to https://github.com/facebookresearch/xformers for more information on how to install xformers'. ) or
- Having xformer troubles
ModuleNotFoundError: Refer to https://github.com/facebookresearch/xformers for more
- Question: these 4 crappy pictures have been generated with the same seed and settings. Why do they keep coming out mildly different?
Xformers is a module that can be used with Stable Diffusion. It decreases the memory required to generate an image as well as speeding things up. It works very well, but there are two problems with Xformers:
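Outside the web UIs, the same speed/VRAM trade-off can be toggled with one call in Hugging Face diffusers; a minimal sketch, assuming diffusers and xformers are installed and a CUDA GPU is available (the model ID is only an example):

```python
# Minimal sketch: enable xformers' memory-efficient attention in diffusers.
# Assumes `pip install diffusers transformers xformers` and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID
    torch_dtype=torch.float16,
).to("cuda")

# Swaps the default attention for xformers' memory-efficient kernels,
# lowering VRAM use and usually speeding up sampling.
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a watercolor fox in a forest").images[0]
image.save("fox.png")
```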
- Stuck trying to update xformers
```
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 1.13.1+cu117 with CUDA 1107 (you have 2.0.1+cu118)
    Python  3.10.9 (you have 3.10.7)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
=================================================================================
You are running xformers 0.0.16rc425.
The program is tested to work with xformers 0.0.17.
To reinstall the desired version, run with commandline flag --reinstall-xformers.
Use --skip-version-check commandline argument to disable this check.
=================================================================================
```
- Question about updating Xformers for A1111
```
# Your version of xformers is 0.0.16rc425.
# xformers >= 0.0.17.dev is required to be available on the Dreambooth tab.
# Torch 1 wheels of xformers >= 0.0.17.dev are no longer available on PyPI,
# but you can manually download them by going to: https://github.com/facebookresearch/xformers/actions
# Click on the most recent action tagged with a release (middle column).
# Select a download based on your environment.
# Unzip your download.
# Activate your venv and install the wheel (from the A1111 project root):
cd venv/Scripts
activate
pip install {REPLACE WITH PATH TO YOUR UNZIPPED .whl file}
# Then restart your project.
```
InvokeAI
- Invoke 5.0 – OSS Canvas with Layers and SD/SDXL/Flux Support
- Why YC Went to DC
You're correct if you're focused exclusively on the work surrounding building foundation models to begin with. But if you take a broader view, having open models that we can legally fine tune and hack with locally has created a large and ever-growing community of builders and innovators that could not exist without these open models. Just take a look at projects like InvokeAI [0] in the image space or especially llama.cpp [1] in the text generation space. These projects are large, have lots of contributors, move very fast, and drive a lot of innovation and collaboration in applying AI to various domains in a way that simply wouldn't be possible without the open models.
[0] https://github.com/invoke-ai/InvokeAI
[1] https://github.com/ggerganov/llama.cpp
- Stable Diffusion 3
Probably not, since I have no idea what you're talking about. I've just been using the models that InvokeAI (2.3, I only just now saw there's a 3.0) downloads for me [0]. The SD1.5 one is as good as ever, but the SD2 model introduces artifacts on (many, but not all) faces and copyrighted characters.
[0] https://github.com/invoke-ai/InvokeAI
- AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I actually used the rocm/pytorch image you also linked.
I'm not sure what you're pointing to with your reference to the Fedora-based images. I'm quite happy with my NixOS install and really don't want to switch to anything else. And as long as I have the correct kernel module, my host OS really shouldn't matter to run any of the images.
And I'm sure it can be made to work with many base images, my point was just that the dependency management around pytorch was in a bad state, where it is extremely easy to break.
> Anyways, hopefully this PR fixes the immediate issue: https://github.com/invoke-ai/InvokeAI/pull/5714/files
It does! At least for me. It is my PR after all ;)
- Can some expert analyze a GitHub repo and tell us if it's really safe or not?
The data being flagged is not in that github repo, it's fetched from elsewhere and I don't fancy spending time looking for it. The alert is for 'Sirefef!cfg' which has been reported as a false positive with a bunch of other stable diffusion projects (https://www.reddit.com/r/StableDiffusion/comments/101zjec/trojanwin32sirefefcfg_an_apparently_common_false/, https://www.reddit.com/r/StableDiffusion/comments/xmhukb/trojan_in_waifudiffusion_model_file/, https://github.com/invoke-ai/InvokeAI/issues/2773 )
- What is the most efficient port of SD to Mac?
I haven’t tried it recently, but InvokeAI runs on Mac. I used to run it on my MacBook, but have since gotten a Win laptop.
- Easy Stable Diffusion XL in your device, offline
There are already a number of local inference options that are (crucially) open-source, with more robust feature sets.
And if the defense here is "but Auto1111 and Comfy don't have as user-friendly a UI", that's also already covered. https://github.com/invoke-ai/InvokeAI
- Ask HN: Selfhosted ChatGPT and Stable-diffusion like alternatives?
https://github.com/invoke-ai/InvokeAI should work on your machine. For LLM models, the smaller ones should run using llama.cpp, but I don't think you'll be happy comparing them to ChatGPT.
- 🚀 InvokeAI 3.4 now supports LCM & LCM-LoRAs and much more!
- Best AI image generator without an NSFW filter?
Stable Diffusion. /r/stablediffusion. There are many tutorials on how to set it up locally and use it. InvokeAI is the easiest way to set it up. https://github.com/invoke-ai/InvokeAI
What are some alternatives?
flash-attention - Fast and memory-efficient exact attention
ComfyUI - The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
SHARK-Studio - SHARK Studio -- Web UI for SHARK+IREE High Performance Machine Learning Distribution
ZLUDA - CUDA on non-NVIDIA GPUs
stable-diffusion-webui - Stable Diffusion web UI
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
Fooocus - Focus on prompting and generating
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
ultimate-upscale-for-automatic1111