taming-transformers vs stable-diffusion-webui
| | taming-transformers | stable-diffusion-webui |
|---|---|---|
| Mentions | 35 | 2,808 |
| Stars | 5,354 | 129,299 |
| Growth | 3.9% | - |
| Last Commit | about 1 month ago | 5 days ago |
| Activity | 0.0 | 9.9 |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
taming-transformers
-
Automatic1111 for Intel Arc (A380 Tested)
taming-transformers
-
[R] My simple Transformer audio encoder gives the same output for each timestep after the encoder
What’s your goal exactly? Are you trying to make a transformer-based autoencoder for audio spectrograms? If so, you should start with a proven ViT-based AE implementation (either a VAE or a VQ-GAN). But I don’t see why you necessarily need a ViT for this; if you’re working at a much smaller scale, a convolutional architecture is plenty and much more amenable to beginners. See https://github.com/CompVis/taming-transformers for an example of a convolutional VQ-GAN.
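To make that suggestion concrete, here is a minimal sketch (assuming PyTorch; all names and sizes are illustrative, not taken from the post) of the vector-quantization bottleneck that a convolutional VQ-GAN like taming-transformers builds on:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Quantize each spatial feature vector to its nearest codebook entry."""
    def __init__(self, n_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.beta = beta
        self.codebook = nn.Embedding(n_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / n_codes, 1.0 / n_codes)

    def forward(self, z):  # z: (B, C, H, W) from a conv encoder
        B, C, H, W = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, C)        # (B*H*W, C)
        # Squared Euclidean distance to every codebook entry
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(1)
        z_q = self.codebook(idx).view(B, H, W, C).permute(0, 3, 1, 2)
        # Codebook loss + commitment loss (beta weights the commitment term)
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        # Straight-through estimator: copy gradients past the lookup
        z_q = z + (z_q - z).detach()
        return z_q, loss, idx
```

The straight-through trick on the second-to-last line lets gradients flow back into the encoder even though the nearest-neighbour lookup itself is not differentiable.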
- Trying to make VqGAN+CLIP work again
-
im so lost
Command: "git" clone "https://github.com/CompVis/taming-transformers.git" "C:\AI\stable-diffusion-webui\repositories\taming-transformers"
-
Why are ChatGPT and other large language models not feasible to run locally on consumer-grade hardware while Stable Diffusion is?
See https://arxiv.org/abs/2012.09841 for prior work. The SD authors swap out the Transformer and its language-modelling objective for a UNet and a diffusion objective. In general, the more inductive bias your model has, the more efficient it can be. ChatGPT runs purely on a Transformer architecture, which has far fewer priors than a CNN and requires far more parameters as a result. This may not be the case in the future.
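As a rough illustration of that parameter argument (my own back-of-envelope numbers, not from the comment): a Transformer block's parameter count grows as roughly 12·d², while a 3×3 convolution reuses one small kernel across the entire image.

```python
# Back-of-envelope parameter counts; the widths below are illustrative assumptions.
d_model = 1024                                  # Transformer hidden width
transformer_block = 12 * d_model ** 2           # ~4*d^2 attention + ~8*d^2 MLP
channels = 512                                  # UNet-style conv width
conv_layer = 3 * 3 * channels * channels        # one 3x3 conv layer, no bias

print(f"Transformer block: {transformer_block:,} params")  # 12,582,912
print(f"3x3 conv layer:    {conv_layer:,} params")         # 2,359,296
```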
-
1 or 2 Errors Installing Automatic1111 on Mac M1
There is definitely a command for it, but I can't tell you it offhand. It's on GitHub: https://github.com/CompVis/taming-transformers
-
Trying to Install InvokeAI and VectorQuantizer2 and taming modules but get error “zsh: parse error near `)’” How to fix? (MAC M1)
I wasn’t able to find a “taming” folder within the site-packages folder, so I decided to look up how to get VectorQuantizer2 and taming.modules.vqvae.quantize, and found this link: https://github.com/CompVis/taming-transformers/blob/master/taming/modules/vqvae/quantize.py I copied the raw contents, pasted them into the terminal, and got this error: “zsh: parse error near `)’”. I’m not sure how to fix this so I can install VectorQuantizer2 and use InvokeAI. How do I fix this?
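For context (my reading of the error, not an answer from the thread): pasting Python source at a zsh prompt makes the shell try to execute it as shell syntax, which is exactly where the parse error comes from. The file needs to be saved to disk instead, e.g.:

```bash
# Download the file rather than pasting its contents into the shell.
# The destination is illustrative; it needs to end up somewhere importable
# as taming/modules/vqvae/quantize.py.
curl -L -o quantize.py \
  https://raw.githubusercontent.com/CompVis/taming-transformers/master/taming/modules/vqvae/quantize.py
```

Installing the taming-transformers package with pip (if a build is available for your platform) would avoid the manual copy entirely.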
-
AI Is Coming For Commercial Art Jobs. Can It Be Stopped? (Greg Rutkowski quoted)
I say this to everyone... Even if SD and the model are legitimate and legal, do not go around commercialising its outputs or claiming ownership over them... and if you do, then properly cite the source of the model and system along with it. In https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers and https://huggingface.co/CompVis/stable-diffusion-v1-4 there are citations provided for you to use for a reason. I recommend you use them.
-
Stable-diffusion in Nix
```bash
# Copy models as described in README
cp ~/Downloads/model.ckpt .
cp ~/Downloads/GFPGANv1.3.pth .
# Clone other repos as mentioned in README
mkdir repositories
git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
git clone https://github.com/salesforce/BLIP.git repositories/BLIP
export NIXPKGS_ALLOW_UNFREE=1
nix-shell default.nix
# Also from linux instructions. Can probably be added to default.nix
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
python webui.py
```
-
[D] Where does VQ-GAN get its randomness from?
Code for https://arxiv.org/abs/2012.09841 found: https://compvis.github.io/taming-transformers/
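For what it's worth (my own summary, not from the thread): the VQ-GAN encoder and decoder are deterministic; the randomness enters when the Transformer prior samples each codebook index stochastically. A minimal sketch of that sampling step, loosely following the top-k sampling used in the linked code:

```python
import torch

def sample_next_index(logits, k=100, temperature=1.0):
    """Draw one codebook index from a (batch, vocab) tensor of logits."""
    logits = logits / temperature
    # Keep only the k most likely entries; mask the rest out
    v, _ = torch.topk(logits, k)
    logits = logits.masked_fill(logits < v[..., -1, None], -float("inf"))
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)  # the stochastic draw
```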
stable-diffusion-webui
-
Show HN: I made an app to use local AI as daily driver
* LLaVA model: I'll add more documentation. You're right that LLaVA can't generate images. For image generation I don't have immediate plans, but check out these projects for local image generation.
- https://diffusionbee.com/
- https://github.com/comfyanonymous/ComfyUI
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I would love to have a native Stable Diffusion experience; my RX 580 takes 30s to generate a single image. But it does work after following https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...
I got this up and running on my Windows machine in short order, and I don't even know what Stable Diffusion is.
But again, it would be nice to have first class support to locally participate in the fun.
-
Ask HN: What is the state of the art in AI photo enhancement?
In Auto1111, that just uses Image.blend. :)
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob...
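For reference, Image.blend is a one-liner in Pillow; a minimal sketch (file names here are made up):

```python
from PIL import Image

base = Image.open("original.png").convert("RGB")
enhanced = Image.open("enhanced.png").convert("RGB").resize(base.size)

# alpha=0.0 returns base, alpha=1.0 returns enhanced, 0.5 is an even mix
blended = Image.blend(base, enhanced, alpha=0.5)
blended.save("blended.png")
```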
- How To Increase Performance Time on MacOS
-
Can anyone suggest an AI model that can help me enhance a poorly drawn logo?
I used SDXL in automatic1111 webui for both images. Now that I think about it, the procedure I described was how I made this one, but the one that looks like an illustration was done in two steps. I used the canny ControlNet as I said for the outer part of the logo to preserve the shape of the fonts, but I had to turn it off for the boot to give SDXL leeway to add detail and make it look more like a boot.
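Outside the webui, the same two-step idea can be sketched with the diffusers library (the model IDs and parameters below are common defaults, not what the commenter used):

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Step 1: extract Canny edges from the rough logo to pin down its shapes.
logo = np.array(Image.open("logo.png").convert("RGB"))
edges = cv2.Canny(logo, 100, 200)
canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Step 2: let SDXL redraw the image while ControlNet enforces the edges.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

image = pipe("a detailed leather boot logo, clean illustration",
             image=canny,
             controlnet_conditioning_scale=0.8).images[0]
image.save("enhanced_logo.png")
```

Lowering controlnet_conditioning_scale, or disabling ControlNet for a second pass as described above, gives the model more leeway to add detail.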
-
Seeking out an experienced and empathetic coding buddy.
That said, please do learn coding, and don't get discouraged when somebody says to learn PyTorch or recommends using a Jupyter notebook with no further information on how to translate the skill into images. I would highly recommend some short-term goals. Get your feet wet by taking apart the UIs. The Comfy API documentation is here and the A1111 API documentation is here. There is a difference in completeness; welcome to programming. Writing nodes or plugins is also a good way to jump into this world. Custom wildcard logic might be very attractive to you if you aren't the type that wants to deal with a nested file structure to simulate logic.
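As a concrete first exercise in "taking apart the UIs" (my suggestion, not the commenter's): the A1111 webui exposes a local REST API when started with the --api flag, and a few lines of Python drive it end to end. Host, port, and prompt below are defaults/placeholders:

```python
import base64
import requests

payload = {"prompt": "a watercolor fox", "steps": 20, "width": 512, "height": 512}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns generated images as base64-encoded strings.
with open("fox.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```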
- can't get it working with an AMD gpu
-
SD extension that allows for setting override
Possibly Unprompted? https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8094
- Need to write an application to use Stable Diffusion on my desktop PC - which resource should I learn to use?
-
4090 Speed Decrease on each Generation/Iteration
version: v1.6.1 • python: 3.10.13 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: 6e8d4871f8
What are some alternatives?
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
ComfyUI - The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface.
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
SHARK - SHARK - High Performance Machine Learning Distribution
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
lora - Using Low-rank adaptation to quickly fine-tune diffusion models. (A minimal sketch of the low-rank idea follows this list.)
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
stable-diffusion - A latent text-to-image diffusion model
safetensors - Simple, safe way to store and distribute tensors
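Since lora appears in the list above, here is a minimal sketch of the low-rank adaptation idea (assuming PyTorch; names are illustrative): freeze a pretrained weight matrix and learn only a small rank-r update on top of it.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale
```

Because B starts at zero, the wrapped layer initially behaves exactly like the pretrained one; only the tiny A and B matrices are trained.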