stable-diffusion-tensorflow vs diffusers

| | stable-diffusion-tensorflow | diffusers |
|---|---|---|
| Mentions | 18 | 266 |
| Stars | 1,569 | 22,646 |
| Growth | - | 2.8% |
| Activity | 0.0 | 9.9 |
| Latest commit | 9 months ago | 4 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-tensorflow
- Keras model SD or similar I can train from scratch?
-
Anyone attempted to convert stablediffusion tensorflow to tf lite?
Was curious if someone has attempted the conversion. I tried here https://github.com/divamgupta/stable-diffusion-tensorflow/issues/58 but I'm hitting an input shape error. This is my first time trying a conversion; I would love to run it on an Edge TPU.
-
Stable Diffusion Tensorflow to TF Lite
Checking here if someone has tried to convert the TensorFlow diffusion model to TF Lite: https://github.com/divamgupta/stable-diffusion-tensorflow/issues/58
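For anyone attempting this, a minimal sketch of the conversion path is below. The `stable_diffusion_tf` import path and the sub-model attribute names are assumptions based on the repo layout and may differ from the current code; the input-shape errors mentioned above presumably come from one of the sub-models, and the VAE decoder is usually the simplest place to start.

```python
# Hedged sketch of converting one sub-model of the Keras port to TF Lite.
# Package path and attribute names (e.g. generator.decoder) are assumptions.
import tensorflow as tf
from stable_diffusion_tf.stable_diffusion import StableDiffusion

generator = StableDiffusion(img_height=512, img_width=512)

# Start with the VAE decoder; the text encoder and diffusion UNet need the
# same treatment and are where input-shape problems are more likely.
converter = tf.lite.TFLiteConverter.from_keras_model(generator.decoder)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("decoder.tflite", "wb") as f:
    f.write(tflite_model)
```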
-
SD on intel arc?
Actually, I was on GitHub submitting issues from my testing of Intel's PyTorch and TensorFlow extensions when I saw this. Someone has already ported SD to the TensorFlow framework, so you can probably start using Intel's Extension for TensorFlow with it immediately; according to this article, you can use the extension under WSL on Windows as well. Unfortunately, the person whose issue I linked has hit serious performance problems, with A770 inference taking many minutes longer than it should, so you may be better off waiting for Intel's Extension for TensorFlow 1.2 or later, by which point Intel will hopefully have ironed out most of the major bugs :)
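If you do want to try it now, a minimal sanity check is sketched below; it assumes the extension still registers Intel GPUs as "XPU" pluggable devices and that the `[xpu]` wheel name is current.

```python
# Hedged sketch: confirm Intel's Extension for TensorFlow sees the Arc GPU
# before pointing the stable-diffusion-tensorflow scripts at it.
# Assumed install step: pip install tensorflow intel-extension-for-tensorflow[xpu]
import tensorflow as tf

# The extension registers Intel GPUs through TensorFlow's pluggable-device
# mechanism; an empty list means the plugin did not load.
print(tf.config.list_physical_devices("XPU"))
```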
-
Stable Diffusion with AMDGPU on WSL
tensorflow-stable-diffusion
-
Image2Image with AMD hardware?
# clone
git clone https://github.com/divamgupta/stable-diffusion-tensorflow.git
cd stable-diffusion-tensorflow
# create venv
python -m venv --prompt sdtf-windows-directml venv
venv\Scripts\activate
# verify venv is installed and activated
pip --version
# install deps
pip install -r requirements.txt
pip install tensorflow-directml-plugin
# you should see DML debug output and at least one GPU
python -c 'import tensorflow as tf; print(tf.config.list_physical_devices())'
# run (show help)
python text2image.py --help
python text2image.py --prompt "a fluffy kitten"
-
I have no PC. Just DLed this for iOS
(Answers based on the stable-diffusion open model.) If you have an M1 processor: https://github.com/divamgupta/diffusionbee-stable-diffusion-ui (I've tested it). Or this one, claimed to be faster, with TensorFlow: https://github.com/divamgupta/stable-diffusion-tensorflow
-
Keras Inpainting Colab
Added inpainting support to the original keras implementation: https://github.com/divamgupta/stable-diffusion-tensorflow Colab: https://colab.research.google.com/drive/1Bf-bNmAdtQhPcYNyC-guu0uTu9MYYfLu Github page: https://github.com/ShaunXZ/stable-diffusion-tensorflow
-
[N] Stable Diffusion reaches new record (with explanation + colab link)
I wonder if you mean 13 seconds per image because this implementation reports ~10s per image with mixed precision.
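For context, that ~10s figure depends on Keras mixed precision being enabled before the model is built. A minimal sketch, assuming the repo's `StableDiffusion` entry point and `generate` arguments (which may differ per version):

```python
# Hedged sketch: enable Keras mixed precision before constructing the model,
# as the TF implementation's timings assume. Import path and generate()
# arguments are assumptions and may differ between versions.
from tensorflow import keras
from stable_diffusion_tf.stable_diffusion import StableDiffusion

keras.mixed_precision.set_global_policy("mixed_float16")  # must precede model build

generator = StableDiffusion(img_height=512, img_width=512)
images = generator.generate(
    "a fluffy kitten",
    num_steps=50,
    unconditional_guidance_scale=7.5,
    batch_size=1,
)
```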
-
High-performance image generation using Stable Diffusion in KerasCV
On an Intel MacBook Pro, CPU-only, the original PyTorch implementation [1] utilized only one core. A TensorFlow implementation [2] with oneDNN support, which utilized most of the cores, ran at ~11 sec/iteration. Another, OpenVINO-based implementation [3] ran at ~6.0 sec/iteration.
[1] https://github.com/CompVis/stable-diffusion/
[2] https://github.com/divamgupta/stable-diffusion-tensorflow/
[3] https://github.com/bes-dev/stable_diffusion.openvino/
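For anyone reproducing CPU-only numbers like these on the TensorFlow side, the relevant knobs are the oneDNN optimizations flag and the thread pools; a hedged sketch using only standard TensorFlow settings:

```python
# Hedged sketch: CPU-only settings relevant to the timings above.
# TF_ENABLE_ONEDNN_OPTS must be set before TensorFlow is first imported.
import os
os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")

import tensorflow as tf

# Spread matmul/conv work across the physical cores; keep a small
# inter-op pool for independent ops.
tf.config.threading.set_intra_op_parallelism_threads(os.cpu_count())
tf.config.threading.set_inter_op_parallelism_threads(2)
print(tf.config.threading.get_intra_op_parallelism_threads())
```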
diffusers
- StableDiffusionSafetyChecker
- 🧨 diffusers 0.24.0 is out with Kandinsky 3.0, IP Adapters, and others
-
What am I missing here? Where's the RND coming from?
I'm missing something about the random factor in the sample code from https://github.com/huggingface/diffusers/blob/main/README.md
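The short answer is that each run draws fresh initial latent noise. A minimal sketch of pinning it with a seeded generator (the model id is just the usual example checkpoint):

```python
# Minimal sketch: the randomness in the README example is the initial latent
# noise; passing a seeded torch.Generator makes runs reproducible.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe("a photo of an astronaut riding a horse", generator=generator).images[0]
image.save("astronaut.png")
```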
-
T2IAdapter+ControlNet at the same time
Hey people, I noticed that combining these two methods in a single forward pass increases the controllability of the generation quite a bit. I was puzzled that ControlNet yielded better results than T2IAdapter in some cases and the other way around in others, so I decided to test both at the same time, and the results were quite nice. Some visuals and more motivation here: https://github.com/huggingface/diffusers/issues/5847 And it was already merged here: https://github.com/huggingface/diffusers/pull/5869
-
Won't you benchmark me?
Open Parti Prompts: The better way to evaluate diffusion models (repo)
-
kohya_ss error. How do I solve this?
You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
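That warning is emitted by diffusers when a pipeline is loaded with the safety checker explicitly disabled; the model id below is only illustrative.

```python
# Minimal sketch: loading with safety_checker=None is what prints the
# warning quoted above.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,
)
# Passing requires_safety_checker=False as well suppresses the warning on
# recent diffusers releases.
```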
- Making a ControlNet inpaint for sdxl
-
Stable Diffusion Gets a Major Boost with RTX Acceleration
For developers, TensorRT support also exists for the diffusers library via community pipelines. [1] It's limited, but if you're only supporting a subset of features, it can help.
In general, these insane speed boosts come at the cost of bleeding-edge features.
[1] https://github.com/huggingface/diffusers/blob/28e8d1f6ec82a6...
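A hedged sketch of how a community pipeline is loaded; the TensorRT pipeline name used here is an assumption taken from the community-pipelines folder and may differ or need extra setup (engine build, scheduler choice), so check the README at the link above.

```python
# Hedged sketch: community pipelines are loaded via custom_pipeline.
# The pipeline name below is an assumption; verify it against the
# community-pipelines README linked above before relying on it.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_tensorrt_txt2img",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of a lighthouse at dawn").images[0]
```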
-
Mysterious weights when training UNET
I was training the SDXL UNet base model with the diffusers library. It was going great until around step 210k, when the weights suddenly reverted to their original values and stayed that way. I also tried the EMA version, which didn't change at all, and I looked at the tensors' weight values directly, which confirmed my suspicions.
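One quick way to confirm such a reversion is to diff a saved checkpoint against the base UNet. A sketch; the paths and step number are illustrative, and the layout assumes your training script saves checkpoints in diffusers format with a "unet" subfolder.

```python
# Hedged sketch: diff a saved UNet checkpoint against the base SDXL UNet to
# check whether the weights really reverted. Paths/step are illustrative.
import torch
from diffusers import UNet2DConditionModel

base = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
trained = UNet2DConditionModel.from_pretrained("checkpoint-210000", subfolder="unet")

with torch.no_grad():
    diff = sum(
        (p - q).abs().sum().item()
        for p, q in zip(base.parameters(), trained.parameters())
    )
print(f"total absolute weight difference: {diff:.4f}")  # ~0 means reverted
```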
-
I Made Stable Diffusion XL Smarter by Finetuning It on Bad AI-Generated Images
Merging LoRAs is essentially taking a weighted average of the LoRA adapter weights. It's more common in other UIs.
diffusers is working on a PR for it: https://github.com/huggingface/diffusers/pull/4473
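A minimal sketch of that weighted-average idea, independent of the linked PR; the file names are placeholders, and it assumes both LoRAs were trained against the same base model so their keys and shapes match.

```python
# Minimal sketch of merging two LoRAs by weighted-averaging their adapter
# weights. File names are placeholders; both LoRAs must share keys/shapes.
import safetensors.torch

lora_a = safetensors.torch.load_file("style_a.safetensors")
lora_b = safetensors.torch.load_file("style_b.safetensors")

alpha = 0.7  # weight for LoRA A; (1 - alpha) goes to LoRA B
merged = {k: alpha * lora_a[k] + (1 - alpha) * lora_b[k] for k in lora_a}

safetensors.torch.save_file(merged, "merged_lora.safetensors")
```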
What are some alternatives?
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
stable-diffusion - A latent text-to-image diffusion model
AITemplate - AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
keras-cv - Industry-strength Computer Vision workflows with Keras
invisible-watermark - python library for invisible image watermark (blind image watermark)
intel-extension-for-tensorflow - Intel® Extension for TensorFlow*
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.