| | stable-diffusion-tensorflow | InvokeAI |
|---|---|---|
| Mentions | 18 | 239 |
| Stars | 1,569 | 21,337 |
| Growth | - | 1.4% |
| Activity | 0.0 | 10.0 |
| Latest commit | 9 months ago | 7 days ago |
| Language | Python | TypeScript |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
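As an illustration of this kind of recency weighting (the formula below is an assumption for illustration, not the site's actual metric), an activity score can be sketched as an exponentially decayed sum over commits:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age / half_life), so newer commits count more."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Four recent commits outweigh four old ones of the same count
recent = activity_score([1, 3, 7, 10])
stale = activity_score([200, 250, 300, 400])
assert recent > stale
```

A commit made today contributes a full 1.0 to the score, while one made a half-life ago contributes 0.5, which matches the idea that "recent commits have higher weight than older ones."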
stable-diffusion-tensorflow
- Is there a Keras SD model, or something similar, that I can train from scratch?
-
Anyone attempted to convert stablediffusion tensorflow to tf lite?
I was curious whether someone has attempted the conversion. I tried here https://github.com/divamgupta/stable-diffusion-tensorflow/issues/58 but am hitting an input-shapes error. This is my first time attempting the conversion; I would love to run it on an Edge TPU.
-
Stable Diffusion Tensorflow to TF Lite
Checking here whether someone has tried to convert the TensorFlow diffusion model to TF Lite: https://github.com/divamgupta/stable-diffusion-tensorflow/issues/58
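The input-shape errors reported in the linked issue are typical of dynamic-shape models; TF Lite conversion generally wants fixed input shapes. A minimal sketch of the Keras-to-TFLite path, using a tiny toy model rather than the actual SD graph (the layer sizes here are arbitrary illustrations), looks like:

```python
import tensorflow as tf

# Toy stand-in for a Keras model; a fixed Input shape avoids the
# dynamic-shape errors that trip up the converter.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
])

# Convert the Keras model to a TFLite flatbuffer (bytes).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# The bytes can then be written out for an interpreter / Edge TPU compiler.
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

For an Edge TPU specifically, the converted model would additionally need full-integer quantization and a pass through the Edge TPU compiler, which the diffusion model's size makes challenging.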
-
SD on intel arc?
Actually, I was just on GitHub submitting issues from my testing of Intel's PyTorch and TensorFlow extensions when I saw this. It seems someone has already ported SD to the TensorFlow framework, so you can probably start using Intel's Extension for TensorFlow with it immediately; according to this article, you can use Intel's extension within WSL under Windows as well. Unfortunately, the person whose issue I linked has been facing serious performance problems, with SD inference on an A770 taking many minutes longer than it should. You might be better off waiting for Intel's Extension for TensorFlow version 1.2 or later, so that by the time you use it, Intel has ironed out most of the major bugs :)
-
Stable Diffusion with AMDGPU on WSL
tensorflow-stable-diffusion
-
Image2Image with AMD hardware?
```shell
# clone
git clone https://github.com/divamgupta/stable-diffusion-tensorflow.git
cd stable-diffusion-tensorflow

# create and activate a venv
python -m venv --prompt sdtf-windows-directml venv
venv\Scripts\activate

# verify the venv is installed and activated
pip --version

# install deps
pip install -r requirements.txt
pip install tensorflow-directml-plugin

# you should see DML debug output and at least one GPU
# (double quotes: single-quoted -c strings fail in Windows cmd)
python -c "import tensorflow as tf; print(tf.config.list_physical_devices())"

# run (show help)
python text2image.py --help
python text2image.py --prompt "a fluffy kitten"
```
-
I have no PC. Just DLed this for iOS
(Answers based on the open Stable Diffusion model.) If you have an M1 processor: https://github.com/divamgupta/diffusionbee-stable-diffusion-ui (I've tested it). Or this one, which claims to be faster, using TensorFlow: https://github.com/divamgupta/stable-diffusion-tensorflow
-
Keras Inpainting Colab
Added inpainting support to the original Keras implementation: https://github.com/divamgupta/stable-diffusion-tensorflow Colab: https://colab.research.google.com/drive/1Bf-bNmAdtQhPcYNyC-guu0uTu9MYYfLu GitHub page: https://github.com/ShaunXZ/stable-diffusion-tensorflow
-
[N] Stable Diffusion reaches new record (with explanation + colab link)
I wonder if you mean 13 seconds per image because this implementation reports ~10s per image with mixed precision.
-
High-performance image generation using Stable Diffusion in KerasCV
On an Intel MacBook Pro, CPU-only, the original PyTorch implementation [1] utilized only one core. A TensorFlow implementation [2] with oneDNN support utilized most of the cores and ran at ~11 sec/iteration. Another OpenVINO-based implementation [3] ran at ~6.0 sec/iteration.
[1] https://github.com/CompVis/stable-diffusion/
[2] https://github.com/divamgupta/stable-diffusion-tensorflow/
[3] https://github.com/bes-dev/stable_diffusion.openvino/
InvokeAI
-
Stable Diffusion 3
Probably not, since I have no idea what you're talking about. I've just been using the models that InvokeAI (2.3, I only just now saw there's a 3.0) downloads for me [0]. The SD1.5 one is as good as ever, but the SD2 model introduces artifacts on (many, but not all) faces and copyrighted characters.
[0] https://github.com/invoke-ai/InvokeAI
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I actually used the rocm/pytorch image you also linked.
I'm not sure what you're pointing to with your reference to the Fedora-based images. I'm quite happy with my NixOS install and really don't want to switch to anything else. And as long as I have the correct kernel module, my host OS really shouldn't matter to run any of the images.
And I'm sure it can be made to work with many base images, my point was just that the dependency management around pytorch was in a bad state, where it is extremely easy to break.
> Anyways, hopefully this PR fixes the immediate issue: https://github.com/invoke-ai/InvokeAI/pull/5714/files
It does! At least for me. It is my PR after all ;)
-
Can some expert analyze a github repo and tell us if it's really safe or not?
The data being flagged is not in that github repo, it's fetched from elsewhere and I don't fancy spending time looking for it. The alert is for 'Sirefef!cfg' which has been reported as a false positive with a bunch of other stable diffusion projects (https://www.reddit.com/r/StableDiffusion/comments/101zjec/trojanwin32sirefefcfg_an_apparently_common_false/, https://www.reddit.com/r/StableDiffusion/comments/xmhukb/trojan_in_waifudiffusion_model_file/, https://github.com/invoke-ai/InvokeAI/issues/2773 )
-
What is the most effcient port of SD to mac?
I haven't tried it recently, but InvokeAI runs on Mac. I used to run it on my MacBook, but have since gotten a Windows laptop.
-
Easy Stable Diffusion XL in your device, offline
There are already a number of local inference options that are (crucially) open-source, with more robust feature sets.
And if the defense here is "but Auto1111 and Comfy don't have as user-friendly a UI", that's also already covered. https://github.com/invoke-ai/InvokeAI
-
Ask HN: Selfhosted ChatGPT and Stable-diffusion like alternatives?
https://github.com/invoke-ai/InvokeAI should work on your machine. For LLM models, the smaller ones should run using llama.cpp, but I don't think you'll be happy comparing them to ChatGPT.
- 🚀 InvokeAI 3.4 now supports LCM & LCM-LoRAs and much more!
-
Best ai image generator without a nsfw filter?
Stable Diffusion. See /r/StableDiffusion; there are many tutorials on how to set it up locally and use it. InvokeAI is the easiest way to set it up: https://github.com/invoke-ai/InvokeAI
-
What's the best stable diffusion client for base m1 MacBook air?
InvokeAI
- invoke-ai/InvokeAI
What are some alternatives?
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
stable-diffusion
AITemplate - AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
ControlNet - Let us control diffusion models!
keras-cv - Industry-strength Computer Vision workflows with Keras
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
intel-extension-for-tensorflow - Intel® Extension for TensorFlow*
dreambooth-gui
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM