Stable-textual-inversion_win vs stable-diffusion (by Sygil-Dev)

| | Stable-textual-inversion_win | stable-diffusion |
|---|---|---|
| Mentions | 15 | 111 |
| Stars | 240 | 1,749 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Latest commit | over 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | GNU Affero General Public License v3.0 |
- Mentions: the total number of mentions we've tracked, plus the number of user-suggested alternatives.
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have a higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Stable-textual-inversion_win
Posts with mentions or reviews of Stable-textual-inversion_win. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-09-26.
- Using DreamBooth on SD on a 3090 w/24 GB VRAM (about 1.5 hrs to train)
Would it be possible for you to add this new code to the "regular" textual inversion code, like this one: https://github.com/nicolai256/Stable-textual-inversion_win? I'm using a 3090 with a batch size of 3, 10 workers, and an image size of 384. It works pretty well, but if your modification could reduce the VRAM use, it could go faster.
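As a point of reference, those settings map onto a standard PyTorch data pipeline roughly as sketched below. This is a minimal sketch, not code from the linked repo; the dataset class and path are assumptions.

```python
# Minimal sketch, assuming a plain folder of training images; NOT code from
# the nicolai256 repo, just where "batch size 3, workers 10, size 384" land
# in a standard PyTorch data pipeline.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize(384),        # "size 384"
    transforms.CenterCrop(384),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("training_images/", transform=preprocess)  # path assumed
loader = DataLoader(dataset, batch_size=3, num_workers=10, shuffle=True)
```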
- Question About Running Local Textual Inversion
The Rinongal and nicolai256 versions (the latter is the one explained in Nerdy Rodent's YouTube video, https://www.youtube.com/watch?v=WsDykBTjo20) both work, but the resulting embeddings lack editability compared to ones made with Hugging Face's Colab, which is followed up in a very long issue on Rinongal's GitHub. You can add accumulate_grad_batches: 4 to the end of the finetune files, as shown at this time stamp in Nerdy Rodent's video, to try to alleviate this issue, but the quality isn't as good as an embedding made in the online Colab.
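For context, these repos train with PyTorch Lightning, where accumulate_grad_batches is a standard Trainer option; the YAML line above corresponds roughly to the sketch below (the other values here are illustrative assumptions, not repo defaults).

```python
# Hedged sketch of what "accumulate_grad_batches: 4" configures: gradients
# are summed over 4 batches before each optimizer step, simulating a 4x
# larger effective batch size without using more VRAM per step.
import pytorch_lightning as pl

trainer = pl.Trainer(
    accelerator="gpu",
    devices=1,
    max_steps=6000,               # illustrative assumption, not a repo default
    accumulate_grad_batches=4,    # the setting discussed above
)
```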
- NMKD Stable Diffusion GUI 1.4.0 is here! Now with support for inpainting, HuggingFace concepts, VRAM optimizations, and the model no longer needs to be reloaded for every prompt. Full changelog in comments!
- Useful link
- I like Disco Elysium, so I have been trying some textual inversion training plus some internal prompt business to replicate the look of the portraits.
The prompt for this one was "a portrait of beautiful young *, painting by Michael Garmash and Kilian Eng, in the style of &", after training * with pictures of my GF and & with all the Disco Elysium portrait pictures, using the stuff here: https://github.com/nicolai256/Stable-textual-inversion_win. Also, thank you u/ExponentialCookie.
- My Stable Diffusion GUI update 1.3.0 is out now! Includes optimizedSD code, upscaling and face restoration, seamless mode, and a ton of fixes!
- Textual Inversion Help
Here is an alternate fork of the repo you talked about: https://github.com/nicolai256/Stable-textual-inversion_win
- Is there any info on how to finetune without using textual inversion?
From my understanding, the only fine-tuning people are doing currently uses textual inversion (this https://github.com/nicolai256/Stable-textual-inversion_win/ and this https://www.reddit.com/r/StableDiffusion/comments/wvzr7s/tutorial_fine_tuning_stable_diffusion_using_only/), but this seems very different from the real fine-tuning Emad was talking about, and from what others (like NovelAI) are doing.
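The distinction being asked about is real: textual inversion optimizes only a new token embedding while the model stays frozen, whereas full fine-tuning updates the U-Net weights themselves. Below is a minimal sketch of the difference, assuming diffusers-style model classes rather than either linked codebase.

```python
# Sketch of textual inversion vs. full fine-tuning; model names are the
# standard diffusers checkpoints, used here only for illustration.
import torch
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="text_encoder")

# Textual inversion: freeze everything, train only the token-embedding table
# (in practice only the new placeholder token's row is updated).
for p in unet.parameters():
    p.requires_grad_(False)
for p in text_encoder.parameters():
    p.requires_grad_(False)
embeddings = text_encoder.get_input_embeddings()
embeddings.weight.requires_grad_(True)
ti_optimizer = torch.optim.AdamW([embeddings.weight], lr=5e-3)

# "Real" fine-tuning: the denoising U-Net itself is trainable.
for p in unet.parameters():
    p.requires_grad_(True)
ft_optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)
```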
- A user did an Arvalis / RJ Palmer fine-tune (textual inversion)
Credit to florishdiffusion for showing these gens. I'm not knowledgeable on how to use textual inversion, but it is possible to do in free Colab from this source.
- Self Portrait, using SD and textual inversion trained on images of myself
What is your --init_word? Also, what is your prompt for generation? I have been doing person training for 6 days and not getting good results, damn! I use https://github.com/nicolai256/Stable-textual-inversion_win
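For anyone unfamiliar with the flag: --init_word seeds the new placeholder token's embedding from an existing word (e.g. "person" for portrait training) so optimization doesn't start from noise. The sketch below illustrates the idea using the standard CLIP tokenizer and text encoder from transformers; it is not the repo's own code, and the placeholder token name is an assumption.

```python
# Hedged sketch of what an --init_word does: copy an existing word's
# embedding into the new placeholder token's row before training starts.
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokenizer.add_tokens(["<my-subject>"])                 # placeholder token (assumed name)
text_encoder.resize_token_embeddings(len(tokenizer))

embeddings = text_encoder.get_input_embeddings().weight.data
init_id = tokenizer.convert_tokens_to_ids("person")    # --init_word person
new_id = tokenizer.convert_tokens_to_ids("<my-subject>")
embeddings[new_id] = embeddings[init_id].clone()       # seed from the init word
```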
stable-diffusion
Posts with mentions or reviews of stable-diffusion. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-09-12.
- PSA: You can run your GPUs at 80% power and get the same rendering speeds while saving heat/fan noise/electricity
Use or update this one: https://github.com/hlky/stable-diffusion. It has all the samplers, and if you want perfect faces, try k_euler_a.
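On the power cap in the title: it is usually set with nvidia-smi, but the same thing can be done from Python through NVML. A hedged sketch follows, assuming the pynvml package is installed and the driver permits changing the limit (this typically requires admin/root privileges).

```python
# Hedged sketch: cap GPU 0 to ~80% of its default power limit via NVML,
# matching the PSA above. Requires elevated privileges on most systems.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)  # milliwatts
pynvml.nvmlDeviceSetPowerManagementLimit(handle, int(default_mw * 0.8))
pynvml.nvmlShutdown()
```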
- "a software developer after fixing a bug", by DALL-E 2
Try this one: https://github.com/hlky/stable-diffusion. You need at least a 1050 to run it, though.
- Which is the best fork out there?
- At the end of my rope on the hlky fork, can anyone recommend any alternative GUI forks I could switch to?
https://github.com/hlky/stable-diffusion/issues/153, with 36 comments and tons of before-and-after comparisons, which are now deleted.
- CUDA memory error with the hlky repo (4 GB Nvidia): any ideas?
I wanted to try the hlky version (https://github.com/hlky/stable-diffusion) because of the WebUI and the integration with upscaling models. It should also have an option to be optimized for low VRAM. To avoid getting a green square, I have to add the parameters "--precision full --no-half". When I run a prompt, even with the smallest image size, I immediately get a CUDA memory error. Interestingly, without these parameters there isn't any memory error (but, of course, the result is a green square).
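The two observations fit together: "--precision full --no-half" keeps the weights in fp32, which avoids the all-green output some cards (notably GTX 16xx) produce in half precision, but it roughly doubles VRAM use, hence the out-of-memory error on a 4 GB card. The sketch below shows the same trade-off using the diffusers library instead of the hlky fork; this is standard diffusers usage, not the fork's own flags.

```python
# Half precision (fp16) roughly halves VRAM use; switch the dtype to
# torch.float32 to reproduce the "--precision full --no-half" behavior.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,       # torch.float32 = full precision, ~2x VRAM
).to("cuda")
pipe.enable_attention_slicing()       # trims peak VRAM further on small cards
image = pipe("a castle on a hill at sunset").images[0]
image.save("out.png")
```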
- Fallout 5: Toronto (created with AI)
Made using https://github.com/hlky/stable-diffusion
- Just released a Colab notebook that combines Craiyon + Stable Diffusion
Any chance to get this integrated into something like hlky's web UI?
- AI text-to-image: Moose and stave church with northern lights over the Norwegian flag in the background [OC] More details in the post
Linux guide here. I also run Linux, but I chose to set this up on my Windows box, because the Nvidia drivers on Linux aren't very cooperative when it comes to adjusting the fans based on the card's sensors (so I have to set them manually).
- Using GFPGAN for only the eyes?
I'm seeing GFPGAN essentially remove all texture from faces, and I only want to use it on the eyes. Any thoughts on how to do this? I am using hlky/stable-diffusion now, but I have no issues running a different repo/fork if needed and using the command line.
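GFPGAN has no built-in eyes-only mode, but one workable approach is to restore the whole image and then composite only the eye regions back over the original. A rough sketch follows, assuming the gfpgan Python package; the model path and eye boxes are placeholder assumptions (in practice a facial-landmark detector would supply the coordinates).

```python
# Hedged sketch: full GFPGAN restore, then paste back only the eye regions
# so the rest of the face keeps its original texture.
import cv2
import numpy as np
from gfpgan import GFPGANer

restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=1)   # path assumed
original = cv2.imread("portrait.png")
_, _, restored = restorer.enhance(original, paste_back=True)  # same size at upscale=1

mask = np.zeros(original.shape[:2], dtype=np.float32)
for x, y, w, h in [(210, 180, 60, 30), (330, 180, 60, 30)]:   # eye boxes (assumed)
    mask[y:y + h, x:x + w] = 1.0
mask = cv2.GaussianBlur(mask, (31, 31), 0)[..., None]         # feather the seams

blended = (restored * mask + original * (1 - mask)).astype(np.uint8)
cv2.imwrite("portrait_eyes_only.png", blended)
```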
- What's the best install of Stable Diffusion right now?
What are some alternatives?
When comparing Stable-textual-inversion_win and stable-diffusion, you can also consider the following projects:
stable-diffusion
diffusers-uncensored - Uncensored fork of diffusers
textual_inversion
stable-diffusion-krita-plugin
stable-diffusion - A latent text-to-image diffusion model
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
sd-enable-textual-inversion - Copy these files to your stable-diffusion to enable text-inversion
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
stable_diffusion.openvino
stylegan2-projecting-images - Projecting images to latent space with StyleGAN2.
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
Related comparisons
Stable-textual-inversion_win vs stable-diffusion
stable-diffusion vs diffusers-uncensored
Stable-textual-inversion_win vs textual_inversion
stable-diffusion vs stable-diffusion-krita-plugin
Stable-textual-inversion_win vs stable-diffusion
stable-diffusion vs instant-ngp
Stable-textual-inversion_win vs sd-enable-textual-inversion
stable-diffusion vs stable-diffusion
Stable-textual-inversion_win vs bitsandbytes
stable-diffusion vs stable_diffusion.openvino
Stable-textual-inversion_win vs stylegan2-projecting-images
stable-diffusion vs stable-diffusion-webui