textual_inversion vs InvokeAI

| | textual_inversion | InvokeAI |
|---|---|---|
| Mentions | 30 | 239 |
| Stars | 2,743 | 21,337 |
| Growth | - | 1.4% |
| Activity | 0.8 | 10.0 |
| Last commit | about 1 year ago | 4 days ago |
| Language | Jupyter Notebook | TypeScript |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
textual_inversion
- FLiP Stack Weekly for 06 February 2023
- Loading textual inversion embeddings in vanilla SD library?
- Embeddings without using AUTO1111
- How to use embeddings with PyTorch
Checking out https://github.com/rinongal/textual_inversion, which has some possibly informative examples and scripts.
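Loading a textual-inversion embedding outside AUTOMATIC1111 comes down to reading the saved tensor and appending it as a new row of the text encoder's token-embedding table. A minimal sketch with plain PyTorch, assuming the rinongal-style `{"string_to_param": {"*": ...}}` file layout; the file name and matrix sizes below are made up for the demo:

```python
# Hypothetical sketch: load a textual-inversion vector and merge it into
# a text encoder's embedding table. The checkpoint layout follows the
# rinongal/textual_inversion convention; names and sizes are illustrative.
import torch

def load_ti_vector(path: str) -> torch.Tensor:
    """Return the learned vector stored in a textual-inversion .pt file."""
    data = torch.load(path, map_location="cpu")
    return data["string_to_param"]["*"]

def append_token_vector(table: torch.Tensor, vec: torch.Tensor) -> torch.Tensor:
    """Append one learned token vector as a new row of the embedding table."""
    return torch.cat([table, vec.reshape(1, -1)], dim=0)

# Demo with a fake embedding file
vec = torch.randn(768)  # SD 1.x text-encoder width
torch.save({"string_to_param": {"*": vec}}, "/tmp/demo_embedding.pt")

table = torch.randn(49408, 768)  # CLIP-sized vocabulary table
table = append_token_vector(table, load_ti_vector("/tmp/demo_embedding.pt"))
print(tuple(table.shape))  # (49409, 768)
```

In practice you would also register a new placeholder token with the tokenizer so the appended row is addressable from a prompt; the diffusers library's `load_textual_inversion` wraps both steps.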
- Textual Inversion
- Advice on Automatic1111 textual inversion tuning?
- Hi. Is training my own textual inversion feasible on one 1070? And how long does it take?
I think currently you will need about 20 GB of VRAM...; options are: 1. https://github.com/rinongal/textual_inversion - locally
- Question About Running Local Textual Inversion
The Rinongal and nicolai256 versions (the latter is the one explained in Nerdy Rodent's YouTube video https://www.youtube.com/watch?v=WsDykBTjo20) work, but they lack editability compared to an embedding made with Hugging Face's Colab, an issue followed up in a very long thread on Rinongal's GitHub. You can add accumulate_grad_batches: 4 to the end of the finetune config files, as shown in Nerdy Rodent's video at this time stamp, to try to alleviate this, but the quality isn't as good as an embedding made in the online Colab.
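The `accumulate_grad_batches` tweak mentioned above is a standard PyTorch Lightning trainer option. A sketch of where it would sit in a rinongal-style `v1-finetune.yaml` — the exact nesting is an assumption and depends on the config version:

```yaml
# Sketch: gradient accumulation in a v1-finetune.yaml-style config.
# Accumulating over 4 steps simulates a 4x larger batch without extra VRAM.
lightning:
  trainer:
    accumulate_grad_batches: 4
```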
- How close are we to full movie generation from a technical standpoint?
That may mostly solve that but it’s too early right now: https://github.com/rinongal/textual_inversion
For fun I tried to make an entire animated music video, but it took over one week of processing and basically lost coherence by 30 seconds, so I just did one third:
https://youtu.be/f3GfUKJBUYA
- Easy Textual Inversion tutorial. How To Train Stable Diffusion With Your Own Art.
The Hugging Face models don't work with the local Stable Diffusion; only models trained locally with this repo https://github.com/rinongal/textual_inversion can be installed, at least for now.
InvokeAI
- Stable Diffusion 3
Probably not, since I have no idea what you're talking about. I've just been using the models that InvokeAI (2.3, I only just now saw there's a 3.0) downloads for me [0]. The SD1.5 one is as good as ever, but the SD2 model introduces artifacts on (many, but not all) faces and copyrighted characters.
[0] https://github.com/invoke-ai/InvokeAI
- AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I actually used the rocm/pytorch image you also linked.
I'm not sure what you're pointing to with your reference to the Fedora-based images. I'm quite happy with my NixOS install and really don't want to switch to anything else. And as long as I have the correct kernel module, my host OS really shouldn't matter to run any of the images.
And I'm sure it can be made to work with many base images, my point was just that the dependency management around pytorch was in a bad state, where it is extremely easy to break.
> Anyways, hopefully this PR fixes the immediate issue: https://github.com/invoke-ai/InvokeAI/pull/5714/files
It does! At least for me. It is my PR after all ;)
- Can some expert analyze a github repo and tell us if it's really safe or not?
The data being flagged is not in that GitHub repo; it's fetched from elsewhere, and I don't fancy spending time looking for it. The alert is for 'Sirefef!cfg', which has been reported as a false positive in a bunch of other Stable Diffusion projects (https://www.reddit.com/r/StableDiffusion/comments/101zjec/trojanwin32sirefefcfg_an_apparently_common_false/, https://www.reddit.com/r/StableDiffusion/comments/xmhukb/trojan_in_waifudiffusion_model_file/, https://github.com/invoke-ai/InvokeAI/issues/2773 )
- What is the most efficient port of SD to mac?
I haven't tried it recently, but InvokeAI runs on Mac. I used to run it on my MacBook, but have since gotten a Windows laptop.
- Easy Stable Diffusion XL in your device, offline
There are already a number of local inference options that are (crucially) open-source, with more robust feature sets.
And if the defense here is "but Auto1111 and Comfy don't have as user-friendly a UI", that's also already covered. https://github.com/invoke-ai/InvokeAI
- Ask HN: Selfhosted ChatGPT and Stable-diffusion like alternatives?
https://github.com/invoke-ai/InvokeAI should work on your machine. For LLM models, the smaller ones should run using llama.cpp, but I don't think you'll be happy comparing them to ChatGPT.
- 🚀 InvokeAI 3.4 now supports LCM & LCM-LoRAs and much more!
- Best ai image generator without a nsfw filter?
Stable Diffusion. /r/stablediffusion There are many tutorials on how to set it up locally and use it. InvokeAI is the easiest way to set it up. https://github.com/invoke-ai/InvokeAI
- What's the best stable diffusion client for base m1 MacBook air?
InvokeAI
- invoke-ai/InvokeAI
What are some alternatives?
stable-diffusion - A latent text-to-image diffusion model
stable-diffusion-webui - Stable Diffusion web UI
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
stable-diffusion
ControlNet - Let us control diffusion models!
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
VideoX - VideoX: a collection of video cross-modal models
dreambooth-gui
Stable-textual-inversion_win
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM