paint-with-words-sd vs PyVaporation

| | paint-with-words-sd | PyVaporation |
|---|---|---|
| Mentions | 13 | 4 |
| Stars | 618 | 71 |
| Growth | - | - |
| Activity | 5.2 | 1.3 |
| Latest commit | about 1 year ago | 12 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
paint-with-words-sd
-
paint with words with loras and multicontrolnet (will pay if needed)
I am referring to this btw: https://github.com/cloneofsimo/paint-with-words-sd
-
More control than ControlNet - code is out for MultiDiffusion Region Control, a prompt on each mask
This essentially supersedes the earlier Nvidia eDiffi / SD paint-with-words attempts at the same thing.
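The region-control idea described above (a separate prompt on each mask) can be sketched roughly as follows. This is not code from the repo, just a minimal illustration of the MultiDiffusion-style blending step: run the denoiser once per regional prompt, then combine the noise predictions weighted by their masks.

```python
import numpy as np

def blend_region_predictions(noise_preds, masks):
    """Blend per-prompt noise predictions by their region masks
    (simplified MultiDiffusion-style region control).

    noise_preds: list of arrays, one denoiser output per regional prompt
    masks: list of same-shape {0, 1} arrays marking each prompt's region
    """
    num = np.zeros_like(noise_preds[0], dtype=float)
    den = np.zeros_like(masks[0], dtype=float)
    for eps, m in zip(noise_preds, masks):
        num += m * eps   # accumulate each prediction inside its region
        den += m         # count overlapping masks per pixel
    # where no mask covers a pixel, fall back to the first prediction
    den_safe = np.where(den == 0, 1.0, den)
    return np.where(den == 0, noise_preds[0], num / den_safe)
```

Each pixel ends up following the prompt whose mask covers it; overlaps are averaged.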
- "Segmentation" ControlNet preprocessor options
-
I figured out a way to apply different prompts to different sections of the image with regular Stable Diffusion models and it works pretty well.
There is a Stable Diffusion paint-with-words repo on GitHub which probably does exactly this, but it never got a UI: https://github.com/cloneofsimo/paint-with-words-sd
-
What do you think will be added/created next?
personally I want to see the eDiffi paint-w/words Stable Diffusion extension https://github.com/cloneofsimo/paint-with-words-sd/commit/789419e3a34f43a1454df5a940020cfa531fbc63 that cloneofsimo was working on before he stopped
- Will models have to be retrained for when this feature is eventually added into SD?
-
Paint with words (aka NVIDIA eDiff-I)
Just found there is a repo for an NVIDIA eDiff-I style img2img workflow for Stable Diffusion. For those unfamiliar, this lets you specify where parts of your text prompt should be placed in the image, giving you much greater control over the composition, e.g.
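The core trick behind eDiff-I's paint-with-words, as described in the paper, is to add a bias to the cross-attention logits so that painted pixels attend more strongly to the token they were assigned. A minimal sketch of that idea (not the repo's actual implementation; the function name and the single scalar weight `w` are simplifying assumptions):

```python
import numpy as np

def paint_with_words_attention(q, k, token_masks, w=1.0):
    """Cross-attention with a paint-with-words bias (simplified sketch).

    q: (pixels, d) image queries; k: (tokens, d) text keys.
    token_masks: (pixels, tokens) array, nonzero where the user painted
    a region for that prompt token, else 0.
    The bias w * mask is added to the attention logits before softmax,
    pulling each painted region toward its assigned token.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                  # (pixels, tokens)
    logits = logits + w * token_masks              # paint-with-words bias
    logits -= logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn
```

In the real model this bias is also scaled with the noise level, so the mask dominates early in sampling and fades as details form.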
-
Different Models = Different prompts?
Paint-with-Words might eventually allow something along those lines, but it's a bit awkward to use now, and AFAIK you still get bleedthrough between multiple human subjects.
-
eDiff-I: A new Text-to-Image Diffusion Model with Ensemble of Expert Denoisers
someone attempted something like paint with words but I think Nvidia's version is better looking.
- Paint with words? What is next? Hope this gets to be a module in automatic 1111 soon.
PyVaporation
- It's a cool approach to provide interactive code examples for your open-source project in Jupyter. Check out how we used it to simplify onboarding for a low-code audience. We would be thankful for every star on the repo!
- Check out our Python framework for modelling and scaling membrane separation processes (mainly pervaporation). We would be thankful for every star on the repo!
- We have developed a Python package for complex membrane engineering tasks. The key aspect was validating the algorithms against reliable published experimental data to increase the trustworthiness of the package. We are trying to get more visibility, please consider starring the project!
- Hey guys, please help us get visibility for our new open-source project for membrane scientists! We will be grateful for each star here:
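For readers unfamiliar with the domain: pervaporation modelling of the kind this package targets is commonly built on the solution-diffusion model, where the partial flux of each component is proportional to its vapour-pressure driving force. The sketch below illustrates that textbook relation only; it is not PyVaporation's actual API, and the function name and units are illustrative assumptions.

```python
def partial_flux(permeance, x_feed, gamma, p_sat, y_perm=0.0, p_perm=0.0):
    """Partial flux of one component through a pervaporation membrane,
    per the solution-diffusion model:

        J_i = P_i * (x_i * gamma_i * p_i_sat - y_i * p_permeate)

    permeance: P_i, e.g. in kg / (m^2 h kPa)
    x_feed:    liquid-phase mole fraction of component i in the feed
    gamma:     activity coefficient of i in the feed mixture
    p_sat:     saturated vapour pressure of i at feed temperature (kPa)
    y_perm:    permeate-side mole fraction of i
    p_perm:    total permeate pressure (kPa), ~0 under strong vacuum
    """
    driving_force = x_feed * gamma * p_sat - y_perm * p_perm
    return permeance * driving_force
```

Under a strong permeate vacuum the second term vanishes, so the flux reduces to permeance times the feed-side partial vapour pressure.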
What are some alternatives?
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
discoart - Create Disco Diffusion artworks in one line
openOutpaint - local offline javascript and html canvas outpainting gizmo for stable diffusion webUI API
Kandinsky-2 - Kandinsky 2 - multilingual text2image latent diffusion model
LECO - Low-rank adaptation for Erasing COncepts from diffusion models.
proposal-symbols-as-weakmap-keys - Permit Symbols as keys in WeakMaps, entries in WeakSets and WeakRefs, and registered in FinalizationRegistries
Rerender_A_Video - [SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation
doohickey - Doohickey is a stable diffusion tool for technical artists who want to stay up-to-date with the latest developments in the field.
openOutpaint-webUI-extension - direct A1111 webUI extension for openOutpaint
awesome-stable-diffusion - Curated list of awesome resources for the Stable Diffusion AI Model.
daam - Diffusion attentive attribution maps for interpreting Stable Diffusion.
diffusers-interpret - Model explainability for 🤗 Diffusers. Get explanations for your generated images.