diffusers
invisible-watermark
| | diffusers | invisible-watermark |
|---|---|---|
| Mentions | 266 | 20 |
| Stars | 22,543 | 1,447 |
| Growth | 6.3% | 4.7% |
| Activity | 9.9 | 3.2 |
| Latest commit | 3 days ago | 7 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
diffusers
- StableDiffusionSafetyChecker
- 🧨 diffusers 0.24.0 is out with Kandinsky 3.0, IP Adapters, and others
- What am I missing here? Where's the RND coming from?
I'm missing something about the random factor in the sample code from https://github.com/huggingface/diffusers/blob/main/README.md
- T2IAdapter+ControlNet at the same time
Hey people, I noticed that combining these two methods in a single forward pass increases the controllability of the generation quite a bit. I was kind of puzzled that ControlNet sometimes yielded better results than T2IAdapter and sometimes it was the other way around, so I decided to test both at the same time, and the results were quite nice. Some visuals and more motivation here: https://github.com/huggingface/diffusers/issues/5847 And it was already merged here: https://github.com/huggingface/diffusers/pull/5869
- Won't you benchmark me?
Open Parti Prompts: The better way to evaluate diffusion models (repo)
- kohya_ss error. How do I solve this?
You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
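For context, that warning is emitted when a pipeline is constructed with the checker disabled. A minimal sketch of the pattern it refers to (the model id below is only an example):

```python
from diffusers import StableDiffusionPipeline

# Passing safety_checker=None removes the StableDiffusionSafetyChecker
# from the pipeline and triggers the warning quoted above.
# The model id here is just an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,
)
```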
- Making a ControlNet inpaint for sdxl
- Stable Diffusion Gets a Major Boost with RTX Acceleration
For developers, TensorRT support also exists for the diffusers library via community pipelines. [1] It's limited, but if you're only supporting a subset of features, it can help.
In general, these insane speed boosts come at the cost of bleeding-edge features.
[1] https://github.com/huggingface/diffusers/blob/28e8d1f6ec82a6...
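If I understand the mechanism right, community pipelines like the TensorRT one are loaded through the `custom_pipeline` argument of `DiffusionPipeline.from_pretrained`. A rough sketch only: the pipeline identifier and model id below are assumptions, and the TensorRT pipelines have extra requirements (ONNX, TensorRT), so check the linked community folder for the exact names.

```python
import torch
from diffusers import DiffusionPipeline

# Rough sketch: load a community pipeline by name. The identifier below is
# an assumption -- the TensorRT txt2img pipeline lives in the diffusers
# examples/community folder and needs TensorRT + ONNX installed.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    custom_pipeline="stable_diffusion_tensorrt_txt2img",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
```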
- Mysterious weights when training UNET
I was training the SDXL UNet base model with the diffusers library, and it was going great until around step 210k, when the weights suddenly reverted to their original values and stayed that way. I also tried the EMA version, which didn't change at all. Looking at the tensor weight values directly confirmed my suspicions.
- I Made Stable Diffusion XL Smarter by Finetuning It on Bad AI-Generated Images
Merging LoRAs is essentially taking a weighted average of the LoRA adapter weights. It's more common in other UIs.
diffusers is working on a PR for it: https://github.com/huggingface/diffusers/pull/4473
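For intuition, the weighted average described above is just an elementwise blend of the two adapters' tensors. A minimal sketch (the helper and file names are hypothetical, not the diffusers API that PR adds):

```python
import torch

def merge_loras(lora_a: dict, lora_b: dict, alpha: float = 0.5) -> dict:
    """Blend two LoRA state dicts elementwise: alpha * A + (1 - alpha) * B.
    Hypothetical helper -- assumes both adapters share keys and shapes."""
    return {k: alpha * lora_a[k] + (1.0 - alpha) * lora_b[k] for k in lora_a}

# Hypothetical file names; each .bin holds a LoRA adapter state dict.
style = torch.load("style_lora.bin")
detail = torch.load("detail_lora.bin")
torch.save(merge_loras(style, detail, alpha=0.7), "merged_lora.bin")
```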
invisible-watermark
- Why & How to check Invisible Watermark
I'm not sure your online tool is working. I tried it with the watermarked example image from https://github.com/ShieldMnt/invisible-watermark, and your tool returned that it did not detect a watermark:
- The AI bots have arrived at r/programming...
The public availability and quality of LLMs and stable diffusion have been an unprecedented disaster for spam mitigation largely because there is no effective way to determine if this content was created and posted by a human being. Particularly with text content, the amount of information present is so small that I don't believe there is a way to definitively analyze it and concretely say whether or not it was generated by an LLM. The only potential way to do so that I can think of would be to check every comment against the output of each LLM service provider, but that's a futile endeavor because you can go back to inserting typos and substitutions, reorder the text or omit some of it, mash multiple outputs together, or even self-host an LLM and skip all the bullshit from the start. At least the images and videos being created by stable diffusion can be watermarked reasonably well.
- SD Watermark checker. How do I check if an image is generated?
I found an article but I don't understand it... is there any video tutorial or anything?
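For anyone else wondering how to check: the README of https://github.com/ShieldMnt/invisible-watermark shows a decode step roughly like this (a minimal sketch; the 32-bit payload length matches the 4-byte example watermark and is an assumption for other images):

```python
import cv2
from imwatermark import WatermarkDecoder

# Read the image and try to extract a 32-bit (4-byte) watermark payload
# using the default dwtDct method from the invisible-watermark README.
bgr = cv2.imread("suspect_image.png")
decoder = WatermarkDecoder("bytes", 32)
payload = decoder.decode(bgr, "dwtDct")
print(payload.decode("utf-8", errors="replace"))
```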
- MidJourney blocked content it generated as sexually explicit...
Creating invisible watermark encoder (see https://github.com/ShieldMnt/invisible-watermark)...
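That log line comes from the official Stable Diffusion scripts; the encoder side of the same library looks roughly like this (a minimal sketch following the invisible-watermark README, not the exact Stable Diffusion code):

```python
import cv2
from imwatermark import WatermarkEncoder

# Embed a 4-byte payload with the dwtDct method, as in the library README.
bgr = cv2.imread("generated.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"test")
bgr_marked = encoder.encode(bgr, "dwtDct")
cv2.imwrite("generated_wm.png", bgr_marked)
```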
- How would an AI art company like Midjourney know you were selling imagery you created using their platform?
Tools to add this kind of watermarking are publicly available, or could be reimplemented by in-house developers if they don't like FOSS licenses.
- New Art Platforms for Artists and the death of old ones?
Most major AI generators embed invisible watermarks into the images so that they can detect them later and avoid training on generated imagery. I know Stable Diffusion uses this Python library to do it: https://github.com/ShieldMnt/invisible-watermark I haven't bothered to look up others, but they follow similar steps.
- [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models?
- Just saw this post regarding new Anti-AI software on LinkedIn. What are your opinions on this? Can this even work?
It uses the same library as Stable Diffusion (https://github.com/ShieldMnt/invisible-watermark) without giving credit in its GitHub repository, which does not even contain the sources of its 3 lines of code. This watermark doesn't protect anything: for it to matter, the bots that scrape images from the internet would have to make the effort to read the watermark and skip those images in their datasets (the best-case scenario, and totally utopian). The repository is suspicious and could be a way to install malware.
- Stable diffusion uses https://github.com/ShieldMnt/invisible-watermark by default unless you check "Do not add watermark to images" in settings
- Looks like Stable Diffusion 2.0 was released, with some anticipated features
"This script incorporates an invisible watermarking of the outputs, to help viewers identify the images as machine-generated."
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
stable-diffusion - A latent text-to-image diffusion model
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
onnx - Open standard for machine learning interoperability
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.