Vision-DiffMask vs diffusers-interpret

| | Vision-DiffMask | diffusers-interpret |
|---|---|---|
| Mentions | 2 | 15 |
| Stars | 27 | 259 |
| Growth | - | - |
| Activity | 4.3 | 10.0 |
| Last Commit | 2 months ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Vision-DiffMask
- [R] VISION DIFFMASK: Faithful Interpretation of Vision Transformers with Differentiable Patch Masking
  Found relevant code at https://github.com/AngelosNal/Vision-DiffMask + all code implementations here
diffusers-interpret
- Stable Diffusion links from around September 29, 2022 that I collected for further processing
- Diffusers-Interpret 🤗🧨🕵️‍♀️ - Model explainability for 🤗 Diffusers
  Check the project at https://github.com/JoaoLages/diffusers-interpret
- Diffusers-Interpret v0.4.0 is out! Explainability for Stable Diffusion
- Can we please make a general update on all the "most important" news/repos available?
  For those who want to explore what the denoising process looks like, check out the [diffusers-interpret package](https://github.com/JoaoLages/diffusers-interpret)! You can generate a GIF like [this one](https://github.com/TomPham97/diffuser/blob/main/diffusion-process.gif?raw=true).
- Commas, How do they work?!
  If you have lots of RAM, diffusers-interpret is an explainability tool that can show exactly how much each token is being weighted and which part of the image it is influencing.
- [D] Senior research scientist at GoogleAI, Negar Rostamzadeh: "Can't believe Stable Diffusion is out there for public use and that's considered as 'ok'!!!"
  github.com/JoaoLages/diffusers-interpret
- Model explainability for 🤗 Diffusers. Get explanations for your generated images with the latest stable diffusion model!
- [P] Model explainability for 🤗 Diffusers. Get explanations for your generated images with the latest stable diffusion model!
What are some alternatives?
transformers-interpret - Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
stable-diffusion-webui - Stable Diffusion web UI
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
diffusion-ui - Frontend for deeplearning Image generation
stable-diffusion-webui-feature-showcase - Feature showcase for stable-diffusion-webui
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
stable-diffusion
CogVideo - Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
stable-diffusion - A latent text-to-image diffusion model
awesome-stable-diffusion - Curated list of awesome resources for the Stable Diffusion AI Model.
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM