BLIP vs Dreambooth-Stable-Diffusion
| | BLIP | Dreambooth-Stable-Diffusion |
|---|---|---|
| Mentions | 14 | 100 |
| Stars | 4,242 | 3,162 |
| Growth | 5.5% | - |
| Activity | 0.0 | 6.8 |
| Last commit | 7 months ago | 4 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
BLIP
-
MetaCLIP – Meta AI Research
I suggest trying BLIP for this. I've had really good results from that.
https://github.com/salesforce/BLIP
I built a tiny Python CLI wrapper for it to make it easier to try: https://github.com/simonw/blip-caption
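For a quick sense of what a wrapper like that does under the hood, here is a minimal captioning sketch using the Hugging Face port of BLIP rather than the repo's own demo code (assumptions: the `transformers` and `Pillow` packages are installed, and `photo.jpg` is a hypothetical local image):
```python
# Minimal BLIP captioning sketch via the Hugging Face port (assumption:
# this is not simonw/blip-caption's internals, just the same model family).
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```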
-
Is there a website where you can upload a photo and get the description in a paragraph?
You can download the source and run it yourself from here: https://github.com/salesforce/BLIP
-
Stable Diffusion v2-1-unCLIP model released
Then there's also BLIP (Bootstrapping Language-Image Pre-training).
-
GPT-4 shows emergent Theory of Mind on par with an adult. It scored in the 85+ percentile for a lot of major college exams. It can also do taxes and create functional websites from a simple drawing
Or BLIP
-
meme
GitHub - salesforce/BLIP: PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
-
Object Recognition for Photo Metadata
From what I understand, what's most important to you is having a model that's already trained on something, rather than the architecture. YOLO is probably fine, as would be some of the older ones. You should be able to find a model that's been pretrained on COCO - you can look and see what classes are included. I don't know if there are other broadly trained models available that will serve your purpose. What I'd do is just run your picture through a COCO-trained object detection model and see if the annotations do what you want; a sketch of that follows below.
Though backing up a bit, there are also image captioning models that may do a better job of what you want for organizing your photos. I'm not really familiar with any, though I did come across BLIP the other day; I haven't used it: https://github.com/salesforce/BLIP
This may be a better way to get at what you want.
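As a concrete sketch of the COCO suggestion above, here is torchvision's Faster R-CNN standing in for "any COCO-pretrained detector" (assumptions: torchvision >= 0.13 for the weights API, and a hypothetical local file `photo.jpg`):
```python
# Run a COCO-pretrained detector and print the classes it sees.
# Sketch only: Faster R-CNN is one choice among many COCO models.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]  # the COCO class list

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
with torch.no_grad():
    pred = model([preprocess(image)])[0]

for label, score in zip(pred["labels"], pred["scores"]):
    if score > 0.5:  # keep reasonably confident detections only
        print(f"{categories[int(label)]}: {score:.2f}")
```
Printing the detected class names against your own photos is a quick way to check whether the COCO label set covers the metadata you actually need.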
-
I have a problem with the "interrogate" function of Automatic1111's fork. Can someone help me?
```
git clone https://github.com/salesforce/BLIP.git repositories/BLIP
```
-
Stable-diffusion in Nix
```
# Copy models as described in README
cp ~/Downloads/model.ckpt .
cp ~/Downloads/GFPGANv1.3.pth .
# Clone other repos as mentioned in README
mkdir repositories
git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
git clone https://github.com/salesforce/BLIP.git repositories/BLIP
export NIXPKGS_ALLOW_UNFREE=1
nix-shell default.nix
# Also from linux instructions. Can probably be added to default.nix
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
python webui.py
```
-
My easy-to-install Windows GUI for Stable Diffusion is ready for a beta release! It supports img2img as well, various samplers, can run multiple scales per image automatically, and more!
Also check out img2text (basically image-to-prompt): https://github.com/salesforce/BLIP
- [D] Author Interview - BLIP: Bootstrapping Language-Image Pre-training (Video)
Dreambooth-Stable-Diffusion
-
Will there be comprehensive tutorials for fine-tuning SD XL when it comes out?
Tons of stuff here, no? https://github.com/JoePenna/Dreambooth-Stable-Diffusion/
-
Useful Links
Joe Penna's Dreambooth (Tutorial | 24GB) - the most popular DB repo, with great results.
-
Dreambooth / Custom Training / Model - what's the state of the art?
1) The https://github.com/JoePenna/Dreambooth-Stable-Diffusion instructions say to use the 1.5 checkpoints - is that the latest? Or can I use the 2+ models?
-
My Experience with Training Real-Person Models: A Summary
I quickly turned to the second library, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, because its readme was very encouraging and its results were the best. Unfortunately, to use it on Colab you need to sign up for Colab Pro to get advanced GPUs (at least 24GB of VRAM), and training a model requires at least 14 compute units. As a poor Chinese person, I could only buy Colab Pro through a proxy. The results from JoePenna/Dreambooth-Stable-Diffusion were fantastic, and the preparation was straightforward, requiring only <=20 512x512 photos and no captions. I used it to create many beautiful photos.
- I Used Stable Diffusion and Dreambooth to Create an Art Portrait of My Dog
- training
-
Training a model on Iwanaga Kotoko (from in/spectre), which step do you guys think the model is at its best?
I've found EveryDream to be brilliant and have switched from JoePenna's Dreambooth, because I get better results so long as I provide good captions for all the images, even if preparing the dataset takes 3x as long (it took me 2 hours to crop and label the 54 images).
-
Dreambooth training results for face, object and style datasets with various prior regularization settings.
From what I know, you can train at whatever size you want, but you need software that supports it. For example, the ShivamShrirao/diffusers repo seems to allow a change of dimension. You also need hardware that can support the training, because bigger images need more VRAM - for example, the Joe Penna repo uses ~23GB at 512x512px, so it's probably not a valid option. The ShivamShrirao repo, though, has optimizations that allow it to run with less VRAM. A rough illustration of the resolution/VRAM relationship is sketched below.
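To see the "bigger images need more VRAM" point concretely, here is a rough measurement sketch with a stand-in convnet (assumptions: a CUDA GPU is available; ResNet-50 is not Stable Diffusion, so only the trend is meaningful, not the absolute numbers):
```python
# Rough trend check: peak training memory vs. input resolution.
# Stand-in convnet only (assumption: CUDA GPU present); Stable Diffusion's
# absolute numbers differ, but memory grows with resolution the same way.
import torch
import torchvision

model = torchvision.models.resnet50().cuda()
for size in (256, 512, 768):
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    x = torch.randn(2, 3, size, size, device="cuda")
    model(x).sum().backward()  # include the backward pass, as training does
    model.zero_grad(set_to_none=True)
    peak_mib = torch.cuda.max_memory_allocated() / 2**20
    print(f"{size}x{size}: peak {peak_mib:.0f} MiB")
```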
-
Starting to get quite good results with Dreambooth. What do you think? (Follow @RokStrnisa on Twitter for more.)
This is a good starting place: https://github.com/JoePenna/Dreambooth-Stable-Diffusion
- I'm a N00b with training stuff. Trying to get runpod with Dreambooth training some images (80 total) and I'm getting this error. Help?
What are some alternatives?
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
Dreambooth-SD-optimized - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
a-PyTorch-Tutorial-to-Image-Captioning - Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
Stable-Diffusion-Regularization-Images - For use with fine-tuning, especially the current implementation of "Dreambooth".
CodeFormer - [NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
A1111-Web-UI-Installer - Complete installer for Automatic1111's infamous Stable Diffusion WebUI
virtex - [CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations
civitai - A repository of models, textual inversions, and more
nix-stable-diffusion - Nix-friendly fork of: Optimized Stable Diffusion modified to run on lower GPU VRAM
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.