BLIP
taming-transformers
| | BLIP | taming-transformers |
|---|---|---|
| Mentions | 14 | 35 |
| Stars | 4,242 | 5,354 |
| Growth | 5.5% | 3.9% |
| Activity | 0.0 | 0.0 |
| Latest commit | 7 months ago | about 1 month ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
BLIP
-
MetaCLIP – Meta AI Research
I suggest trying BLIP for this. I've had really good results from that.
https://github.com/salesforce/BLIP
I built a tiny Python CLI wrapper for it to make it easier to try: https://github.com/simonw/blip-caption
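If you just want a quick caption without cloning the repo, a minimal sketch using the Hugging Face transformers port of BLIP (assuming the Salesforce/blip-image-captioning-base checkpoint and that transformers, torch, and pillow are installed) looks roughly like this:

```python
# Minimal BLIP captioning sketch via the Hugging Face transformers port.
# Assumes: pip install transformers torch pillow
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

image = Image.open("photo.jpg").convert("RGB")  # any local photo

# Preprocess, then generate an unconditional caption.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```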
-
Is there a website where you can upload a photo and get the description in a paragraph?
You can download the source and run it yourself from here: https://github.com/salesforce/BLIP
-
Stable Diffusion v2-1-unCLIP model released
Then there's also BLIP (Bootstrapping Language-Image Pre-training).
-
GPT-4 shows emergent Theory of Mind on par with an adult. It scored in the 85+ percentile for a lot of major college exams. It can also do taxes and create functional websites from a simple drawing
Or BLIP
-
meme
GitHub - salesforce/BLIP: PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
-
Object Recognition for Photo Metadata
From what I understand, what's most important to you is having a model that's already trained on something, rather than the architecture. YOLO is probably fine, as would be some of the older ones. You should be able to find a model that's been pretrained on COCO - you can look and see which classes are included. I don't know if there are other broadly trained models available that will serve your purpose. What I'd do is just run your picture through a COCO-trained object detection model and see if the annotations do what you want.
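To make that concrete, here is a rough sketch of the "run it through a COCO-trained detector" step using torchvision's bundled pretrained Faster R-CNN; the file name photo.jpg and the 0.5 score cutoff are arbitrary choices:

```python
# Sketch: label a photo with a COCO-pretrained detector from torchvision.
# Assumes: pip install torch torchvision pillow
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]  # the COCO class list

image = Image.open("photo.jpg").convert("RGB")
with torch.no_grad():
    predictions = model([preprocess(image)])[0]

# Keep reasonably confident detections and print their class names.
for label, score in zip(predictions["labels"], predictions["scores"]):
    if score > 0.5:
        print(f"{categories[label]}: {score:.2f}")
```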
Though backing up a bit, there are also image captioning models that may do what you want for organizing your photos even better. I'm not really familiar with any, though I did come across BLIP the other day; I haven't used it yet: https://github.com/salesforce/BLIP
This may be a better way to get at what you want.
-
I have a problem with the "interrogate" function of Automatic1111's fork. Can someone help me?
git clone https://github.com/salesforce/BLIP.git repositories/BLIP
-
Stable-diffusion in Nix
# Copy models as described in README
cp ~/Downloads/model.ckpt .
cp ~/Downloads/GFPGANv1.3.pth .
# Clone other repos as mentioned in README
mkdir repositories
git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
git clone https://github.com/salesforce/BLIP.git repositories/BLIP
export NIXPKGS_ALLOW_UNFREE=1
nix-shell default.nix
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113  # Also from linux instructions. Can probably be added to default.nix
python webui.py
-
My easy-to-install Windows GUI for Stable Diffusion is ready for a beta release! It supports img2img as well, various samplers, can run multiple scales per image automatically, and more!
Also check img2text (basically to prompt): https://github.com/salesforce/BLIP
-
[D] Author Interview - BLIP: Bootstrapping Language-Image Pre-training (Video)
taming-transformers
-
Automatic1111 for Intel Arc (A380 Tested)
taming-transformers
-
[R] My simple Transformer audio encoder gives the same output for each timestep after the encoder
What’s your goal exactly? Are you trying to make a transformer-based autoencoder of audio spectrograms? If so, you should start with a proven ViT-based AE implementation (either a VAE or a VQ-GAN). But I don’t see why you necessarily need a ViT for this; if you’re working at a much smaller scale, a convolutional architecture is plenty and much more amenable to beginners. See https://github.com/CompVis/taming-transformers for an example of a convolutional VQ-GAN.
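To make the "convolutional is plenty at small scale" point concrete, here is a minimal PyTorch convolutional autoencoder sketch for spectrogram patches; the input shape (1, 128, 128) and the layer widths are illustrative assumptions, not tuned values:

```python
# Minimal convolutional autoencoder for spectrogram patches.
# Input shape (1, 128, 128) and layer widths are illustrative only.
import torch
import torch.nn as nn

class SpectrogramAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Each stride-2 conv halves resolution: 128 -> 64 -> 32 -> 16.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Mirror the encoder with transposed convs: 16 -> 32 -> 64 -> 128.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SpectrogramAE()
batch = torch.randn(8, 1, 128, 128)          # fake spectrogram batch
recon = model(batch)
loss = nn.functional.mse_loss(recon, batch)  # plain reconstruction loss
print(recon.shape, loss.item())
```

If reconstructions look fine but you later need a discrete latent, a vector-quantization layer (as in taming-transformers) can be bolted in between the encoder and decoder.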
-
Trying to make VqGAN+CLIP work again
-
im so lost
Command: "git" clone "https://github.com/CompVis/taming-transformers.git" "C:\AI\stable-diffusion-webui\repositories\taming-transformers"
-
Why is ChatGPT and other large language models not feasible to be used locally in consumer grade hardware while Stable Diffusion is?
See https://arxiv.org/abs/2012.09841 for prior work. The SD authors swap out the Transformer and its language-modelling objective for a UNet and a diffusion objective. In general, the more inductive bias your model has, the more efficient it can be. ChatGPT runs purely on a Transformer architecture, which has far fewer priors than a CNN and requires far more parameters as a result. This may not be the case in the future.
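One way to see that trade-off in numbers: a conv layer is essentially a dense layer with locality and weight sharing baked in, and dropping those priors explodes the parameter count for the same mapping. A toy count (the sizes are arbitrary, purely for illustration):

```python
# Toy illustration of inductive bias: a 3x3 conv is a dense layer
# constrained to local, weight-shared connections. Sizes are arbitrary.
import torch.nn as nn

channels, height, width = 16, 32, 32
conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
dense = nn.Linear(channels * height * width, channels * height * width)

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"3x3 conv (local, shared weights): {count(conv):,} params")   # ~2.3K
print(f"unconstrained dense equivalent:   {count(dense):,} params")  # ~268M
```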
-
1 or 2 Errors Installing Automatic1111 on Mac M1
There is definitely a command for this, but I can't recall it offhand. It's on GitHub: https://github.com/CompVis/taming-transformers
-
Trying to Install InvokeAI and VectorQuantizer2 and taming modules but get error “zsh: parse error near `)’” How to fix? (MAC M1)
I wasn’t able to find a “taming” folder within the site-packages folder, so I looked up how to get VectorQuantizer2 and taming.modules.vqvae.quantize and found this link: https://github.com/CompVis/taming-transformers/blob/master/taming/modules/vqvae/quantize.py. I copied the raw contents and pasted them into the terminal, which gave this error: “zsh: parse error near `)’”. I’m not sure how to fix this so I can install VectorQuantizer2 and use InvokeAI. How do I fix this?
-
AI Is Coming For Commercial Art Jobs. Can It Be Stopped? (Greg Rutkowski quoted)
I say this to everyone: even if SD and the model are legit and legal, do not go around commercialising their outputs or claiming ownership over them, and if you do, then properly cite the source of the model and system along with it. In https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers and https://huggingface.co/CompVis/stable-diffusion-v1-4 there are citations provided for you to use for a reason. I recommend you use them.
-
Stable-diffusion in Nix
# Copy models as described in README
cp ~/Downloads/model.ckpt .
cp ~/Downloads/GFPGANv1.3.pth .
# Clone other repos as mentioned in README
mkdir repositories
git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
git clone https://github.com/salesforce/BLIP.git repositories/BLIP
export NIXPKGS_ALLOW_UNFREE=1
nix-shell default.nix
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113  # Also from linux instructions. Can probably be added to default.nix
python webui.py
-
[D] Where does VQ-GAN get its randomness from?
Code for https://arxiv.org/abs/2012.09841 found: https://compvis.github.io/taming-transformers/
What are some alternatives?
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
a-PyTorch-Tutorial-to-Image-Captioning - Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
CodeFormer - [NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
virtex - [CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
nix-stable-diffusion - Nix-friendly fork of: Optimized Stable Diffusion modified to run on lower GPU VRAM
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
rtic-gcn-pytorch - Official PyTorch Implementation of RTIC
stable-diffusion - A latent text-to-image diffusion model