open_clip
Real-ESRGAN
 | open_clip | Real-ESRGAN
---|---|---
Mentions | 28 | 131
Stars | 8,452 | 26,111
Growth (stars, month over month) | 8.2% | -
Activity | 8.2 | 2.7
Latest commit | 17 days ago | 17 days ago
Language | Jupyter Notebook | Python
License | GNU General Public License v3.0 or later | BSD 3-clause "New" or "Revised" License
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open_clip
-
A History of CLIP Model Training Data Advances
While OpenAI’s CLIP model has garnered a lot of attention, it is far from the only game in town—and far from the best! On the OpenCLIP leaderboard, for instance, the largest and most capable CLIP model from OpenAI ranks just 41st(!) in its average zero-shot accuracy across 38 datasets.
-
How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
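As a rough, minimal sketch of that idea with open_clip (the ViT-B-32 / laion2b_s34b_b79k names are real open_clip identifiers; the image file and captions are hypothetical placeholders):

    import torch
    import open_clip
    from PIL import Image

    # Load a CLIP-style model plus its matching preprocessing pipeline
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k")
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical image
    texts = tokenizer(["a photo of a cat", "a photo of a dog"])

    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(texts)
        # Normalize so the dot product below is cosine similarity
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)

    print(img_emb @ txt_emb.T)  # the matching caption should score highest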
-
Database of 16,000 Artists Used to Train Midjourney AI Goes Viral
It is a misconception that Adobe's models have not been trained on copyrighted work. Nobody should be repeating their marketing claims.
Adobe has not shown how they train the text encoders in Firefly, or what images were used for the text-based conditioning (i.e. "text to image") part of their image generation model. They are almost certainly using CLIP or T5, which are trained on LAION-2B (an image dataset with the very problems they are trying to address), C4 (a similarly encumbered text dataset), and similar corpora.
I welcome anyone who works at Adobe to simply answer this question of how they trained the text encoders for text conditioning and put it to rest. There is absolutely nothing sensitive about the issue, unless it exposes them in a lie.
So no chance. I think it's a big fat lie. They'd have to have made some other scientific breakthrough, which they didn't.
Using information from https://openai.com/research/clip and https://github.com/mlfoundations/open_clip, it's possible to investigate how likely it is that they could build a working text encoder using just their stock image dataset.
It's certainly not impossible, but it's impracticable. Trained on 248M images (roughly the size of Adobe Stock), CLIP gets 37% on ImageNet; trained on the 2,000M images of LAION, it reaches 71-80%. And even with 2,000M images, CLIP performs substantially worse than the approach Imagen uses for "text comprehension," which relies on essentially many billions more images and text tokens.
-
MetaCLIP – Meta AI Research
https://github.com/mlfoundations/open_clip/blob/main/docs/op...
-
COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
-
Is Nicholas Renotte a good guide for a person who knows nothing about ML?
Also, if you describe your task a bit more, we might be able to direct you to a fairly out-of-the-box solution; e.g. you might be able to use one of the pretrained models supported by https://github.com/mlfoundations/open_clip without any additional training.
-
Generate Image from Vector Embedding
The Stable Diffusion GitHub repo says it uses the “OpenCLIP-ViT/H” model (https://github.com/mlfoundations/open_clip) as a text encoder, and from my prior experience with CLIP, I have found that it is very easy to generate image and text embeddings (because CLIP is a multimodal model).
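For the text side, a minimal open_clip sketch (ViT-H-14 with the laion2b_s32b_b79k checkpoint is the OpenCLIP-ViT/H referred to above; note Stable Diffusion itself conditions on intermediate hidden states rather than this pooled vector, so this only illustrates embedding extraction):

    import torch
    import open_clip

    # Load the OpenCLIP ViT-H/14 model used by Stable Diffusion 2.x
    model, _, _ = open_clip.create_model_and_transforms(
        "ViT-H-14", pretrained="laion2b_s32b_b79k")
    tokenizer = open_clip.get_tokenizer("ViT-H-14")

    tokens = tokenizer(["an astronaut riding a horse"])
    with torch.no_grad():
        text_emb = model.encode_text(tokens)  # pooled text embedding
    print(text_emb.shape)  # [1, 1024] for ViT-H-14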
-
What's up in the Python community? – April 2023
https://replicate.com/pharmapsychotic/clip-interrogator
using:
cfg.apply_low_vram_defaults()
ci.interrogate_fast(image)
I tried lighter models like ViT-B/32 trained on LAION-400M, among others, but they are all very slow to load and use (model list: https://github.com/mlfoundations/open_clip).
I'm desperately looking for something more modest and lightweight.
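For reference, a fuller clip-interrogator sketch along those lines (Config, Interrogator, apply_low_vram_defaults and interrogate_fast are the library's actual API; the lighter ViT-B-32 checkpoint string and the image path are assumptions):

    from PIL import Image
    from clip_interrogator import Config, Interrogator

    # Assumed lighter backbone; format is "<arch>/<pretrained tag>" from open_clip
    cfg = Config(clip_model_name="ViT-B-32/laion2b_s34b_b79k")
    cfg.apply_low_vram_defaults()  # shrink settings for low-VRAM GPUs

    ci = Interrogator(cfg)
    image = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
    print(ci.interrogate_fast(image))  # faster, less exhaustive prompt guess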
-
Low accuracy on my CNN model.
A library that is very useful for this kind of application is timm. You may also find the feature representation provided by a CLIP model particularly powerful.
-
Looking for OpenAI CLIP alternative
Real-ESRGAN
-
AI-Powered Nvidia RTX Video HDR Transforms Standard Video into HDR Video
It's not exactly what you're after, as it's anime specific and you need to process the video yourself (e.g. disassemble it to frames, run the upscaler, then assemble the frames back into a movie file; a sketch of the per-frame step follows below), but Real-ESRGAN is really good:
https://github.com/xinntao/Real-ESRGAN/
It's pretty brilliant for cleaning up very old, low resolution anime.
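As a sketch of the per-frame upscaling step via the repo's Python API (this mirrors inference_realesrgan.py; the weights and frame paths are assumptions, and ffmpeg or similar would handle the disassemble/reassemble steps):

    import cv2
    from basicsr.archs.rrdbnet_arch import RRDBNet
    from realesrgan import RealESRGANer

    # RRDB backbone matching the RealESRGAN_x4plus weights
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                    num_block=23, num_grow_ch=32, scale=4)
    upsampler = RealESRGANer(
        scale=4,
        model_path="weights/RealESRGAN_x4plus.pth",  # assumed local path
        model=model,
        tile=256,   # tile large frames to bound VRAM use
        half=True)  # fp16 inference (GPU)

    frame = cv2.imread("frames/000001.png")  # one extracted frame
    output, _ = upsampler.enhance(frame, outscale=4)
    cv2.imwrite("frames_up/000001.png", output)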
-
Photorealistic Video Generation with Diffusion Models
Just a note you can run upscaling on your home desktop with Real-ESRGAN:
https://github.com/xinntao/Real-ESRGAN
-
What software to use for upscaling anime edits
-
What neural net for SISR?
Maybe Real-ESRGAN is a good fit? Even though it's a couple of years old.
-
Can't make concurrent calls to Model
-
Outis my beloved
I'm glad you noticed! I upscaled the icon from the wiki using Real-ESRGAN's 4xplus anime model, then photoshopped out the text. Worked far better than waifu2x.
-
ComicMerge (Beta testing version - SafeTensors)
A: Try using High-res Fix and R-ESRGAN 4x+ Anime6B as upscaler
-
Is there any way to upscale local files permanently using Nvidia's RT VSR?
Maybe try this one: https://github.com/xinntao/Real-ESRGAN. It may work even better.
-
YOASOBI Idol [3840 x 2160]
Screenshotted from the official music video, upscaled to 4K using a state-of-the-art ML model.
-
Compilation of (almost) all end of chapter panels
Do you happen to remember which chapter has that "scene"? You could also try to enhance it yourself; I did it using Real-ESRGAN, which is really easy to use.
What are some alternatives?
CLIP - CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
ESRGAN - ECCV18 Workshops - Enhanced SRGAN. Champion PIRM Challenge on Perceptual Super-Resolution. The training codes are in BasicSR.
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
SwinIR - SwinIR: Image Restoration Using Swin Transformer (official repository)
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
GFPGAN - GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
BSRGAN - Designing a Practical Degradation Model for Deep Blind Image Super-Resolution (ICCV, 2021) (PyTorch) - We released the training code!
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
waifu2x - Image Super-Resolution for Anime-Style Art
clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them
Real-ESRGAN-colab - A Real-ESRGAN model trained on a custom dataset