| | autodistill-metaclip | aphantasia |
|---|---|---|
| Mentions | 1 | 21 |
| Stars | 16 | 769 |
| Growth | - | - |
| Activity | 6.4 | 3.9 |
| Last Commit | 5 months ago | 7 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
autodistill-metaclip
- MetaCLIP – Meta AI Research
I have been playing with MetaCLIP this afternoon and made https://github.com/autodistill/autodistill-metaclip as a pip-installable version. The Facebook repository has some guidance, but you have to pull the weights yourself, save them, and so on.
My inference function (model.predict("image.png")) returns an sv.Classifications object that you can load into supervision for post-processing (e.g., getting the top-k results) [1].
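For reference, a minimal usage sketch is below. The MetaCLIP class name and the CaptionOntology pattern follow the usual autodistill convention, and the labels are hypothetical; check the package README for the exact interface.

```python
# Minimal sketch, assuming the autodistill convention of a model class
# plus CaptionOntology; the prompt/label pairs here are made up.
from autodistill.detection import CaptionOntology
from autodistill_metaclip import MetaCLIP

# Map text prompts to the class names the model should output.
base_model = MetaCLIP(
    ontology=CaptionOntology({"a photo of a cat": "cat", "a photo of a dog": "dog"})
)

results = base_model.predict("image.png")  # an sv.Classifications object

# supervision can post-process the result, e.g. take the top-k classes.
class_ids, confidences = results.get_top_k(1)
print(class_ids, confidences)
```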
The paper [2] notes the following in terms of performance:
> In Table 4, we observe that MetaCLIP outperforms OpenAI CLIP on ImageNet and average accuracy across 26 tasks, for 3 model scales. With 400 million training data points on ViT-B/32, MetaCLIP outperforms CLIP by +2.1% on ImageNet and by +1.6% on average. On ViT-B/16, MetaCLIP outperforms CLIP by +2.5% on ImageNet and by +1.5% on average. On ViT-L/14, MetaCLIP outperforms CLIP by +0.7% on ImageNet and by +1.4% on average across the 26 tasks.
[1] https://github.com/autodistill/autodistill-metaclip
aphantasia
- An AI-written, AI-illustrated, human-performed audio drama: Asteroid Annie and the Mushiblooms, Part 1 (Uncanny Robot Podcast)
- An audio drama written with NovelAI: Asteroid Annie and the Mushiblooms, Part 1
- DeadSeanKennedy - Black Sheep Supreme [Breakbeat Techno House Electro Indie] [2022]
A new music video I made for my latest release, "Junglehaus". I used the Aphantasia library from eps696 (https://github.com/eps696/aphantasia), feeding it the lyrics from the song and then editing together the best generations.
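A batch run like the one described could be scripted. Here is a rough sketch; the clip_fft.py entry point and its -t/--size flags are taken from the aphantasia README, so verify the exact interface against the repo, and the lyric file name is made up.

```python
# Rough sketch: render one Aphantasia generation per lyric line.
# clip_fft.py and its -t/--size flags are assumptions based on the
# aphantasia README; double-check the repo before running.
import subprocess
from pathlib import Path

# Hypothetical lyric sheet, one prompt per line.
lyrics = Path("junglehaus_lyrics.txt").read_text().splitlines()

for text in (line.strip() for line in lyrics):
    if not text:
        continue  # skip blank lines between verses
    subprocess.run(
        ["python", "clip_fft.py", "-t", text, "--size", "1280-720"],
        check=True,
    )
```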
- test
(Added Mar. 1, 2021) Aphantasia.ipynb, a Colab notebook by eps696 that uses an FFT (Fast Fourier Transform) image parameterization from Lucent/Lucid to generate images.
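For context on the FFT trick: instead of optimizing raw pixels, the image is parameterized as a spectrum of frequency coefficients, which tends to give smoother, less noisy optimization. A bare-bones PyTorch sketch of the idea, illustrative only and not the notebook's actual code:

```python
# Bare-bones FFT image parameterization in the spirit of Lucent/Lucid.
# Illustrative sketch only; not the notebook's code.
import numpy as np
import torch

def fft_image(h, w, sd=0.01):
    # Frequency grid matching the rfft2 layout of an h-by-w real image.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.rfftfreq(w)[None, :]
    freqs = np.sqrt(fx ** 2 + fy ** 2)
    # Boost low frequencies so optimization isn't dominated by noise.
    scale = torch.tensor(1.0 / np.maximum(freqs, 1.0 / max(h, w)),
                         dtype=torch.float32)
    # Real and imaginary spectrum parts are the learnable parameters.
    spectrum = (torch.randn(3, h, w // 2 + 1, 2) * sd).requires_grad_(True)

    def render():
        spec = torch.complex(spectrum[..., 0], spectrum[..., 1]) * scale
        img = torch.fft.irfft2(spec, s=(h, w))  # shape (3, h, w)
        return torch.sigmoid(img)  # squash into [0, 1]

    return spectrum, render

params, render = fft_image(256, 256)
# A CLIP-guided loop would optimize `params` so that render()'s CLIP
# image embedding matches the prompt's text embedding (loop omitted).
```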
- Batch render different prompts
- Saw u/R_is_Ris's post and it inspired me to post my own. I call it Glow Forest, for obvious reasons.
Made with Illustrip by Vadim Epstein (https://github.com/eps696/aphantasia), with FL Studio for the background ambience.
- Feeding in Politics: It Did Not Go As Planned
- AI - A love story // AI-generated video about the future of AI // prompt -> GPT-J-6B -> Aphantasia
GPT-J, from the wizards at EleutherAI, via Hugging Face: https://huggingface.co/EleutherAI/gpt-j-6B
Aphantasia, from Vadim Epstein (eps696): https://github.com/eps696/aphantasia
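The prompt -> GPT-J-6B step maps onto the standard transformers API. A sketch below, with arbitrary sampling settings and a made-up seed prompt; the generated text would then be fed to Aphantasia as its text prompt:

```python
# Sketch of the prompt -> GPT-J step with Hugging Face transformers.
# Sampling settings and the seed prompt are arbitrary choices.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = "AI - a love story."
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
story = tokenizer.decode(out[0], skip_special_tokens=True)
print(story)  # each sentence can become an Aphantasia prompt
```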
- Mario's Power-up (created with Aphantasia)
- I heard a bird sing in the dark of December. A magical thing.
Over the weekend I've been toying around with the amazing Aphantasia, using quotes about the months of the year as prompts. This is definitely my favorite of the whole set.
What are some alternatives?
clip-interrogator - Image to prompt with BLIP and CLIP
stylegan - StyleGAN - Official TensorFlow Implementation
open_clip - An open source implementation of CLIP.
DeOldify - A Deep Learning based project for colorizing and restoring old images (and video!)
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
big-sleep - A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
NumPyCLIP - Pure NumPy implementation of https://github.com/openai/CLIP
Colab-BigGANxCLIP
sam-clip - Use Grounding DINO, Segment Anything, and CLIP to label objects in images.
Queryable - Run OpenAI's CLIP model on iOS to search photos.
Text2LIVE - Official Pytorch Implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral)
StyleCLIP - Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)