| | clip-glass | aphantasia |
|---|---|---|
| Mentions | 13 | 21 |
| Stars | 177 | 769 |
| Growth | - | - |
| Activity | 0.0 | 3.9 |
| Latest commit | over 2 years ago | 7 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
clip-glass

- test: (Added Feb. 5, 2021) CLIP-GLaSS.ipynb - Colaboratory by Galatolo. Uses BigGAN (default) or StyleGAN to generate images. The GPT2 config is for image-to-text, not text-to-image. GitHub.
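CLIP-GLaSS stands for CLIP-Guided Generative Latent Space Search: it evolves latent vectors for a pretrained generator (BigGAN or StyleGAN) so that the generated image scores high CLIP similarity with the text prompt. The sketch below illustrates that search loop only; the `generate` and `clip_similarity` functions are toy stand-ins (identity and a negative squared distance), not the real generator or CLIP, and the simple evolution strategy here is a simplification of the project's actual genetic algorithm.

```python
import numpy as np

def generate(latent):
    """Toy stand-in for the generator (identity on the latent vector)."""
    return latent

def clip_similarity(image, target):
    """Toy stand-in for the CLIP text-image score: higher is better."""
    return -np.sum((image - target) ** 2)

def latent_space_search(target, dim=16, pop_size=32, generations=200,
                        sigma=0.1, seed=0):
    """Evolve latent vectors to maximize the similarity score.

    Each generation keeps the top 25% of the population and refills it
    with Gaussian mutations of those elites.
    """
    rng = np.random.default_rng(seed)
    population = rng.standard_normal((pop_size, dim))
    for _ in range(generations):
        scores = np.array([clip_similarity(generate(z), target)
                           for z in population])
        elite = population[np.argsort(scores)[-pop_size // 4:]]
        parents = elite[rng.integers(0, len(elite), pop_size)]
        population = parents + sigma * rng.standard_normal((pop_size, dim))
    scores = np.array([clip_similarity(generate(z), target)
                       for z in population])
    return population[np.argmax(scores)]

# With the toy fitness, the search recovers the target latent vector.
best = latent_space_search(target=np.ones(16))
```

Swapping the two stand-ins for a real generator and a real CLIP score is what turns this skeleton into the actual text-to-image search.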
- Image to text models: After a cursory search I found CLIP-GLaSS and CLIP-cap. I've used CLIP-GLaSS in a previous experiment but found the captions for digital/CG images quite underwhelming. This is understandable, since that is not what the model was trained on, but I'd still like to use a better model.
- [R] end-to-end image captioning: CLIP-GLaSS
- What CLIP-GLaSS thinks Ancient Egyptian computers would look like
- Text-to-image 3 Images For Text Photo Of Donald: The images were generated using this notebook. CLIP-GLaSS prompt: "Screenshot of a video game from the 1930s"
- [P] List of sites/programs/projects that use OpenAI's CLIP neural network for steering image/video creation to match a text description: The CLIP-GLaSS project has image-to-text functionality (I haven't tried it).
- For educational purposes: Text-to-image (3 runs with no cherry-picking, 6 images each) for the text "Photo of a Lamborghini painted purple and red", generated using CLIP-GLaSS with config=StyleGAN2_car_d, save_each=50, generations=1000. Link to notebook.
- Sharing CLIP magic based on OpenAI's blog post via a bit more accessible YT medium. Let me know what you think 🙈 ❤️: CLIP-GLaSS
- [R] [P] Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search. Link to code and Google Colab notebook for project CLIP-GLaSS is in a comment: GitHub for CLIP-GLaSS is here.
aphantasia

- An AI-written, AI-illustrated, human-performed audio drama: Asteroid Annie and the Mushiblooms, Part 1 (Uncanny Robot Podcast)
- An audio drama written with NovelAI: Asteroid Annie and the Mushiblooms, Part 1
- DeadSeanKennedy - Black Sheep Supreme [Breakbeat Techno House Electro Indie] [2022]: A new music video I made from my latest release "Junglehaus". I used the Aphantasia library from eps696 (https://github.com/eps696/aphantasia), feeding it the lyrics from the song and then editing together the best generations.
- test: (Added Mar. 1, 2021) Aphantasia.ipynb - Colaboratory by eps696. Uses FFT (Fast Fourier Transform) from Lucent/Lucid to generate images. GitHub. Twitter reference. Example #1. Example #2.
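The FFT trick borrowed from Lucid/Lucent is to parameterize the image in frequency space with amplitudes scaled roughly as 1/f, which biases the optimization toward smooth, natural-looking structure. A minimal NumPy sketch of that parameterization, assuming a simple 1/f^decay scaling; the real Aphantasia/Lucent code makes the spectrum a learnable PyTorch tensor and optimizes it against CLIP, which is omitted here.

```python
import numpy as np

def fft_image(h, w, decay=1.0, seed=0):
    """Sample a grayscale image from a random spectrum with 1/f scaling.

    Low frequencies get large amplitudes, high frequencies small ones,
    so the inverse FFT yields smooth, cloud-like structure.
    """
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(h)[:, None]       # vertical frequencies
    fx = np.fft.rfftfreq(w)[None, :]      # horizontal frequencies (real FFT)
    freqs = np.sqrt(fx ** 2 + fy ** 2)
    # Clamp the DC term to avoid division by zero, then apply 1/f^decay.
    scale = 1.0 / np.maximum(freqs, 1.0 / max(h, w)) ** decay
    spectrum = scale * (rng.standard_normal((h, w // 2 + 1))
                        + 1j * rng.standard_normal((h, w // 2 + 1)))
    img = np.fft.irfft2(spectrum, s=(h, w))
    # Normalize to [0, 1] for display.
    return (img - img.min()) / (img.max() - img.min())

img = fft_image(64, 64)
```

In the real pipeline the `spectrum` array is the trainable parameter: gradients from a CLIP similarity loss flow back through the inverse FFT into it.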
- Batch render different prompts
- Saw u/R_is_Ris's post, and it inspired me to post my own. I call it Glow Forest, for obvious reasons: Made with Illustrip by Vadim Epstein (https://github.com/eps696/aphantasia) and FL Studio for the background ambience.
- Feeding in Politics: It Did Not Go As Planned
- AI - A love story // AI-generated video about the future of AI // prompt -> GPT-J-6B -> Aphantasia: GPT-J is from the wizards at Eleuther.ai, via HuggingFace (https://huggingface.co/EleutherAI/gpt-j-6B). Aphantasia is from Vadim Epstein (eps696): https://github.com/eps696/aphantasia
- Mario's Power-up (created with Aphantasia)
- I heard a bird sing in the dark of December. A magical thing.: Over the weekend I've been toying around with the amazing Aphantasia, using quotes about the months of the year as prompts; this is definitely my favorite of the whole set.
What are some alternatives?
- a-PyTorch-Tutorial-to-Image-Captioning - Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
- stylegan - StyleGAN - Official TensorFlow Implementation
- meshed-memory-transformer - Meshed-Memory Transformer for Image Captioning. CVPR 2020
- DeOldify - A Deep Learning based project for colorizing and restoring old images (and video!)
- deep-daze - Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique was originally created by https://twitter.com/advadnoun
- big-sleep - A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
- stylized-neural-painting - Official Pytorch implementation of the preprint paper "Stylized Neural Painting", in CVPR 2021.
- Colab-BigGANxCLIP
- StyleCLIP - Using CLIP and StyleGAN to generate faces from prompts.
- Queryable - Run OpenAI's CLIP model on iOS to search photos.
- CLIP-Style-Transfer - Doing style transfer with linguistic features using OpenAI's CLIP.
- StyleCLIP - Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)