clip-glass VS meshed-memory-transformer

Compare clip-glass and meshed-memory-transformer to see how they differ.

clip-glass

Repository for "Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search" (by galatolofederico)
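As a rough illustration of what "CLIP-Guided Generative Latent Space Search" means, the sketch below scores randomly sampled latent codes by their CLIP similarity to a caption and keeps the best one. The `generator` function, the latent size of 128, and the plain random search are placeholder assumptions for this sketch; the actual repository implements a more elaborate search over pretrained generators, so treat this only as an outline of the scoring idea.

    import torch
    import clip  # pip install git+https://github.com/openai/CLIP.git

    device = "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    def generator(z):
        # Placeholder generator (an assumption for this sketch): a real setup
        # would decode the latent codes z with a pretrained image GAN.
        return torch.rand(z.shape[0], 3, 224, 224, device=device)

    caption = "a photo of a red sports car"
    text_features = model.encode_text(clip.tokenize([caption]).to(device))

    # Random search over latent codes, scored by CLIP similarity to the caption.
    z = torch.randn(64, 128, device=device)
    with torch.no_grad():
        image_features = model.encode_image(generator(z))
        sims = torch.cosine_similarity(image_features, text_features, dim=-1)

    best_latent = z[sims.argmax()]  # latent whose image best matches the caption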
Metric        clip-glass                            meshed-memory-transformer
Mentions      13                                    2
Stars         177                                   497
Growth        -                                     2.8%
Activity      0.0                                   0.0
Last commit   over 2 years ago                      over 1 year ago
Language      Python                                Python
License       GNU General Public License v3.0 only  BSD 3-clause "New" or "Revised" License
The number of mentions indicates the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

clip-glass

Posts with mentions or reviews of clip-glass. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-03.

meshed-memory-transformer

Posts with mentions or reviews of meshed-memory-transformer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-06-03.
  • [D] Data transfer(image features) between different models in separate docker containers
    2 projects | /r/MachineLearning | 3 Jun 2021
  • [R] end-to-end image captioning
    3 projects | /r/MachineLearning | 25 Feb 2021
    I could use some up-to-date models (e.g., this one: https://github.com/aimagelab/meshed-memory-transformer), but all those I looked into require a pre-processing step of feature/bounding-box generation. The problem is that I can't use an off-the-shelf bounding-box extraction model, as it would not perform well on the dataset I have (the images are not like COCO at all). So I was wondering if there is a relatively up-to-date architecture I can use that does not require this processing step; that is, an implementation that requires only inputs (images) and outputs (sentences).
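The post above asks for a captioner that consumes raw images directly. One common workaround, sketched below, is to replace the detector's per-box features with grid features from a CNN backbone, so that a transformer captioner receives a [batch, regions, channels] tensor without any bounding-box extraction step. The shapes and the use of an untrained ResNet-50 here are illustrative assumptions, not the repository's actual pipeline.

    import torch
    import torchvision

    # Backbone up to the last convolutional stage (pretrained weights would
    # normally be loaded; omitted here to keep the sketch minimal).
    resnet = torchvision.models.resnet50()
    backbone = torch.nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool + fc

    images = torch.randn(2, 3, 224, 224)      # a batch of raw images
    with torch.no_grad():
        fmap = backbone(images)               # [2, 2048, 7, 7] grid of features

    # Flatten the spatial grid into a sequence of "regions": [batch, 49, 2048].
    # This tensor plays the same role as the detector's per-box features, so a
    # transformer captioner can consume it without bounding-box extraction.
    grid_features = fmap.flatten(2).transpose(1, 2)
    print(grid_features.shape)  # torch.Size([2, 49, 2048])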

What are some alternatives?

When comparing clip-glass and meshed-memory-transformer you can also consider the following projects:

a-PyTorch-Tutorial-to-Image-Captioning - Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning

deep-daze - Simple command-line tool for text-to-image generation using OpenAI's CLIP and Siren (implicit neural representation network). The technique was originally created by https://twitter.com/advadnoun

stargan-v2 - StarGAN v2 - Official PyTorch Implementation (CVPR 2020)

aphantasia - CLIP + FFT/DWT/RGB = text to image/video

BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

stylized-neural-painting - Official PyTorch implementation of the paper "Stylized Neural Painting" (CVPR 2021).

catr - Image Captioning Using Transformer

StyleCLIP - Using CLIP and StyleGAN to generate faces from prompts.

py-bottom-up-attention - PyTorch bottom-up attention with Detectron2

CLIP-Style-Transfer - Doing style transfer with linguistic features using OpenAI's CLIP.

StyleCLIP - Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)