stargan-v2 vs meshed-memory-transformer
| | stargan-v2 | meshed-memory-transformer |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 3,414 | 497 |
| Growth | 0.7% | 2.8% |
| Activity | 0.0 | 0.0 |
| Latest commit | 12 months ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stargan-v2
- How to Run stargan2 on Google Colab
This version of StarGAN2 (coined 'Post-modern Style Transfer') is intended mostly for fellow artists, who rarely look at scientific metrics but rather need a working creative tool. At least, this is what I use nearly daily myself. Here are a few pieces made with it: Terminal Blink, Occurro, etc. Tested on PyTorch 1.4-1.8. Sequence-to-video conversions require FFmpeg. For more details, refer to the original implementation.
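For orientation, here is a minimal PyTorch inference sketch in the spirit of StarGAN2-style transfer: encode a style from a reference image, then re-render a source image in that style. The module path, class names, checkpoint keys, and tensor shapes below are illustrative assumptions, not the fork's actual API; see the repository for the real entry points.

```python
# Hypothetical StarGAN2-style inference sketch; the module layout, class
# names, and checkpoint format are assumptions for illustration only.
import torch

from stargan2.model import Generator, StyleEncoder  # hypothetical import

device = "cuda" if torch.cuda.is_available() else "cpu"

# Build the networks and load a pretrained checkpoint (path is a placeholder).
G = Generator().to(device).eval()
S = StyleEncoder().to(device).eval()
ckpt = torch.load("checkpoints/stargan2.ckpt", map_location=device)
G.load_state_dict(ckpt["generator"])
S.load_state_dict(ckpt["style_encoder"])

# content: source image batch; style_ref: reference image defining the target look.
content = torch.randn(1, 3, 256, 256, device=device)    # stand-in for a real image
style_ref = torch.randn(1, 3, 256, 256, device=device)
domain = torch.tensor([0], device=device)                # target domain index

with torch.no_grad():
    style = S(style_ref, domain)   # extract a style code from the reference
    fake = G(content, style)       # re-render the content in that style
```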
meshed-memory-transformer
- [D] Data transfer (image features) between different models in separate Docker containers
- [R] end-to-end image captioning
I could use some up-to-date models (e.g., this one: https://github.com/aimagelab/meshed-memory-transformer), but all those I looked into require a preprocessing step of feature/bounding-box generation. The problem is that I can't use an off-the-shelf bounding-box extraction model, as it would not perform well on the dataset I have (the images are not like COCO at all). So I was wondering if there is a relatively up-to-date architecture I can use that will not require this processing step; that is, an implementation that requires only inputs (images) and outputs (sentences).
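One common workaround (an assumption on my part, not something the repo provides out of the box) is to swap the detector's region features for grid features from an ordinary CNN backbone, so the pipeline consumes raw images end to end. A minimal sketch with torchvision:

```python
# Sketch: replace region (bounding-box) features with CNN grid features so a
# transformer captioner can take raw images directly; the captioner itself
# and the 2048 -> d_model projection are assumed, not shown.
import torch
import torchvision.models as models

# ResNet-50 truncated before global pooling: 224x224 input -> (B, 2048, 7, 7).
backbone = models.resnet50(pretrained=True)
backbone = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

images = torch.randn(2, 3, 224, 224)  # stand-in for a preprocessed image batch

with torch.no_grad():
    fmap = backbone(images)                          # (B, 2048, 7, 7)
    B, C, H, W = fmap.shape
    grid = fmap.view(B, C, H * W).permute(0, 2, 1)   # (B, 49, 2048)

# `grid` gives 49 "region-like" tokens per image that can stand in for the
# detection features a model like meshed-memory-transformer expects, after
# projecting the 2048-dim features down to the model dimension.
```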
What are some alternatives?
AvatarMe - Public repository for the CVPR 2020 paper AvatarMe and the TPAMI 2021 AvatarMe++
a-PyTorch-Tutorial-to-Image-Captioning - Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
ALAE - [CVPR2020] Adversarial Latent Autoencoders
clip-glass - Repository for "Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search"
stargan2 - StarGAN2 for practice
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
VIBE - Official implementation of CVPR2020 paper "VIBE: Video Inference for Human Body Pose and Shape Estimation"
catr - Image Captioning Using Transformer
py-bottom-up-attention - PyTorch bottom-up attention with Detectron2