rtic-gcn-pytorch vs OFA
| | rtic-gcn-pytorch | OFA |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 20 | 2,331 |
| Growth | - | 1.0% |
| Activity | 0.0 | 2.8 |
| Latest commit | over 2 years ago | 15 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars. Activity: a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects being tracked.
rtic-gcn-pytorch
OFA
- [R][P] Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework + VQA Hugging Face Spaces Demo
  GitHub: https://github.com/OFA-Sys/OFA
- OFA: a model that handles text-to-image generation as well as other tasks
  From: "[R] Paper: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework. Shocking performance in text-to-image synthesis and open-domain tasks."
What are some alternatives?
- clean-code-dotnet - :bathtub: Clean Code concepts and tools adapted for .NET
- ImageNet21K - Official PyTorch implementation of "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)
- BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
- GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
- clean-code-javascript - :bathtub: Clean Code concepts adapted for JavaScript
- ONE-PEACE - A general representation model across vision, audio, and language modalities. Paper: "ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities"
- MAGIC - Language Models Can See: Plugging Visual Controls in Text Generation
- UPop - [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers