OFA vs ONE-PEACE

Compare OFA and ONE-PEACE and see how they differ.

OFA

Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework (by OFA-Sys)

ONE-PEACE

A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities (by OFA-Sys)
              OFA                  ONE-PEACE
Mentions      3                    2
Stars         2,323                838
Growth        2.4%                 6.4%
Activity      2.8                  8.6
Last commit   3 days ago           5 months ago
Language      Python               Python
License       Apache License 2.0   Apache License 2.0
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
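The exact activity formula is not published; as a purely hypothetical illustration, the Python sketch below scores a commit history with exponential decay so that recent commits count for more, which is the behavior described above. The half-life constant and the scoring function are assumptions, not part of the source.

    import math
    import time

    # Hypothetical recency-weighted activity score. The comparison site's
    # real formula is unknown; this only illustrates the idea that recent
    # commits have higher weight than older ones.

    HALF_LIFE_DAYS = 30.0  # assumed decay half-life, not from the source

    def activity_score(commit_timestamps, now=None):
        """Sum exponentially decayed weights over commit times (unix seconds)."""
        now = time.time() if now is None else now
        decay = math.log(2) / (HALF_LIFE_DAYS * 86400)
        return sum(math.exp(-decay * (now - t)) for t in commit_timestamps)

    # Ten commits in the last week outscore ten commits from a year ago.
    now = time.time()
    recent = [now - d * 86400 for d in range(1, 11)]
    stale = [now - (365 + d) * 86400 for d in range(1, 11)]
    print(activity_score(recent, now) > activity_score(stale, now))  # True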

What are some alternatives?

When comparing OFA and ONE-PEACE, you can also consider the following projects:

ImageNet21K - Official PyTorch implementation of the paper "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)

Multimodal-GPT - A vision and language model for dialogue with humans

GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"

ALPRO - Align and Prompt: Video-and-Language Pre-training with Entity Prompts

MAGIC - Language Models Can See: Plugging Visual Controls in Text Generation

EVA - EVA Series: Visual Representation Fantasies from BAAI

UPop - [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.

unilm - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities