fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python. (by pytorch)

Fairseq Alternatives

Similar projects and alternatives to fairseq

NOTE: The number of mentions on this list indicates mentions in common posts plus user-suggested alternatives. Hence, a higher number means a better fairseq alternative or a closer match.


Reviews and mentions

Posts with mentions or reviews of fairseq. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-10-08.
  • BART: Denoising Seq2Seq Pre-training for NLG (explained)
    news.ycombinator.com | 2021-10-08
  • [R] Facebook & CMU’s Zero-Shot VideoCLIP Outperforms Fully-Supervised SOTA Methods for Video-Text Understanding
    Code for https://arxiv.org/abs/2109.14084 found: https://github.com/pytorch/fairseq/tree/main/examples/MMPT
  • Fairseq S^2: A Scalable and Integrable Speech Synthesis Toolkit
    news.ycombinator.com | 2021-09-16
  • Facebook AI Introduces GSLM (Generative Spoken Language Model), A Textless NLP Model That Breaks Free Completely of The Dependence on Text for Training
    GSLM Paper | Expressive Resynthesis Paper | Prosody-Aware GSLM Paper | Code and Pretrained Models
  • HuBERT: Speech representations for recognition & generation (upgraded Wav2Vec by Facebook)
  • Wav2vec Unsupervised (wav2vec-U) is a framework for building speech recognition
    news.ycombinator.com | 2021-05-21
  • [HELP] I want to translate from Spanish to English. Is there any free api for this with better privacy?
    reddit.com/r/api | 2021-05-18
    GitHub link https://github.com/pytorch/fairseq/tree/master/examples/m2m_100 (a minimal M2M-100 translation sketch follows this list)
  • I’ve struck AI gold. Finally, an alternative to AID and indeed all of OpenAI that I haven’t heard anyone mention yet and is IMO just as good as Dragon and ready to use right now
    reddit.com/r/AIDungeon | 2021-05-01
    The model is made by Facebook, but it is released publicly and anybody can run it. You can download it and run it on your own machine (if you have the hardware and technical skill, which the vast majority of people probably do not). You can get it from GitHub here: https://github.com/pytorch/fairseq/blob/master/examples/megatron_11b/README.md
  • [2104.01027] Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training
    Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
  • "[D]" Speech to text for a Indigenous language
    I guess Wav2Vec2 would be your best bet for low-resource STT, but don't expect much, especially for conversational speech. (A minimal wav2vec 2.0 inference sketch follows this list.)
  • Largest publicly-available trained model checkpoint?
    reddit.com/r/mlscaling | 2021-03-15
    This Megatron model is 11B parameters and is trained, supposedly: https://github.com/pytorch/fairseq/tree/master/examples/megatron_11b
  • Zero-3 Offload: Scale DL models to trillion parameters without code changes
    news.ycombinator.com | 2021-03-13
    Support for this was also added to [Fairscale](https://fairscale.readthedocs.io/en/latest/) and [Fairseq](https://github.com/pytorch/fairseq) last week. In particular, the Fairscale implementation can be used in any PyTorch project without requiring the use of the Deepspeed trainer. (A minimal FairScale FSDP sketch follows this list.)
  • [D] Where are long-context Transformers?
    There has not been a new long-transformer GPT model, nor a BERT one. NMT frameworks have not incorporated implementations of long transformers (except fairseq with Linformer, but both are from Facebook). Also, in WMT 2020 I think there was a single long transformer (I'm thinking of Marcin Junczys-Dowmunt's "WMT or it didn't happen").
  • What are some good speech recognition papers I can implement?
    fairseq
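
For the Spanish-to-English translation question above, here is a minimal sketch of running M2M-100 via the Hugging Face transformers port of the fairseq checkpoint (the linked fairseq example uses fairseq's own CLI instead); the choice of the smallest released checkpoint, facebook/m2m100_418M, and the sample sentence are illustrative assumptions.

```python
# Minimal sketch: Spanish -> English translation with the M2M-100 checkpoint,
# loaded through the Hugging Face `transformers` port rather than fairseq-generate.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "facebook/m2m100_418M"  # smallest released M2M-100 variant (assumption)
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "es"  # source language: Spanish
inputs = tokenizer("¿Dónde está la biblioteca?", return_tensors="pt")

generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("en"),  # force English as the target
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
# expected output: something like ['Where is the library?']
```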
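
For the low-resource speech-to-text suggestion above, here is a minimal sketch of wav2vec 2.0 inference, again via the transformers port rather than fairseq's own examples/wav2vec scripts; the checkpoint name, the file path, and the 16 kHz mono input are illustrative assumptions.

```python
# Minimal sketch: greedy CTC transcription with a pretrained wav2vec 2.0 model.
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "facebook/wav2vec2-base-960h"  # English checkpoint, for illustration only
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

speech, sample_rate = sf.read("sample.wav")  # assumes 16 kHz mono audio
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(predicted_ids))
```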
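
For the ZeRO-3 / FairScale mention above, here is a minimal sketch (not fairseq's own integration) of wrapping a model in FairScale's FullyShardedDataParallel inside a plain PyTorch loop. It assumes a GPU and an already-initialised torch.distributed process group (e.g. launched via torchrun); the CPU-offload flags vary between FairScale versions and are omitted here.

```python
# Minimal sketch: ZeRO-3-style sharded training with FairScale's FSDP wrapper.
import torch
import torch.nn as nn
from fairscale.nn import FullyShardedDataParallel as FSDP

# assumes torch.distributed is already initialised and a CUDA device is available
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).cuda()
model = FSDP(model)  # parameters are sharded across ranks

# build the optimizer after wrapping, so it sees the sharded parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")
loss = model(x).pow(2).mean()  # dummy objective, just to exercise the wrapper
loss.backward()
optimizer.step()
```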

Stats

Basic fairseq repo stats
Mentions: 17
Stars: 14,127
Activity: 9.5
Last commit: about 13 hours ago

pytorch/fairseq is an open source project licensed under the MIT License, which is an OSI-approved license.
