Fairseq Alternatives
Similar projects and alternatives to fairseq
- gpt-neox: An implementation of model-parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
- transformers: 🤗 State-of-the-art machine learning for PyTorch, TensorFlow, and JAX.
- DeepSpeed: A deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- Pytorch: Tensors and dynamic neural networks in Python with strong GPU acceleration.
- text-to-text-transfer-transformer: Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".
- taro: An open cross-platform, cross-framework solution that supports building WeChat/JD/Baidu/Alipay/ByteDance/QQ mini programs, H5, and React Native apps using frameworks such as React, Vue, and Nerv. https://taro.zone/
- widevine-l3-decryptor: A Chrome extension that demonstrates bypassing Widevine L3 DRM.
- Auto.js: An automation and workflow JavaScript IDE on Android.
- complete-javascript-course: Starter files, final projects, and FAQ for my Complete JavaScript course.
- stylegan2-pytorch: The simplest working implementation of StyleGAN2, a state-of-the-art generative adversarial network, in PyTorch, enabling everyone to experience disentanglement.
fairseq reviews and mentions
- [P] BART denoising language modeling in JAX/Flax
Due to high demand for a BART pretraining implementation, I created a pretraining script for BART in JAX/Flax. It has been approved for merging into huggingface/transformers; I will archive this repo once it is merged.
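For context, BART pretrains by corrupting text and learning to reconstruct it, and its main noising scheme is text infilling: spans of tokens are collapsed into a single mask token. Below is a minimal, illustrative sketch of that corruption step; the function name and parameters are placeholders rather than code from the linked repo, with span lengths following the Poisson(λ=3) choice from the BART paper.

```python
# Illustrative sketch of BART-style text infilling (not from the repo above):
# sample token spans and collapse each into a single <mask> token.
import numpy as np

def text_infilling(tokens, mask_token="<mask>", mask_ratio=0.3, lam=3.0, seed=0):
    rng = np.random.default_rng(seed)
    budget = int(len(tokens) * mask_ratio)  # roughly how many tokens to corrupt
    out, i = [], 0
    while i < len(tokens):
        if budget > 0 and rng.random() < mask_ratio:
            span = int(min(budget, max(1, rng.poisson(lam))))  # Poisson span length
            out.append(mask_token)  # the whole span becomes one mask token
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return out

print(text_infilling("the quick brown fox jumps over the lazy dog".split()))
```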
- [D] Hey Reddit! We're a bunch of research scientists and software engineers and we just open-sourced a new state-of-the-art AI model that can translate between 200 different languages. We're excited to hear your thoughts, so we're hosting an AMA on 07/21/2022 @ 9:00AM PT. Ask Us Anything!
All 202 languages covered by NLLB are already available (models: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/modeling; FLORES and all of the other datasets we created: https://github.com/facebookresearch/flores), including Zulu. You can also try our Zulu translation in the Content Translation tool live on Wikipedia! For the "coming soon" part, I guess you are talking about the demo? New languages are rolling out and will be live in the coming weeks. [angela]
We have a bunch! The model and data are available here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/modeling, LASER3 here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/laser_distillation, training data here: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/data, FLORES and our other human-translated datasets here: https://github.com/facebookresearch/flores, and an entire modular pipeline for data cleaning here: https://github.com/facebookresearch/stopes. It's also available on HuggingFace! [angela]
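Since the checkpoints are on HuggingFace, a minimal translation sketch through the transformers API might look like the following. This assumes the facebook/nllb-200-distilled-600M checkpoint and FLORES-200 language codes ("eng_Latn", "zul_Latn"); it is an illustration, not the team's reference usage.

```python
# Minimal sketch: English -> Zulu with the distilled NLLB-200 checkpoint
# via Hugging Face transformers (assumes transformers + PyTorch installed).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("No language left behind.", return_tensors="pt")
generated = model.generate(
    **inputs,
    # Force the decoder to start with the FLORES-200 code for Zulu.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("zul_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```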
Yes! We are really motivated by translation as an actual technology that people need (actually, part of our work was interviewing many different native speakers of low-resource languages). As part of that, we do experiment with distillation. That's detailed in Section 8.6 of our paper: https://arxiv.org/pdf/2207.04672.pdf where we compare two different distillation approaches. We also describe how we used distillation to create models that are serving Wikipedia's Content Translation tool (which you can use to write new Wikipedia articles), and then distillation of the full NLLB-200 model. These distilled models are available for download on github: https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/modeling. For your question around productionization, we did partner with our production translation team to integrate the modeling techniques and learnings from the NLLB project into production translation. These are live on Facebook and Instagram today for some languages! [angela]
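As a rough illustration of the word-level variant of distillation discussed there (the student is trained to match the teacher's per-token output distribution, whereas sequence-level distillation trains the student directly on teacher-generated translations), here is a generic PyTorch sketch. It is not the NLLB training code, and `alpha`/`temperature` are illustrative hyperparameters.

```python
# Generic word-level knowledge distillation loss (illustrative, not NLLB code).
import torch.nn.functional as F

def word_level_kd_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence from the teacher's token distribution to the student's."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean") * t * t

# Typical usage: blend with the ordinary cross-entropy on reference tokens, e.g.
#   loss = (1 - alpha) * ce_loss + alpha * word_level_kd_loss(s_logits, t_logits)
```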
You can check out some of our materials and open-sourced artifacts here:
- Our latest blog post: https://ai.facebook.com/blog/nllb-200-high-quality-machine-translation
- Project overview: https://ai.facebook.com/research/no-language-left-behind/
- Product demo: https://nllb.metademolab.com/
- Research paper: https://research.facebook.com/publications/no-language-left-behind
- NLLB-200: https://github.com/facebookresearch/fairseq/tree/nllb
- FLORES-200: https://github.com/facebookresearch/flores
- LASER3: https://github.com/facebookresearch/LASER

Joining us today for the AMA are:
- Angela Fan (AF), Research Scientist
- Jean Maillard (JM), Research Scientist
- Maha Elbayad (ME), Research Scientist
- Philipp Koehn (PK), Research Scientist
- Shruti Bhosale (SB), Software Engineer

We'll be here from 07/21/2022 @ 09:00AM PT - 10:00AM PT. Thanks, and we're looking forward to answering your questions!
- Meta has open-sourced a system for direct translation between 204 languages
- No Language Left Behind
Blog post: https://ai.facebook.com/blog/nllb-200-high-quality-machine-t...
Paper: https://research.facebook.com/publications/no-language-left-...
Github: https://github.com/facebookresearch/fairseq/tree/nllb/
We release several smaller models as well: https://github.com/facebookresearch/fairseq/tree/nllb/exampl... that are 1.3B and 615M parameters. These are usable on smaller GPUs. To create these smaller models but retain good performance, we use knowledge distillation. If you're curious to learn more, we describe the process and results in Section 8.6 of our paper: https://research.facebook.com/publications/no-language-left-...
We tokenize with the FLORES-200 SPM model, correct. To generate from the model, check out the instructions here: https://github.com/facebookresearch/fairseq/tree/nllb/exampl...
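For the tokenization step, a minimal sketch with the sentencepiece package could look like this; the model file name below is a placeholder for wherever the FLORES-200 SPM model was downloaded, following the instructions linked above.

```python
# Minimal sketch: encode/decode with a SentencePiece model
# (pip install sentencepiece). The path below is a placeholder.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="flores200_spm.model")  # placeholder path
pieces = sp.encode("No language left behind.", out_type=str)
print(pieces)             # subword pieces
print(sp.decode(pieces))  # round-trips back to the original text
```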
Stats
facebookresearch/fairseq is an open-source project licensed under the MIT License, which is an OSI-approved license.